COVID-19 in Liver and Kidney Transplant Recipients: An Initial Single-center Experience in Iran

Introduction

The Coronavirus Disease 2019 (COVID-19) pandemic has had an immense impact worldwide, giving rise to significant morbidity and mortality (1). Besides age, underlying disease represents a major risk factor for COVID-19 severity, with kidney and liver transplant recipients being highly susceptible to a severe course of the disease despite not being at greater risk of infection relative to the general population (1)(2)(3). In fact, patients with solid organ transplants have a higher rate of mortality (~20%) due to COVID-19 than their non-transplant counterparts (1,4). The clinical presentation and outcome of COVID-19 in solid organ transplant recipients vary widely between patient groups and countries (1). Furthermore, limited data are available about the state of such patients (particularly liver transplant recipients) in Iran, with only a small number of cases having been studied (5)(6)(7). Therefore, the present study aimed to shed light on the situation of kidney and liver transplant recipients admitted with COVID-19 in Iran.

Methods

We report 30 kidney and liver transplant recipients who were admitted to Abu Ali Sina Organ Transplant Center (the largest transplantation referral center in the Middle East) due to COVID-19 between March 20 and October 20, 2020. All adult kidney/liver transplant recipients who had a positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nasopharyngeal RT-PCR assay and required admission were included. A second SARS-CoV-2 RT-PCR assay was performed in our hospital for further confirmation.

Results and Discussion

In this cross-sectional study of COVID-19 in kidney and liver transplant recipients, 30 adults with a mean age of 52 ± 11.75 years were included. In this population, 76.70% were male and 23.30% were female. The mean Body Mass Index (BMI) was 25.38 ± 3.21 kg/m². The majority of the patients were kidney transplant recipients (70%), while 30% were liver transplant recipients.

The mean temperature, heart rate, respiratory rate, systolic blood pressure, diastolic blood pressure, oxygen saturation, and Glasgow Coma Scale (GCS) score were 36.94 ± 0.52°C, 92.83 ± 18.95 beats/min, 18.76 ± 1.85 breaths/min, 129.23 ± 22.19 mmHg, 77.90 ± 18.33 mmHg, 89.60 ± 11.22%, and 14.80 ± 0.61, respectively. Notably, four patients had prolonged capillary refill times, three of whom were severely dehydrated. Also, 40% of the patients required wheelchair assistance at the time of presentation. The comorbidities of the patients prior to admission are represented in Figure 1. Figure 2 summarizes the frequency of different complications and conditions diagnosed during hospitalization among these patients. Clearly, the most frequent complication was acute kidney injury.
There were nine recorded COVID-19-related deaths in this population during the study period, representing a 30% in-hospital mortality rate. This comprised six of the 21 kidney transplant patients (28.6%) and three of the nine liver transplant patients (33.3%). Notably, the majority of these patients experienced acute kidney injury prior to their death: five of the six in the former group and two of the three in the latter group. The remaining 21 patients were discharged, and complete recovery was recorded for all of them at the three-month follow-up.

Among our sample population of solid organ transplant patients, the most prominent signs and symptoms were fatigue and shortness of breath (each 73.3%), followed by fever and headache (each 53.3%), nausea/vomiting (50%), myalgia (43.3%), and cough (40%). Notably, 36.7% of our patients had altered consciousness. Compared with reports on general Chinese and Iranian populations, our patients had a fairly higher rate of shortness of breath and fatigue, which is probably explained by the underlying condition and highlights a more severe disease course in these patients (8,9). Among kidney transplant COVID-19 patients, the most frequently reported symptoms are fever (85%), dry cough (70%), myalgia (60%), and dyspnea (57%) (10). Variations may be explained by differences in study populations, with our results being limited to those solid organ transplant recipients who required hospitalization.

Among our study population, the most frequent complication was acute kidney injury (AKI), which occurred in 60% of the patients. This is comparable with the reported rate of AKI in the general population of hospitalized COVID-19 patients (56.9%) (11), implying that solid organ transplant recipients are not at greater risk of AKI than non-transplant recipients. Our result is also comparable with that of the study by Monfared et al., where 12 out of 22 (~55%) kidney transplant recipients with COVID-19 in Rasht, Iran, developed AKI during hospitalization (6). However, a notable finding of our study was that roughly 78% of the liver or kidney transplant recipients who died due to COVID-19 experienced AKI, compared with a rate of 53% among those who recovered. This indicates the necessity of paying greater attention to this complication as a marker associated with fatality.
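For the reader's convenience, a quick arithmetic check of these proportions from the counts stated above (a sketch, not part of the original analysis; only the counts given in the text are used):

```python
# Sanity check of the AKI proportions stated above, using only the
# counts given in the text: 30 patients, 9 deaths, AKI in 60% overall,
# and AKI in 5/6 kidney and 2/3 liver transplant deaths.
total_patients = 30
deaths = 9
aki_total = int(0.60 * total_patients)               # 18 patients with AKI
aki_among_deaths = 5 + 2                             # 7 of the 9 deaths
aki_among_recovered = aki_total - aki_among_deaths   # 11 of 21 survivors

print(f"AKI among deaths:    {aki_among_deaths}/{deaths} = "
      f"{aki_among_deaths / deaths:.0%}")            # ~78%, as reported
print(f"AKI among recovered: {aki_among_recovered}/{total_patients - deaths} "
      f"= {aki_among_recovered / (total_patients - deaths):.0%}")
# 11/21 is about 52%, consistent with the ~53% reported in the text.
```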
In Iran, the in-hospital mortality rate of COVID-19 has been reported as 24.4% in a national retrospective cohort study (9). According to the literature, solid organ transplant recipients have a higher rate of mortality (~20%) due to COVID-19 than their non-transplant counterparts (1,4). This was corroborated by our results, where an in-hospital mortality rate of 30% was recorded, comprising 28.6% of kidney transplant patients and 33.3% of liver transplant patients. This is in line with the mortality rate of 27.27% reported in a similar Iranian study involving 22 kidney transplant recipients hospitalized due to COVID-19 (6), and is also comparable with the rate of 29.2% reported among kidney transplant patients in Scotland (12). The slightly higher mortality rate in the Iranian studies relative to some research from other countries involving solid organ transplant recipients with COVID-19 may be explained by differences in healthcare systems. Our relatively high mortality rate may also be explained by the high mean age (52 ± 11.75 years) of our study group, considering that COVID-19 mortality increases with age (13). Nonetheless, what is clear from our study is that the situation of hospitalized liver and kidney transplant patients afflicted with COVID-19 may be worse in Iran than in other countries, and further efforts must be made to prevent and, when necessary, manage this disease in this high-risk population group.

It can be concluded that, following the trend described in studies from other countries, liver and kidney transplant patients afflicted with COVID-19 in Iran experience a higher rate of severe disease and mortality than the general population.

Footnotes

Authors' Contribution: NZS: study concept and design, data acquisition, data analysis, drafting, and revising the paper. MK: contributed to conceiving and designing the study, interpreting the data, commenting on drafts, and making significant revisions to the paper. SAH: contributed to designing the study, interpreting the data, and making significant revisions to the paper. KM: contributed to designing the study, interpreting the data, and making significant revisions to the paper.

Conflict of Interests: The authors declare no potential conflicts of interest related to the research.
Immunological and Inflammatory Impact of Non-Intubated Lung Metastasectomy

Background: We hypothesized that video-assisted thoracic surgery (VATS) lung metastasectomy under non-intubated anesthesia may have a lesser immunological and inflammatory impact than the same procedure under general anesthesia. Methods: Between December 2005 and October 2015, 55 patients with pulmonary oligometastases (at the first episode) successfully underwent VATS metastasectomy under non-intubated anesthesia. Lymphocyte subpopulations and interleukins 6 and 10 were measured at different intervals and compared with those of a control group composed of 13 patients with similar clinical features who refused non-intubated surgery. Results: The non-intubated group demonstrated a lesser reduction of natural killer lymphocytes at 7 days from the procedure (p = 0.04) compared to control. Furthermore, the group showed a lower release of interleukin 6 after 1 (p = 0.03), 7 (p = 0.04), and 14 (p = 0.05) days. There was no mortality in either group. The major morbidity rate was significantly higher in the general anesthesia group: 3 (5%) vs. 3 (23%) (p = 0.04). The median hospital stay was 3.0 vs. 3.7 days (p = 0.033), and the estimated costs of the non-intubated procedure were significantly lower, even excluding the hospital stay. Conclusions: VATS lung metastasectomy under non-intubated anesthesia had a significantly lesser impact on both the immunological and inflammatory response compared to the traditional procedure under intubated general anesthesia.

Introduction

The increasing evolution of non-intubated thoracic surgery has allowed the execution of progressively more complicated operations in patients with different pathologies [1][2][3][4][5][6]. Our program of non-intubated thoracic surgery, named the Awake Thoracic Surgery Research Group, is, to our knowledge, the oldest surgical program specifically created for this purpose by one of us (TCM), who is still the main coordinator [7]. To date, more than one thousand non-intubated procedures have been carried out in our department [8]. Surgery of lung metastases has been performed since the beginning of our experience [9]. Early operations were done under epidural anesthesia and three-port video-assisted thoracic surgery (VATS) [10], but starting from 2005, lung metastasectomies have preferably been accomplished through a single thoracoscopic access under non-intubated anesthesia [11].

Traditional intubated surgery [12,13] and, moreover, one-lung ventilation [14][15][16] have demonstrated several important adverse effects on both systemic inflammation and immunology, thus facilitating postoperative infections and cancer recurrence [17][18][19]. Conversely, the effects of non-intubated operations have been extensively evaluated over the years, disclosing intriguing implications for inflammatory stress [20] and immunological response [21]. As a matter of fact, these operations have proved capable of generating a lower level of inflammation and a lesser degree of immunologic depression than the traditional ones [22,23]. On these bases, we think that the use of non-intubated anesthesia appears particularly suitable in the surgery of oligometastatic patients. Herein, we analyzed patterns of inflammatory and immunological response after lung metastasectomy carried out under non-intubated anesthesia.

Results

Demographic and pathological features of the two groups were homogeneous, as shown in Table 1.

Immunological Impact

Postoperative immunologic trends are shown in Table 2 and in Figure 1.
A representative fluorescence-activated cell sorting (FACS) photo is shown in Figure 2. As expected, the total leukocyte count increased after surgery in both groups. However, we found a more rapid decrement in the non-intubated group, although without reaching the between-group significance threshold (p = 0.06). The total lymphocyte count showed a lesser drop in the non-intubated group on both post-operative day 1 (p = 0.05) and post-operative day 7 (p = 0.05), with the non-intubated group also displaying a nearly significant, more rapid restoration of the baseline value. Among the subpopulations in the non-intubated group, there was a significantly lesser reduction of natural killer lymphocytes at 7 days following the procedure (p = 0.04) compared to the intubated group (Figure 1). The other subpopulations, on the other hand, did not present significant differences between groups.

Inflammatory Impact

The postoperative variations of interleukin 6 and interleukin 10 are reported in Table 3. As expected, the values increased rapidly in the postoperative period, persisting above the baseline values for the whole observation period. However, interleukin 6 showed a more significant increment in the intubated group, starting from day 1 (between-group difference p = 0.03) and persisting at day 7 (p = 0.04) and day 14 (p = 0.05) (Figure 1). No differences between groups were found in interleukin 10 levels.

Morbidity

There was neither in-hospital nor 30-day postoperative mortality in either group. The major morbidity rate was significantly higher in the intubated group: 3/55 (5%) in the non-intubated group vs. 3/13 (23%) in the intubated group (p = 0.04). In the non-intubated group, we observed only two patients with persistent air leak and one with arrhythmia, whereas in the intubated group two patients developed pneumonia and one had a persistent air leak. The median hospital stay was 3.0 vs. 3.7 days (p = 0.033), but even excluding the hospital stay, the estimated costs of the non-intubated procedures were significantly lower (median expenses: €3100 vs. €3900; p = 0.03).

Discussion

The morbidity rate after thoracic surgery is often related to one-lung ventilation [14][15][16][24], although mitigated by minimally invasive approaches [25,26]. In particular, there is increasing evidence that one-lung ventilation might generate a number of anatomic changes in both the dependent and non-dependent lungs. Their effects are similar to a compartmental inflammatory injury [27][28][29][30][31][32][33] that may impact the immunological response. In the present study, we found that the non-intubated procedure can achieve successful results with a significantly lower morbidity rate. The exiguous number of intubated patients did not allow strong conclusions to be drawn.
However, we observed a significantly lower decrement of natural killer lymphocytes at day 7 as well as a significant attenuation of the interleukin 6 response. Avoidance of one-lung ventilation may also have contributed to the more physiologic lymphocyte response observed in non-intubated patients. The effects of one-lung ventilation on natural killer activity have been known since 1993 [34]. Furthermore, other authors [35][36][37] have shown that one-lung ventilation can evoke a cascade of oxidative changes, eventually resulting in a compartmental release of pro-inflammatory mediators including interleukin 6. The activation and secretion of this mediator could lead to a transient increase of plasma cortisol levels, interfering with natural killer activity [38,39].

This immune-depressive effect induced by one-lung ventilation may also have an impact on oncological conditions. It is not rare that patients operated on for lung metastases rapidly develop an unexpected new lung metastasis [18]. This may be due to the presence of occult metastases that grew rapidly owing to the lack of immune control associated with postoperative immunologic depression [40][41][42]. In our previous study, we did not find significant differences in postoperative survival in patients undergoing colorectal lung metastasectomy [11], but a larger study sample with longer follow-up, and hopefully on a randomized basis, may well achieve different results.

The surgery of lung metastases is a topic that has always attracted our attention [43][44][45][46][47]. Since 2000, we have run a program of VATS operations under thoracic epidural anesthesia in awake and collaborative patients affected by different pathologies [8]. To our knowledge, this is the oldest surgical program specifically created for this purpose. Confidence in this kind of procedure is now quite high and increasingly recognized all over the world. Despite the surgical pneumothorax, the evaluation of vital parameters showed satisfactory arterial oxygenation both intra- and postoperatively [11]. This allowed an immediate resumption of many daily activities, faster recovery, shorter hospitalization, and lower costs. The further data presented in this paper on the inflammatory and immunological response may contribute to a rationale for the lower morbidity and increase confidence in this kind of procedure. We acknowledge that this study has evident limitations due to its non-randomized nature and small control group. However, we regard it as an observational study prior to reaching more robust evidence through more structured and controlled investigations.

Materials and Methods

Between December 2004 and October 2015, a total of 55 patients referred to our center for pulmonary oligometastases successfully underwent uniportal VATS lung metastasectomy under non-intubated anesthesia. Clinical features of the patient cohort are summarized in Table 1. Thirteen patients scheduled in the same period for the same procedure who refused non-intubated anesthesia were used as a control group. They underwent a traditional VATS procedure under general anesthesia with one-lung ventilation. The study was a single-center, retrospective matched analysis between the non-intubated group and a control group undergoing metastasectomy under intubated general anesthesia.
Inclusion criteria for non-intubated surgery were the patient's preference, generic indications for non-intubated anesthesia [9], and the presence of peripheral oligometastases (no more than two) at the first episode, resectable with a wedge resection. Bilateral lesions were approached in two separate sessions on different days. This study was submitted to and approved by the Internal Review Board at Tor Vergata University of Rome with the authorization code 627/15.

Electrocardiogram, pulse oximetry, systemic and central venous blood pressure, body temperature, arterial blood gases, end-tidal CO2, and bispectral index were continuously monitored during the operation [48]. Just before the procedure, a 5 mL solution of 2% lidocaine was aerosolized for 5 min to prevent the cough reflex. During the operation, the patient inhaled O2 through a Venturi mask to maintain saturation greater than 90%. Intercostal block was routinely achieved by separate local injections of lidocaine 2% (4 mg/kg) and ropivacaine 7.5% (2 mg/kg). All intrathoracic phases were well tolerated with intraoperative intravenous administration of a benzodiazepine (midazolam 0.03-0.1 mg/kg) or opioids (remifentanil 15 µg/kg/min). Incidental anxiety or panic occurring intraoperatively was managed by slightly increasing the continuous propofol (0.5 mg/kg) infusion without interfering with spontaneous breathing.

The procedures were accomplished with the patient lying in the lateral decubitus position, through a single small 30-40 mm port incision located at the most fitting intercostal space to reach and remove the suspect nodule. The intercostal muscles were retracted by the Alexis retractor (Alexis®, Applied Medical, Rancho Santa Margarita, CA, USA), allowing the introduction of the thoracoscope and the instruments. Whenever necessary, a mounted gauze pad was also introduced to hinder pulmonary movements. The lesion was detected by both digital and instrumental palpation and resected with a linear stapler. At the end of the procedure, one 28 Ch chest tube was placed at the posterior limit of the surgical wound. Drinking, eating, and walking were generally allowed on the same day as surgery. Patients were discharged after radiological evidence of complete lung re-expansion, limited pleural effusion (no more than 100 mL/day), and no air leakage. Patients with protracted air leakage (>5 days) were discharged with a Heimlich valve.

Blood samples were always drawn from an antecubital vein in the morning (7:30 a.m.), just prior to the operating session and at postoperative days 1, 7, and 14. Samples were sent to the Laboratory of Onco-hematology of our institution for immediate real-time tests without need of storage. Total lymphocytes were measured with a cell counter (Coulter Beckmann, MedLab, Cupertino, CA, USA). Circulating concentrations of interleukins 6 and 10 were measured using commercially available human colorimetric enzyme-linked immunosorbent assays (Quantikine ELISA, R & D Systems Europe Ltd., Abingdon, UK). Statistical analysis was performed with the SPSS 18 software package (SPSS® version 18, Chicago, IL, USA). Non-parametric tests were prudentially preferred, using the Wilcoxon test for within-group and the Kruskal-Wallis test for between-group evaluations, respectively. Data were expressed as median (interquartile range). The significance threshold was set at p < 0.05.
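As an illustration of the statistical approach just described (a minimal sketch, not the authors' actual analysis; the sample values below are hypothetical placeholders, and scipy.stats is assumed here in place of SPSS):

```python
import numpy as np
from scipy import stats

# Hypothetical IL-6 values (pg/mL): baseline vs. postoperative day 1
# for one group (paired samples), purely illustrative.
baseline = np.array([4.1, 3.8, 5.0, 4.6, 3.9, 4.4])
day1     = np.array([9.8, 7.5, 11.2, 8.9, 10.1, 9.0])

# Within-group comparison: Wilcoxon signed-rank test on paired data
w_stat, w_p = stats.wilcoxon(baseline, day1)
print(f"Wilcoxon (within-group): W={w_stat:.1f}, p={w_p:.3f}")

# Between-group comparison at one time point: Kruskal-Wallis test
nonintubated_day1 = np.array([7.2, 6.9, 8.1, 7.5, 6.4])
intubated_day1    = np.array([10.3, 11.8, 9.7, 12.4])
k_stat, k_p = stats.kruskal(nonintubated_day1, intubated_day1)
print(f"Kruskal-Wallis (between-group): H={k_stat:.2f}, p={k_p:.3f}")

# Report as median (interquartile range), as in the paper
q1, med, q3 = np.percentile(day1, [25, 50, 75])
print(f"Day-1 IL-6: median {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```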
Conclusions

In recent decades, increasing attention has been dedicated to the importance of systemic inflammation and immune competence in oncologic patients. Uniportal VATS lung metastasectomy under non-intubated anesthesia had a significantly lower impact on both the immunological and inflammatory response compared to the traditional procedure under general anesthesia with intubation and one-lung ventilation.
Analysis of Multiuser MIMO Downlink Networks Using Linear Transmitter and Receivers

In contrast to dirty-paper coding (DPC), which is largely information theoretic, this paper proposes a linear codec that can spatially multiplex the multiuser signals to realize the rich capacity of multiple-input multiple-output (MIMO) downlink broadcast (point-to-multipoint) channels when channel state information (CSI) is available at the transmitter. Assuming single-stream (or single-mode) communication for each user, we develop an iterative algorithm, which is stepwise optimal, to obtain the multiuser antenna weights accomplishing orthogonal space-division multiplexing (OSDM). The steady-state solution has a straightforward interpretation and requires only maximal-ratio combiners (MRC) at the mobile stations to capture the optimized spatial modes. Our main contribution is that the proposed scheme can greatly reduce the processing complexity (at least by a factor of the number of base station antennas) while maintaining the same error performance when compared to a recently published OSDM method. Intensive computer simulations show that the proposed scheme promises to provide multiuser diversity in addition to user separation in the spatial domain, so that both diversity and multiplexing can be obtained at the same time in the multiuser scenario.

INTRODUCTION

Recently, multiple-input multiple-output (MIMO) antenna coding/processing has received considerable attention because of its extraordinary capacity advantage over systems with a single antenna at both the transmitter and receiver ends. Independent studies by Telatar [1] and Foschini and Gans [2] have shown that the capacity of a MIMO channel grows at least linearly with the number of antennas at both ends without bandwidth expansion or increase in transmit power. This exciting finding has spurred numerous subsequent studies on more advanced MIMO antenna systems (e.g., [3,4,5,6,7,8,9]). Performance enhancement utilizing MIMO antennas for single-user (point-to-point) wireless communications is by now well developed. The presence of other cochannel users in a MIMO system is, nonetheless, much less understood.

In general, a base station is allowed to have more antennas and is able to afford more sophisticated technologies. Therefore, it is always the responsibility of the base station to design techniques that can manage or control cochannel signals effectively. In the uplink (from many mobile stations (MSs) to one base station), space-division multiple-access (SDMA) can be accomplished through linear array processing [10,11] or multiuser detection by sphere decoding [12]. However, since a mobile station has to be inexpensive and compact, it can rarely afford the required complexity of performing multiuser detection or have a large number of receiving antennas. Support of multiple users sharing the same radio channel is thus much more challenging in the downlink (from one base station to many mobile stations).
Promoting spectral reuse in downlink broadcast channels traces back several decades, and the method is based on so-called "dirty-paper coding" (DPC) [13]. By means of known pre-interference cancellation at the transmitter, DPC encodes the data in a way that the codes align themselves as much as possible with each other so as to maximize the sum capacity of a broadcast channel [14,15,16]. However, dirty-paper techniques are largely information theoretic and, worse still, the encoding process needed to achieve the sum capacity is data dependent. This makes them inconsistent with existing communication architectures. For this reason, conventional downlink space-division multiplexing approaches tend to control the multiuser signals based on their signal-to-interference-plus-noise ratio (SINR) using a linear transmitter and receivers [17,18,19].

In [17,18], the objective is to maintain for every user a preset SINR for acceptable signal reception. A joint power control and beamforming approach is presented, but a solution is not guaranteed to exist. Subsequently, in [19], a closed-form solution is proposed that optimizes the base station antenna array by maximizing a lower bound on the product of the multiuser SINRs. The problem, however, is that in none of these works are the cochannel users truly uncoupled, and the residual cochannel interference (CCI) will not only degrade the users' performance but also, more importantly, destroy the independence with which the multiuser signals can be managed (since the power of cochannel users must be carefully adjusted jointly). Since it is advantageous to handle users in an orthogonal manner (i.e., zero forcing (ZF)) in the spatial domain, recent attempts focus on the new paradigm of orthogonal space-division multiplexing (OSDM) in the downlink [20,21,22,23,24,25,26,27].

In [20,21], support of multiple users using a so-called joint transmission method is introduced in the context of code-division multiple-access (CDMA) systems. Because single-element mobile terminals are considered, these methods solve the problem only for the multiuser multiple-input single-output (MISO) scenario. OSDM techniques for multiuser MIMO systems have recently been proposed by several authors (e.g., [22,23,24,25,26,27]). In [22,23,24], by placing nulls at the antennas of all the unintended users, the downlink channel matrix is made block diagonal to eliminate the CCI. However, these methods fail to obtain the rich diversity of the channels and require an unnecessarily large number of transmit antennas at the base station when the mobile stations have multiple antennas. More recently, in [25,26,27], iterative solutions that are able to optimize the receive antenna combining have been presented. Among them, the iterative null-space-directed singular value decomposition (iterative Nu-SVD) proposed in [27] emerges as the most general method: it is able to trade off between diversity and multiplexing [28] and requires the least possible number of transmit and receive antennas. The drawback, however, is that its complexity grows roughly with the number of base station antennas to the fourth-to-fifth power (see Section 3.2 for details). This greatly limits the scalability of the system when many users are to be served simultaneously.
In this paper, our aim is to devise a reduced-complexity linear codec for OSDM in broadcast MIMO channels and to study the diversity and multiplexing behavior of the proposed system. It is assumed (as in [22,23,24,25,26,27]) that the channel state information (CSI) is known to both the transmitter and the receivers. By considering only single-stream (or single-mode) communication for each user, we derive a stepwise optimal iterative solution to obtain downlink OSDM. Surprisingly, we will show that the steady-state solution has a straightforward interpretation, which leaves every user with a maximal-ratio combiner (MRC) under the ZF constraint. This intuition is then used to devise a method that requires much less overall computational complexity. Simulation results demonstrate that the overall complexity of the proposed method is at least a factor of the number of base station antennas smaller than that of the iterative Nu-SVD, yet it achieves the same error probability performance.

The proposed scheme is analyzed by intensive computer simulations. In summary, the results will reveal that the proposed scheme promises to provide multiuser diversity in addition to user separation in the spatial domain (i.e., both diversity and multiplexing can be obtained at the same time, consistent with single-user MIMO antenna systems [28]). The diversity does not diminish with the number of users if the number of base station antennas is kept at least equal to the number of users. In addition, the system performance improves with the number of receive antennas at the mobile stations (unlike [22,23,24]), showing the importance of collapsing the receive antennas to release the degrees of freedom available at the transmitter. Furthermore, the performance degradation is mild even in the presence of spatial correlation as high as 0.4, easily achievable with current antenna design technologies.

The remainder of the paper is organized as follows. In Section 2, we introduce the system model of a multiuser MIMO antenna system in the downlink. Section 3 presents the optimality conditions for single-mode OSDM and proposes the iterative method that leads to the solution. Simulation results are provided in Section 4. Finally, we conclude the paper in Section 5.

Throughout this paper, we use italic letters to denote scalars, boldface capital letters to denote matrices, and boldface lowercase letters to denote vectors. For any matrix A, A† denotes the conjugate transpose of A, A^T denotes the transpose of A, and a_{n,m} or [A]_{n,m} refers to the (n, m)th entry of A. In addition, I denotes the identity matrix, 0 denotes the zero matrix, ‖·‖ denotes the Frobenius norm, and N(0, σ²) is the complex Gaussian distribution with zero mean and variance σ².
Linear signal processing at transmitter and receiver

The system configuration of a multiuser MIMO system in the downlink is shown in Figure 1: signals are transmitted from one base station to M mobile stations, with n_T antennas located at the base station and n_{R_m} antennas at the mth mobile station. The data symbol z_m of the mth mobile user, before being transmitted from all of the n_T base station antennas, is postmultiplied by a complex antenna weight vector

    t_m = [t_1^{(m)}, t_2^{(m)}, \ldots, t_{n_T}^{(m)}]^T,

where t_k^{(m)} represents the transmit antenna weight of the symbol z_m at the kth base station antenna. The weighted symbols of all users at the kth antenna are then summed to produce a signal x_k, which is finally transmitted from that antenna. Defining the transmitted signal vector as x = [x_1, x_2, \ldots, x_{n_T}]^T and the multiuser transmit weight matrix as T = [t_1, t_2, \ldots, t_M], the transmitted signal vector can be expressed as

    x = T z,

where z = [z_1, z_2, \ldots, z_M]^T is the multiuser symbol vector. Note that single signal-stream (or single-mode) communication has been assumed for each user.

Given a flat fading channel, at the mth mobile receiver the signal at each receive antenna is a noisy superposition of the n_T transmitted signals perturbed by fading. As a result, we have

    y_m = H_m x + n_m,

where y_m = [y_1^{(m)}, \ldots, y_{n_{R_m}}^{(m)}]^T is the received signal vector, with y_\ell^{(m)} denoting the received signal at the \ellth antenna of the mth mobile station; n_m is the noise vector, whose elements are assumed to have distribution N(0, N_0); and H_m denotes the channel matrix from the base station to the mth mobile station, whose entry h_{\ell,k}^{(m)} is the fading coefficient from base station antenna k to receive antenna \ell of the mth mobile station. We model the h_{\ell,k}^{(m)} statistically as spatially correlated zero-mean complex Gaussian random variables with unit variance (i.e., E[|h_{\ell,k}^{(m)}|^2] = 1), so the amplitudes are Rayleigh distributed and the phases are uniformly distributed from 0 to 2π. A detailed description of the spatially correlated multiuser MIMO channel model is presented in the next subsection.

An estimate of the transmitted symbol, ẑ_m, can be obtained by combining the received signal vector at the mth mobile station:

    ẑ_m = r_m^† y_m,

where r_m = [r_1^{(m)}, \ldots, r_{n_{R_m}}^{(m)}]^T is the receive antenna weight vector of the mth mobile station. Consequently, we can write the multiuser MIMO antenna system as [19,25]

    ẑ_m = r_m^† H_m T z + r_m^† n_m.

If we further define R = diag(r_1^†, \ldots, r_M^†) and H = [H_1^T, \ldots, H_M^T]^T, the entire system can be written as

    ẑ = R H T z + R n.   (7)

The definition of (7) will become useful when we introduce the spatial correlation model next.

Spatially correlated multiuser MIMO channel model

Provided the channels are spatially uncorrelated,

    E[h_{\ell,k}^{(m)} (h_{\ell',k'}^{(m)})^*] = 0  if \ell ≠ \ell' or k ≠ k'.   (9)

To model the spatial correlation among the antenna elements at the transmitter and receivers, we use the separable correlation model [29], which assumes that the correlation among receiver array elements and that among transmitter array elements are independent of one another. An intuitive justification is that in most situations only the immediate surroundings of an antenna array impose the correlation between its elements and have no impact on the correlations observed between the elements of the array at the other end of the link. With this assumption, spatial correlation can be introduced by premultiplying by the receiver correlation matrix Γ_{R_m}^{1/2} and postmultiplying by the transmitter correlation matrix Γ_T^{1/2}, so that

    H_m = Γ_{R_m}^{1/2} H̃_m Γ_T^{1/2},   (10)

where H̃_m is an independent and identically distributed (i.i.d.) channel matrix satisfying (9).
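A minimal NumPy sketch of this channel model. The exponential form of the single-parameter correlation matrix below is an assumption on our part, since the exact expressions from [30] are not reproduced here:

```python
import numpy as np

def corr_matrix(n, gamma):
    """Single-parameter correlation matrix. Exponential decay with
    antenna separation is assumed; the exact form in [30] may differ."""
    idx = np.arange(n)
    return gamma ** np.abs(idx[:, None] - idx[None, :])

def correlated_channel(n_r, n_t, gamma_r, gamma_t, rng):
    """Draw H_m = Gamma_R^{1/2} Htilde Gamma_T^{1/2}: the separable
    (Kronecker) model with i.i.d. CN(0, 1) entries in Htilde."""
    h_iid = (rng.standard_normal((n_r, n_t))
             + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    # Cholesky factors serve as valid square roots of the (real,
    # positive definite) correlation matrices.
    l_r = np.linalg.cholesky(corr_matrix(n_r, gamma_r))
    l_t = np.linalg.cholesky(corr_matrix(n_t, gamma_t))
    return l_r @ h_iid @ l_t.conj().T

rng = np.random.default_rng(0)
H1 = correlated_channel(n_r=2, n_t=4, gamma_r=0.4, gamma_t=0.4, rng=rng)
print(H1.shape)  # (2, 4): channel matrix of one mobile station
```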
Furthermore, as the distance between different mobile stations is generally large, it is reasonable to assume that the correlation between antennas of different mobile stations is zero. Following this, the matrix of receiver correlation coefficients can be constructed block-diagonally as

    Γ_R = diag(Γ_{R_1}, Γ_{R_2}, \ldots, Γ_{R_M}).

The values of the correlation coefficients may vary according to different communication environments and are usually determined empirically. In order to make our analysis tractable, the single-parameter correlation model proposed in [30] is used, in which Γ_T and Γ_{R_m} are determined as functions of the single parameters γ_T and γ_{R_m}, respectively.

Optimization of the linear processors

In this section, our objective is to determine the transmit and receive antenna weights (T, R) that project the multiuser signals onto orthogonal subspaces (see (14) below) and at the same time maximize the sum-gain metric, i.e., the sum of the squared resultant channel responses of the spatial modes. Mathematically, this can be written as

    max_{T,R} Σ_{m=1}^{M} |β_m|^2   (13)
    subject to  r_m^† H_m t_n = β_m δ_{mn}  for all m, n,   (14)

where β_m is the resultant channel response for user m and δ_{mn} is the Kronecker delta. Without loss of optimality, hereafter we assume that ‖t_m‖ = ‖r_m‖ = 1. According to (13) and (14), it is clear that the optimal solutions of T and R depend on each other. In order to solve this optimization, we begin by assuming that all the receive vectors are fixed and known, and later consider the optimization over all possible receive vectors. By doing so, the overall system is reduced to a multiuser MISO system with an equivalent multiuser channel matrix

    H_e = [H_1^† r_1, H_2^† r_2, \ldots, H_M^† r_M]^†,   (15)

whose mth row is r_m^† H_m. Following (13) and (14), we are thus required to find the optimal transmit antenna weight vectors t_m that maximize Σ_m |β_m|^2 subject to (r_m^† H_m) t_n = β_m δ_{mn} (equations (16) and (17)). Now, we define another set of weight vectors g_m = t_m / β_m, so that β_m = 1/‖g_m‖. The optimization problem (16)-(17) can then be rewritten as minimizing each ‖g_m‖ subject to (r_m^† H_m) g_n = δ_{mn} (equations (19) and (20)). Further, by defining the matrix G = [g_1, g_2, \ldots, g_M], (20) can be concisely expressed as

    H_e G = I.   (21)

In order for a solution of (21) to exist, we must have rank(H_e), rank(G) ≥ rank(I) = M. As a result, OSDM is possible only when n_T ≥ M, and this constitutes one necessary condition for OSDM in multiuser MISO/MIMO channels [25,27].

When n_T = M, the optimal solution for the weights is simply

    G = H_e^{-1},   (22)

where the superscript −1 denotes matrix inversion. Note that this is the one and only solution of (21).

When n_T > M, there are in general infinitely many possible solutions for G. Among these, we need to select the one that minimizes (19), and hence solves (16). This problem can be recognized as a typical least squares problem for an underdetermined linear system [31], which can be solved as follows. Decompose the equivalent channel matrix as H_e = U Λ V^†, where U is the left unitary matrix, V is the right unitary matrix, and Λ ∈ R^{M×n_T} is the diagonal matrix whose elements are the singular values λ_1, λ_2, \ldots of H_e.
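Before stating the closed form, the minimum-norm character of this underdetermined least squares problem can be seen numerically; a small sketch (dimensions arbitrary), anticipating that the answer derived next is the Moore-Penrose pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(1)
M, n_t = 2, 4   # underdetermined case: n_T > M

# Equivalent multiuser MISO channel (row m would be r_m^H H_m)
H_e = rng.standard_normal((M, n_t)) + 1j * rng.standard_normal((M, n_t))

# Minimum-norm solution of H_e G = I: the Moore-Penrose pseudoinverse
G = np.linalg.pinv(H_e)
print(np.allclose(H_e @ G, np.eye(M)))   # True: the ZF constraint holds

# Any other solution differs from G by a null-space component of H_e
# and therefore has larger column norms, i.e., smaller beta_m = 1/||g_m||.
null_basis = np.linalg.svd(H_e)[2].conj().T[:, M:]   # basis of null(H_e)
G_alt = G + null_basis @ (0.1 * rng.standard_normal((n_t - M, M)))
print(np.allclose(H_e @ G_alt, np.eye(M)))           # True: still ZF
print(np.linalg.norm(G, axis=0) < np.linalg.norm(G_alt, axis=0))  # [True True]
```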
The optimal solution for g_m (in the sense of (19) and (20) jointly) is then given by [31]

    g_m = V Λ^+ U^† e_m,   (23)

where e_m is the mth column of the M × M identity matrix. More importantly, it can be shown that the solution (23) can be rewritten in a more easily computed form, as the pseudoinverse of H_e, that is,

    G = H_e^+,   (24)

where the superscript + denotes the Moore-Penrose pseudoinverse of a matrix [31]. Accordingly, we can find the optimal transmit antenna weights by normalizing the columns of G:

    t_m = g_m / ‖g_m‖.   (25)

Thus far, we have maximized the resultant channel gain based on fixed receive vectors. Now, we further optimize over all possible receive vectors. Given the set of "optimal" transmit vectors, the problem remains to solve for the receive weight vector that best balances the CCI and noise at each mobile station (relaxing the ZF constraint for the moment). Apparently, the minimum mean square error (MMSE) solution gives the optimum:

    r_m = (H_m T_m̄ T_m̄^† H_m^† + N_0 I)^{-1} H_m t_m  (up to normalization),   (26)

where T_m̄ denotes T with the mth column removed. Equations (25) and (26) jointly compose the optimality conditions for our problem.

To find the antenna weights that satisfy these conditions, an iterative updating process is necessary to tune the transmit and receive vectors, because when (26) is used for a given (generally not optimal) T, the orthogonality between different mobiles may be lost due to the mismatch. The details of the algorithm are as follows:

(1) Initialize the receive weight vectors r_m, m = 1, ..., M.
(2) Form the equivalent channel matrix H_e from the current receive vectors, as in (15).
(3) Compute G = H_e^+ and obtain the transmit vectors t_m from (24) and (25).
(4) Update each receive vector r_m by the MMSE solution (26).
(5) Compute the residual orthogonality error ε_i for each user i (the cross-user leakage remaining after the update). If |ε_i| satisfies a certain condition (described next), convergence is said to be achieved. Otherwise, go back to step (2).

We refer to this method as iterative pseudoinverse MMSE (iterative Pinv-MMSE). By changing the rule for convergence, the iterative algorithm can be used to achieve either OSDM (i.e., ZF) or SINR balancing. For example, if we require that |ε_i| ≤ ε_0 for all i, where ε_0 is a preset value (typically less than 10^{-6}), the algorithm ends up at ZF. Alternatively, we can require that the SINR of each mobile station n, computed with its transmit power p_n, reaches a preset value γ_0 for ensuring a certain link reliability; this criterion leads to SINR balancing. As stated before, the SINR balancing method involves joint tuning of the power distribution p_n and the weight vectors, so it suffers high complexity and sometimes may not converge. Therefore, we concentrate on the ZF method only. According to (24) and (26), it would appear that the optimal solution of T must be expressed as a function of the noise level N_0. However, it can be proved (see the appendix) that with the ZF constraint, the optimum MMSE receiver (26) simplifies to

    r_m = H_m t_m / ‖H_m t_m‖,   (30)

which is essentially an MRC receiver. This reveals that the optimal solution is in fact independent of N_0. What is important here is that the MMSE solution (26) in step (4) can be replaced by the MRC solution (30) to greatly reduce the computational complexity of the iterative algorithm (to be discussed in Section 3.2). We refer to the method using (30) as iterative Pinv-MRC.

Here, it is worth pointing out two facts. First, although iterative Pinv-MRC and iterative Pinv-MMSE converge to the same point, the MRC and MMSE receivers do give different updates at each iteration. As a matter of fact, the two methods may have different convergence properties. Figure 2 shows the number of iterations for convergence versus the preset threshold ε_0 for a system with 4 transmit antennas communicating with 2 mobile stations, each with 2 receive antennas, at a signal-to-noise ratio (SNR) of 20 dB. As can be seen, the number of required iterations for iterative Pinv-MMSE is much larger than that for iterative Pinv-MRC.
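A compact sketch of the iterative Pinv-MRC loop as described above (our reading of the algorithm; the initialization and the exact convergence test on the cross-user leakage are choices on our part):

```python
import numpy as np

def iterative_pinv_mrc(channels, tol=1e-6, max_iter=100):
    """Alternate pseudoinverse transmit weights (24)-(25) and MRC
    receive weights (30) until the off-diagonal entries of the
    effective channel R H T fall below tol (the ZF criterion).
    `channels` is a list of per-user matrices H_m (n_Rm x n_T)."""
    M = len(channels)
    # Initialization choice: point each r_m at its first receive antenna
    r = [np.eye(h.shape[0], 1, dtype=complex).ravel() for h in channels]
    for _ in range(max_iter):
        # Equivalent MISO channel: row m is r_m^H H_m, as in (15)
        H_e = np.vstack([r[m].conj() @ channels[m] for m in range(M)])
        G = np.linalg.pinv(H_e)                    # (24)
        T = G / np.linalg.norm(G, axis=0)          # (25): unit columns
        # MRC receivers capture the optimized spatial modes, as in (30)
        r = [channels[m] @ T[:, m] for m in range(M)]
        r = [v / np.linalg.norm(v) for v in r]
        # Convergence: cross-user leakage in the effective channel
        eff = np.vstack([r[m].conj() @ channels[m] @ T for m in range(M)])
        leakage = eff - np.diag(np.diag(eff))
        if np.max(np.abs(leakage)) < tol:
            break
    return T, r

# Example: the {4, [2, 2]} configuration used in Figure 2
rng = np.random.default_rng(2)
Hs = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
      for _ in range(2)]
T, r = iterative_pinv_mrc(Hs)
```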
Secondly, although the iterative process described above involves the computation of receive vectors, these are only temporary variables used to optimize the transmit vectors. In other words, the optimal transmit vectors can be computed solely at the transmitter, without the need for coordination with the receivers. This becomes apparent by combining the optimality conditions (24) and (30): substituting the MRC receivers (30) into (15) and then applying (24) and (25) yields an update of the form

    t_m = μ_m [H_e(T)^+] e_m,   (31)

where the μ_m are real constants ensuring ‖t_m‖ = 1 for all m, and H_e(T) denotes the equivalent channel built from the MRC receive vectors. Accordingly, we have the fixed-point iteration

    T^{(ν+1)} = f(T^{(ν)}),   (32)

where the superscript ν denotes the νth iterate and f indicates the updating procedure stated in (31). This updating equation alone solves the optimization at the transmitter. As for each mobile receiver, (30) can be used to capture the optimized spatial mode.

Complexity analysis

Iterative Pinv-MRC offers a linear codec for OSDM at an affordable complexity compared to existing schemes. To highlight this, the complexity requirements per iteration, in terms of the number of floating point operations (flops), for the proposed method and the iterative Nu-SVD method of [27] are listed in Table 1, where n_{R_m} = n_R for all m has been assumed. Further, it is assumed that recursive SVD [31] is used for computing the SVD and null space, while matrix inversion is performed using Gaussian elimination. Note that in most cases n_T ≥ M n_R. The dominant factors determining the computational complexity are M and n_T. It follows that the iterative Nu-SVD algorithm needs roughly O(11 n_T³ M + 2 n_T² M²) flops per iteration, while the proposed method requires only O(4 n_T M²) flops per iteration. Therefore, for each iteration, a complexity reduction by a factor of at least n_T can be achieved. On the other hand, the complexity is also determined by the number of iterations required for convergence; it will be shown that iterative Pinv-MRC in general requires a similar or, in some cases, slightly greater number of iterations than iterative Nu-SVD. A more detailed discussion is provided in Section 4.2, where examples are considered.
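Plugging in the leading-order per-iteration flop counts quoted above gives a feel for the gap (a sketch using only the O(·) terms, so the ratios differ somewhat from the exact entries of Table 3):

```python
# Leading-order per-iteration flop counts, as quoted in the text
def flops_nu_svd(n_t, m):
    return 11 * n_t**3 * m + 2 * n_t**2 * m**2

def flops_pinv_mrc(n_t, m):
    return 4 * n_t * m**2

for n_t, m in [(4, 2), (5, 2), (8, 4)]:
    ratio = flops_nu_svd(n_t, m) / flops_pinv_mrc(n_t, m)
    print(f"n_T={n_t}, M={m}: per-iteration flop ratio "
          f"Nu-SVD / Pinv-MRC ~ {ratio:.0f}x")
```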
SIMULATION RESULTS AND DISCUSSION

Monte Carlo simulations have been carried out to assess the system performance of the proposed multiuser MIMO antenna system. Results on the average bit error rate (BER) for various SNRs are presented. In order to assess how effectively the transmit powers are transformed into received power, the SNR used here is the average transmit energy per branch-to-branch pair versus the noise power. Perfect CSI is assumed to be available at the base station and all mobile stations.

Preprocessing

The channel model is assumed to be quasistatic flat Rayleigh fading, so that the channel is fixed during one frame and changes independently between frames. The fading coefficients among transmit and receive antenna pairs are spatially correlated and modelled by (10). The frame length is set to 128 symbols, and 4- and 16-QAM (quadrature amplitude modulation) are used. More than 100 000 independent channel realizations are used to obtain the numerical results for each simulation. For convenience, we use the notation {n_T, [n_{R_1}, ..., n_{R_M}]} to denote a multiuser MIMO antenna system with n_T transmit antennas at the base station and M mobile users, each with n_{R_m} receive antennas.

Comparison with previous OSDM schemes [22,23,24,25,26,27]

In Figure 3, we provide the average BER results for the proposed iterative Pinv-MRC and the approach in [22,23,24] (referred to as preprocessing-SVD) for various SNRs, assuming no spatial correlation (i.e., γ_T, γ_R = 0). The system configurations considered are (a) {4, [2,2]} and (b) {4, [3,3]}. As can be seen in this figure, the performance of iterative Pinv-MRC is significantly better than that of [22,23,24]. Specifically, more than an order of magnitude reduction in BER is possible for {4, [2,2]} systems, and even greater improvement is achieved for {4, [3,3]} systems. Most importantly, for the method in [22,23,24], the performance gets worse if the number of mobile station antennas increases, since more degrees of freedom must be consumed for nullification of signals at the receive antennas. However, this is not true for our proposed method, whose performance is shown to improve with the number of receive antennas at the mobile station. This can be explained by the fact that for iterative Pinv-MRC only one degree of freedom per user is needed at the transmitter for CCI suppression, while the method in [22,23,24] requires n_R (= 2 or 3) degrees of freedom. The remaining degrees of freedom left at the base station can be utilized for diversity enhancement.

In Figure 4, the average BER results for the proposed iterative Pinv-MRC, the iterative Nu-SVD [27], and the Jacobi-like approach in [25] are plotted against the average SNR for the configuration {2, [3,3]}. Results indicate that the three OSDM approaches perform nearly the same. This is further confirmed by other results (not included in this paper because of limited space) showing that the three methods have nearly the same performance, with inappreciable differences, in the scenarios where all of them obtain downlink OSDM. However, it is worth emphasizing that the method in [25] requires one additional antenna at every mobile station for the interference space, while the iterative Nu-SVD requires a much higher computational complexity than the proposed iterative Pinv-MRC (see results in Section 4.2).

BER results versus the number of receive antennas at the mobile station

In Figure 5, we investigate the impact on the performance of one user (say, user 1) of varying the number of antennas at another mobile receiver (say, user 2). Single-user systems {2, [1]} and {2, [2]} and a 2-user system {4, [1,1]} are also included for comparison. When n_{R_2} increases, the BER of user 1 for all three configurations decreases and eventually settles at a certain error rate. Intriguingly, for {2, [1, n_{R_2}]}, if n_{R_2} is large, its performance approaches that of the single-user system {2, [1]}. Similarly, {2, [2, n_{R_2}]} and {4, [1, n_{R_2}, 1]} converge, respectively, to the {2, [2]} and {4, [1,1]} systems when n_{R_2} is large. In other words, by increasing the number of antennas at mobile station 2, user 2 appears invisible to user 1. The reason is that with a sufficiently large number of antennas at mobile station 2, little needs to be done at the base station to suppress the CCI to mobile station 2. Consequently, the optimization is performed as if mobile station 2 did not exist.
BER results versus the number of users

In Figure 6, we study the impact of the number of mobile users in the iterative Pinv-MRC system. In this study, transmissions are 4-QAM at 8 dB average SNR. For OSDM to be possible, the number of transmit antennas n_T must be equal to or greater than the number of mobile users M (i.e., n_T ≥ M) [27]. In this figure, we set n_T = M to see whether the BER performance depends on the number of users in the system. Results are plotted for various n_R (from 1 to 4). When n_R = 1, the BER performance remains unchanged as M increases. This can be explained by the fact that for multiuser MISO antenna systems, the performance of each mobile station is the same as that of a single-user MISO system with n_T − M + 1 = 1 transmit antennas. When n_R > 1, the BER performance improves significantly as the number of receive antennas increases, and more diversity can be achieved for a system with more users. The reason is that with more users in the system, more base station antennas are employed for user separation. The increase in the degrees of freedom contributes partly to maintaining the orthogonalization and partly to obtaining diversity. Therefore, if the number of transmit antennas keeps matching the number of users, supporting more users in the system is beneficial rather than detrimental. Hence, both diversity and multiplexing can be achieved at the same time, not only for single-user [28] but also for multiuser MIMO antenna systems.

BER performance versus number of iterations

Compared to some existing closed-form solutions for multiuser MIMO systems [22,23,24], the drawback of our method is the need for an iterative process, which may sometimes induce unpredictable computational complexity. The investigation of the number of iterations needed for convergence is presented in the next subsection. Here we show that, in most cases, after a small number of iterations the system performance is already very close to the steady-state solution. Figure 7 gives the average BER performance versus the iteration number under four different system configurations. In this figure, the average SNR is fixed at 8 dB and 4-QAM is used; the dashed lines with filled symbols are the steady-state performance of the corresponding configurations. It is worth mentioning that the BER performances at 0 iterations are actually the performances of the scheme proposed in [23]. With respect to this point, we can see that our scheme achieves significant performance improvement over [23] with just a few iterations. Specifically, for {2, [2,2]} and {3, [2,2]}, results illustrate that the performance after 1 iteration already shows a very significant improvement and converges to the steady-state result after only 3 iterations. In addition, results also indicate that the iteration process is not very sensitive to the number of transmit antennas. However, when we increase the number of users M or the number of receive antennas n_R per user, the number of iterations required to approach the best performance increases. For instance, for the systems {4, [2,2,2,2]} and {2, [3,3]}, more than 5 iterations are required to achieve performance comparable to the steady-state result.
Complexity results

Tables 2 and 3 demonstrate the complexity of the iterative Nu-SVD [27] and the proposed method. Four receive antennas at every mobile station (i.e., n_{R_m} = n_R = 4 for all m) are assumed. Results for the average number of iterations for convergence and the number of flops per iteration are given in Tables 2 and 3, respectively. A close inspection of Table 2 reveals that the average number of iterations required grows almost linearly with the number of users, M, for both methods. Note, however, that for any fixed M, the average number of iterations required slightly decreases with the number of base station antennas, n_T, for iterative Nu-SVD. This does not occur for the proposed iterative Pinv-MRC system, where the average number of iterations required increases with the number of base station antennas. Notice also that, in general, the proposed system requires a higher number of iterations than iterative Nu-SVD, but the difference becomes smaller as the number of users increases. In addition, when n_T = M, both systems require more or less the same number of iterations for convergence.

From Table 3, it is apparent that iterative Nu-SVD requires a much larger number of flops per iteration compared with iterative Pinv-MRC. Though the number of flops per iteration for both systems increases with the number of users and the number of base station antennas, the complexity of iterative Nu-SVD is much more sensitive to the increase in the number of base station antennas. In particular, an increase by about a factor of two is observed for each additional base station antenna. Results in Table 3 also demonstrate that a reduction by at least a factor of n_T in the number of flops per iteration can be obtained using the proposed iterative Pinv-MRC. More reduction can be achieved for large M or n_T. For example, in the case of M = 4 and n_T = 8, a reduction by a factor of more than 32 is achieved.

Comparisons of the overall complexity of the two methods are given by the examples in Table 4. As can be seen, a reduction by more than an order of magnitude is always realized when n_T > M. Specifically, for the {5, [2,2]} system, iterative Pinv-MRC reduces the overall complexity by a factor of about 18 compared to iterative Nu-SVD. Note also that for the examples under investigation, more reduction is obtained when the difference n_T − M is larger. To summarize, for any values of n_T, M, and n_R, iterative Pinv-MRC can significantly reduce the complexity of performing OSDM compared to iterative Nu-SVD, a recently published OSDM system [27], while maintaining the error probability performance, as demonstrated in Section 4.1.
Impact of spatial correlation

In this subsection, we investigate the relation between the number of iterations for convergence and the spatial correlation of the channels. A {4, [4,4]} system using iterative Pinv-MRC is studied, and the results are provided in Figure 8. We observe that when γ_R is fixed at zero, increasing γ_T has almost no effect on the number of iterations. This is not the case when γ_T is fixed at zero: as γ_R increases, the number of iterations decreases. This can be reasoned as follows. The role of the receive vector is to combine the channel matrix H_m and form the "effective" channel vector r_m^† H_m. Under the ZF criterion, iteration is required only when the change of receive antenna weights destroys the orthogonality provided by the transmit weights. The iterative process is thus largely dependent on the receive spatial correlation. When the receive spatial correlation is low, even a small adjustment of the receive weights results in a dramatic change of the effective channel vector, leading to a large number of iterations irrespective of the transmit spatial correlation. On the contrary, when the receive spatial correlation is high, any update of the receive antenna weights results in only a small change of the effective channel vector, and the number of iterations required is small. In the extreme case that the receive antennas are entirely correlated (i.e., γ_R = 1), the multiuser MIMO system degenerates to a multiuser MISO system, which has a closed-form solution, and no iteration is needed.

Results in Figure 9 illustrate the sensitivity of the BER performance to the spatial correlation of the channel. In this figure, the SNR is set to 16 dB and 4-QAM is assumed. The analysis is done by varying one spatial correlation coefficient, γ_T (γ_R), while the other, γ_R (γ_T), is fixed. As expected, results show that the BER worsens for higher spatial correlation (either γ_T or γ_R). Intriguingly, the performance degradation is more severe for the transmit correlation factor than for the receive correlation factor. It is worth noting that this is contrary to the known results for the single-user MIMO system, where the transmit and receive correlation factors have the same effect on system performance. In particular, when γ_T approaches 0.99 (nearly perfectly correlated in space), the BER becomes 0.5, indicating that the multiuser system actually breaks down. When γ_R approaches this value instead, the BER performance degrades considerably but is still able to reach a BER of 10^{-3}. The reason is that the orthogonality of the system is largely provided by the difference (or rank) of the channels seen by the transmit antenna array. Therefore, when γ_T increases, the channels of the users quickly become indistinguishable, while the effect of increasing γ_R amounts only to a loss of receive diversity at the users. Overall, the system performance does not degrade much when the spatial correlation is as high as 0.4.
CONCLUSIONS

This paper has revisited the OSDM problem in multiuser MIMO downlink channels. A linear codec called iterative Pinv-MMSE, which is stepwise optimal, is proposed to obtain the multiuser antenna weights satisfying the optimality conditions. We have shown analytically that at the optimal point at convergence, iterative Pinv-MRC, which is computationally simpler, achieves the same solution. Remarkably, the proposed scheme has been shown by simulation to yield the same performance as a recently published method [27] with much lower processing complexity. Further, our simulation results have revealed several important findings: (1) performance improves as the number of receive antennas at the mobile station increases (unlike the systems in [22,23,24]); (2) more diversity gain can be achieved for a system with more users if the number of base station antennas keeps matching the number of users (so both diversity and multiplexing can be obtained at the same time); (3) fewer iterations are required for channels with higher receive spatial correlation; (4) system performance does not degrade much when the spatial correlation is as high as 0.4, which is achievable with current antenna design technologies.

APPENDIX: EQUIVALENCE OF MMSE RECEIVER AND MRC RECEIVER AT THE OPTIMUM POINT

As multiplying the receive vector by a scalar does not affect the final SNR, we ignore the normalization factors (i.e., the denominators) in (26) and (30) in this proof. We show that, under the ZF condition r_m^† H_m T_m̄ = 0, the MMSE receiver has the same form as the MRC receiver.

Before we proceed to the proof, a result on matrix inversion will be useful. For matrices of the form A^{-1} + BB^†, the inverse can be computed as

    (A^{-1} + BB^†)^{-1} = A − AB(I + B^† A B)^{-1} B^† A,

which can be verified as follows:

    (A^{-1} + BB^†)[A − AB(I + B^† A B)^{-1} B^† A]
      = I + BB^† A − [B + BB^† A B](I + B^† A B)^{-1} B^† A
      = I + BB^† A − B(I + B^† A B)(I + B^† A B)^{-1} B^† A
      = I + BB^† A − BB^† A = I.   (A.1)

With the above result, and considering A = I/N_0 and B = H_m T_m̄, we can compute the MMSE receiver as

    r_m = (BB^† + A^{-1})^{-1} H_m t_m
        = [A − AB(I + B^† A B)^{-1} B^† A] H_m t_m
        = (1/N_0) H_m t_m − (1/N_0²) B(I + B^† A B)^{-1} (H_m T_m̄)^† H_m t_m.   (A.2)

Given the ZF condition r_m^† H_m T_m̄ = 0, the relation r_m = (1/N_0) H_m t_m is always satisfied: in this case the second term of (A.2) is zero because (H_m T_m̄)^† H_m t_m = N_0 (H_m T_m̄)^† r_m = 0. Hence the MMSE receiver reduces, up to a scale factor, to r_m = H_m t_m, which is the MRC receiver (30).
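This inversion identity (a form of the Woodbury matrix identity) is easy to check numerically; a minimal sketch with random matrices:

```python
import numpy as np

# Numerical check of the inversion identity (A.1) with random A, B
rng = np.random.default_rng(3)
n, k = 5, 2
A = np.eye(n) / 0.1                       # A = I / N_0 with N_0 = 0.1
B = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))

lhs = np.linalg.inv(np.linalg.inv(A) + B @ B.conj().T)
rhs = (A - A @ B
       @ np.linalg.inv(np.eye(k) + B.conj().T @ A @ B)
       @ B.conj().T @ A)
print(np.allclose(lhs, rhs))              # True
```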
Table 4 : Comparisons of the computational complexity and the required number of iterations.
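The inversion identity used in (A.1) and (A.2) is a special case of the Sherman-Morrison-Woodbury formula; the following minimal numpy sketch checks it numerically for random complex matrices (the dimensions and N_0 value are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
N0 = 0.5

A = np.eye(n) / N0                      # A = I/N_0, as in the appendix
B = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))

# Left-hand side: direct inverse of A^{-1} + B B^dagger
lhs = np.linalg.inv(np.linalg.inv(A) + B @ B.conj().T)

# Right-hand side: A - A B (I + B^dagger A B)^{-1} B^dagger A
rhs = A - A @ B @ np.linalg.inv(np.eye(k) + B.conj().T @ A @ B) @ B.conj().T @ A

print(np.allclose(lhs, rhs))  # True: the identity (A.1) holds
```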
2017-06-30T01:53:54.432Z
2004-12-15T00:00:00.000
{ "year": 2004, "sha1": "4cebdfd732ca4d386ca67f65e48ae24fedce63b9", "oa_license": "CCBY", "oa_url": "https://jwcn-eurasipjournals.springeropen.com/counter/pdf/10.1155/S1687147204406045", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4cebdfd732ca4d386ca67f65e48ae24fedce63b9", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
256959484
pes2o/s2orc
v3-fos-license
Long-term response of forest productivity to climate change is mostly driven by change in tree species composition

Climate change affects ecosystem functioning directly through impacts on plant physiology, resulting in changes of global productivity. However, climate change also has an indirect impact on ecosystems, through changes in the composition and diversity of plant communities. The relative importance of these direct and indirect effects has not yet been evaluated within a single generic approach. Here we took advantage of a novel approach for disentangling these two effects in European temperate forests across a large climatic gradient, through a large simulation-based study using a forest succession model. We first showed that, while productivity positively correlates with realized tree species richness under a changed climate, indirect effects appear pivotal to understanding the magnitude of climate change impacts on forest productivity. We further detailed how warmer and drier conditions may affect the diversity-productivity relationships (DPRs) of temperate forests in the long term, mostly through effects on species recruitment, ultimately enhancing or preventing complementarity in resource use. Furthermore, losing key species reduced the strength of DPRs more severely in environments that are becoming climatically harsher. By disentangling direct and indirect effects of climate change on ecosystem functioning, these findings explain why high-diversity forests are expected to be more resilient to climate change.

Forests are of critical importance globally; they cover ca. 30% of the world's land surface, harbor most terrestrial biodiversity 1, are an important carbon sink 2, play a pivotal role in climate regulation 3, and provide many other ecosystem services 4. Climate change affects forests and their functioning directly (Fig. 1a), including key aspects such as net productivity 2, via altered abiotic conditions (e.g. climate and atmospheric CO2 concentration). As a result, a general productivity increase has been observed in forests over the last decades 5, provided that water was not limiting. However, projections for 2050 suggest that negative impacts of climate change on forest functioning are likely to increase in frequency and intensity, mostly due to severe droughts 6. Climate change can also affect ecosystem functioning indirectly, through impacts of pests and pathogens 7,8 or modifications of local community composition caused by shifts in species distributions 9,10 (Fig. 1a). Many range shifts of tree species have recently been reported, in latitude 11 as well as elevation 12, and several examples of local extinctions caused by more severe drought events have been documented, especially at the rear edge of species distributions 13. Such shifts, which are anticipated to become even stronger in the future 14,15, lead to changes in local community composition 16 and possibly affect species interactions 17. Yet, plant diversity and community composition have been shown to influence ecosystem productivity 18, although the magnitude of these effects appears to be context-dependent and is not fully understood 19,20.

Figure 1 (caption): ΔP refers to the difference in productivity between anticipated (i.e., corresponding to conditions simulated by one of the RCMs considered here) and baseline climatic conditions. ΔP can be decomposed into three components:
ΔP_{1−1} is the difference in productivity between anticipated and baseline climatic conditions for species present under both conditions (which may be either positive or negative, as represented by the black arrows); ΔP_{1−0} is the decrease in productivity due to species unable to grow under anticipated conditions (red trees in the left panel; ΔP_{1−0} is necessarily negative); and ΔP_{0−1} is the increase in productivity due to species able to grow only under anticipated conditions (blue trees in the right panel; ΔP_{0−1} is necessarily positive).

For terrestrial ecosystems, diversity-productivity relationships (DPRs) were first shown experimentally in artificial grasslands 21,22. The same type of experiment has been set up for forests, but most of these experiments are still young, which limits the relevance of their outcomes 23. Therefore, DPRs in forests are generally inferred from rather short-term observations using forest inventories 24-26, showing an overall positive effect of tree diversity on productivity 20. Yet, empirical studies necessarily include multiple sites that are subject to different environmental conditions, possibly leading to biased results 27. This is especially true regarding climatic conditions, as climate appears to strongly modulate DPRs in forests 28. Recently, novel approaches have been proposed to depict and quantify DPRs in tree communities, involving simulations with process-based forest succession models (FSMs) 29,30. These approaches have the advantage that they can shed light on the mechanisms linking diversity and productivity also in mature forest systems. Furthermore, modelling studies make it possible to test a vast number of combinations of climatic conditions and diversity levels, which is very difficult based on observations due to confounding factors (e.g. mixed forests growing on fertile soils), and practically infeasible in experiments. While understanding the interaction between climate change and the loss of biological diversity represents a crucial challenge for forecasting ecosystem functioning in the future 19,31, the relative importance of the direct (i.e., through species responses to abiotic conditions) and indirect (i.e., through changes in community composition) effects of climate change has not yet been evaluated within the same general approach. As FSMs take both abiotic (climatic, soil) and biotic (competition) factors into account, studies with such models are particularly relevant for disentangling these direct and indirect effects. In other words, owing to such a simulation approach, the changes in productivity can be partitioned into the direct effect of climate change on forest growth vs. the indirect effect through modified community composition. In this study, we therefore used an FSM to quantify the potential relative importance of these direct and indirect effects on forest productivity, and to test how DPRs are theoretically affected by climate change, considering combinations of 30 European tree species and a wide range of environmental conditions in Central Europe (Table S1). To do so, following the approach of Morin et al. 29,30, we simulated virtual forest biodiversity experiments with the FSM ForClim, with various levels of original species richness (1 to 30 European species, with 7,431 original community compositions tested) at 11 sites, using either baseline (i.e., current climate) conditions or future conditions. More specifically, through a large set of simulations, we aimed at: (i)
assessing how climate change affects forest productivity depending on initial environmental conditions; (ii) quantifying the direct and indirect effects of climate change on forest productivity; and (iii) testing whether the effect of tree diversity (species richness and functional diversity) on forest productivity holds under climate change, and how site-level DPRs would be affected by climate change.

We expect direct effects of climate change to be stronger at the most productive sites, assuming that there is a greater number of species at these sites, with high levels of functional redundancy in terms of the functional traits affecting tree growth 32 that could compensate for decreases or losses of species 29. In contrast, indirect effects should prevail at the less productive sites, where tree species diversity is lower and forest functioning is more likely to depend strongly on a few key species 29,33.

Results

The simulated impacts of climate change on forest productivity varied strongly across the gradient of current site conditions, irrespective of forest composition. Forests at the sites with the coldest conditions experienced, on average, an increase in productivity (hereafter "P+ sites"), while forests at the warmest sites showed a productivity decrease (hereafter "P− sites") (Fig. 2a). For instance, productivity increased by 0.68 (±0.47) t ha⁻¹ yr⁻¹ at Davos (a site with a mean annual temperature (MAT) of 3.0 °C), but decreased by 0.48 (±0.39) t ha⁻¹ yr⁻¹ at Basel (9.2 °C MAT). It appeared that forest productivity was enhanced by the increase in mean temperature at the coldest sites, while it dropped at the other sites because of the decrease in precipitation. However, the interaction between changes in temperature and rainfall could have influenced the changes in productivity. For instance, the forests simulated at the site of Bever are on average more productive under future conditions, because this site is the second coldest of the gradient, but this effect is weak because Bever is also one of the driest sites (Fig. 2a). It is also noticeable that the sites whose simulated forests showed the lowest productivity under current conditions were not necessarily P+ sites under the new conditions (e.g., forests simulated at the site of Sion experienced a decrease in productivity, although they had the lowest productivity under current conditions). Furthermore, the number of tree species communities predicted to experience improved productivity decreased with higher mean annual temperature (Fig. 2b; slope = −8.70, r² = 0.77, P < 0.001). This finding is consistent with previously reported temperature-dependent changes in physiology and productivity 2. Differences in climate between baseline and RCM "future" conditions led to different realized forest communities (with different tree species richness) and productivity for the same initial species combination (Fig. 1b). Changes in species richness followed the same pattern as changes in productivity (Fig. 2a), and the two were positively correlated across all simulations (r = 0.54, n = 81,741 (i.e., 11 × 7,431), p < 0.001, Pearson correlation). We also tested how climate change affects functional diversity by calculating changes in functional dispersion 34 between baseline and RCM conditions. A weaker trend than for species richness was found, as all sites experienced, on average, a decrease in FDis (r = 0.27, n = 81,741, p < 0.001).
However, the strongest decreases in FDis were predicted to occur at P− sites, demonstrating that community composition and productivity remain tightly related also under a changed climate (Fig. 2a). To confirm these findings without considering the intra-site variability, we used the median of all simulations at the site level to calculate Spearman correlations across all sites. Changes in richness and changes in productivity were strongly correlated (r = 0.94, P < 0.0001), as were changes in FDis and changes in productivity (r = 0.64, P < 0.0001). The partitioning of the change in productivity (ΔP) between current and future climate showed that the relative importance of ΔP_{1−1}, ΔP_{1−0}, and ΔP_{0−1} varied strongly across sites. At P+ sites, the response of forest productivity to climate change depended primarily on species only able to grow under future conditions (highly positive ΔP_{0−1}; Fig. 3), which overcompensated the loss in productivity of species present under both conditions (always negative ΔP_{1−1}, black bars). The response of productivity at P− sites was equally determined by local extinctions (negative ΔP_{1−0}, red bars) and by the loss in productivity of species present under both conditions (mostly negative ΔP_{1−1}, black bars). Therefore, at P+ sites the productivity increase under climate change was mainly due to warmer conditions allowing the recruitment and growth of new species, while at P− sites communities were affected by local extinctions leading to a decrease in productivity, given that no colonization by "new" species (i.e., species allowed by the new climate conditions) was occurring. The partitioning of the net diversity effect showed that changes in productivity between current and future climate were mostly driven by changes in complementarity effects and depended on site conditions (Fig. 4a). At sites where productivity was reduced under climate change (P−), we observed weak changes, with a relative increase of selection effects and a relative decrease of complementarity effects. At sites benefitting from climate change (P+), the changes in complementarity were strong, especially at three sites (Fig. 4a). At these P+ sites, i.e., those notably receiving "new" species (species able to grow and survive under RCM conditions but not under baseline conditions), the increase in productivity was thus mainly triggered by stronger complementarity effects (Fig. 4). We found that forest productivity strongly increased with realized species richness under both baseline and RCM conditions (see Fig. 5 for the detailed DPR diagram for Adelboden [P+ site] and Schwerin [P− site], and Fig. S1 for all sites). The simulations further indicated that DPRs were affected by climate change, but only in terms of magnitude, and according to the predicted change in site productivity under future conditions. DPRs largely strengthened at P− sites but showed variable responses at P+ sites, regardless of the RCM considered (Tables 1 and S4, and Fig. 5). The P− sites mostly showed stronger DPRs, with steeper slopes (Table 1 and Fig. S1). Among the 21 DPRs evaluated at P− sites (7 sites × 3 RCMs), 15 showed significantly larger slope estimates than the corresponding DPR under baseline conditions; one was significantly weaker and five were non-significant (Tables 1 and S4). Thus, these results showed that under more stressful conditions (i.e., at P− sites), losing one tree species from the forest community would generally be more detrimental than under current conditions.
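As a minimal sketch of how site-level DPR slopes can be quantified and compared between climates, the snippet below fits linear regressions of productivity against realized richness and applies a simple two-sample z-test on the slopes. The data arrays are hypothetical, and the z-test is just one simple option; the paper's exact statistical procedure is not reproduced here.

```python
import numpy as np
from scipy import stats

def dpr_slope(richness, productivity):
    """Slope (and its standard error) of the DPR at one site."""
    res = stats.linregress(richness, productivity)
    return res.slope, res.stderr

# Hypothetical per-simulation outputs for one site (realized richness, productivity)
rng = np.random.default_rng(42)
rich_base = rng.integers(1, 25, 500)
prod_base = 2.0 + 0.10 * rich_base + rng.normal(0, 0.5, 500)
rich_fut = rng.integers(1, 20, 500)
prod_fut = 1.5 + 0.16 * rich_fut + rng.normal(0, 0.5, 500)

(s_b, se_b), (s_f, se_f) = dpr_slope(rich_base, prod_base), dpr_slope(rich_fut, prod_fut)
# Two-sample z-test on the difference of slopes: is the DPR steeper under future climate?
z = (s_f - s_b) / np.hypot(se_b, se_f)
print(f"baseline slope {s_b:.3f}, future slope {s_f:.3f}, z = {z:.2f}")
```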
Discussion

Based on an extensive simulation design, our results first confirm that climate change is likely to strongly affect the productivity of temperate European forests 2,5, but they demonstrate that its impact varies greatly in direction and magnitude across sites. Second, our results demonstrate that forest productivity positively correlates with realized tree species richness even under a changed climate, i.e., as it gets warmer and drier, for a large panel of baseline conditions. Third, the outcomes of the simulations reveal that although both direct and indirect effects of climate change significantly affect forest productivity, indirect effects related to changes in community composition are pivotal. On average, indirect effects amounted to 71.3% of the changes in productivity across all simulations (site-level averages ranged from 50.3 to 90.2%). This occurred mostly because new climatic conditions promote the recruitment of new species at the coldest sites, whereas they cause local extinctions at other sites. Overall, this study is the first aiming to quantify both direct and indirect effects within the same integrated approach, and it emphasizes the importance of considering the role of biodiversity when assessing climate change impacts on ecosystem productivity 9,19.

The strong effect of climate change on the site-level slope of DPRs notably leads to a greater effect of tree species richness on productivity at P− sites (i.e., at sites expected to experience a decrease in productivity under climate change irrespective of forest composition). Three main causes may interact to generate this effect in our simulations, as discussed below. The first relates to changes in the strength and direction of species interactions along the lines of the stress gradient hypothesis 35,36, which predicts that interactions between species will shift from negative (competition) to positive (facilitation) along stress gradients. Extending this hypothesis, we posit that interspecific competitive interactions will decrease in intensity with increasing abiotic stress. Such a change leads to a stronger dependence of ecosystem functioning on species richness at P− sites compared with P+ sites, because interspecific competition becomes relatively less detrimental for growth at P− sites, which is consistent with our results. In fact, at P− sites, forest productivity was more reduced by the loss of a tree species under new climate conditions than under baseline conditions, mostly because such a loss led to a relatively larger decrease in complementarity between species. Adding a species to a community should increase interspecific competition for light. Yet, at sites experiencing strong, ecologically detrimental climate change, the increase in competition was, on average, relatively lower than under baseline conditions. This is consistent with National Forest Inventory data showing that overyielding is stronger at low-productivity sites when comparing monospecific with mixed stands 24,37, as well as with a recent study highlighting that the slope of DPRs was steeper at sites with harsher climatic conditions across a latitudinal gradient of natural forests 28, which was also the case in our simulations 29. However, we should acknowledge that no positive interactions (e.g. facilitation) are included in the model, although they may be of some importance under dry conditions 38. Furthermore, our partitioning of the net diversity effect (Fig. 4)
confirmed that the decreased diversity experienced by the simulated forests at P− sites under future conditions led, on average, to a decrease in the strength of the complementarity effect and to a slight increase in the selection effect. This is probably because the reduction of the species pool made these simulated forests more sensitive to the presence of a few key productive species.

Second, our simulations showed that forest communities experienced major changes in their species composition across the entire gradient under climate change (Fig. 2a), which appeared to be the strongest driver of changes in productivity at both P+ and P− sites. Climate is actually a main determinant of the local species pool through its effect on environmental filtering and biotic interactions 33. Regarding species interactions, we have already discussed above how the change in the strength of species interactions may interact with the change in composition, while the recruitment of new species (Fig. 3) is strongly related to the effect of climate on environmental filtering. Therefore, our findings illustrate how changes in tree recruitment could play a key role in forest composition and functioning under climate change 39,40.

Third, our findings may partially be explained by changes in forest structure along climatic gradients 41-43, notably an increase or decrease in stand-level basal area. However, as already mentioned, this study focused on long-term dynamics and thus did not consider the transient phase, because our approach to depicting DPRs relies on forests at pseudo-equilibrium 29. Therefore, the simulated pair-wise forests, i.e., those with the same original composition under baseline and future conditions, generally show similar structures. However, in the case of strong changes in composition, changes in stand structure can occur and interact with changes in the strength of species interactions.

It is noteworthy that indirect effects mostly arose from the possibility that, during a simulation, a species may go extinct or colonize a site because of the effect of climatic conditions on the species' ability to establish seedlings at this site (regeneration) and on its competitive ability (which depends on tree growth) relative to the co-existing species. Therefore, this study was, to our knowledge, the first to explore the relative weight of direct and indirect effects. Relying on forests at pseudo-equilibrium in terms of biomass, these simulations should obviously not be interpreted as short-term predictions of climate change impact, but rather as an assessment of the impacts of changing climate conditions on forest productivity and the underlying processes in the long term. The goal of this study was to test how climate change may affect DPRs without biogeographical implications (e.g. species migration), through a fair and robust comparison between baseline ("current") and anticipated ("future") climate conditions. This could only be obtained by using the same simulation design for both situations (i.e., starting from bare ground and with a runtime of 2,000 years). (See Table S4 for results under all RCM conditions; mean changes in productivity for each RCM are shown in Table S3.) To complement these findings on long-term dynamics and basic processes, future work should focus on transient dynamics, to include extinction and colonization with greater accuracy, for instance using a spatially explicit framework at the regional scale 44, e.g., projections of species distribution models 45.
However, considering changes in community composition related also to species range shifts was not relevant in the present study, because doing so would have required focusing precisely on transient dynamics at the regional scale, which cannot be done with sufficiently robust confidence with this kind of model 46, or only with great caution 47. Furthermore, it would have necessitated a completely different study design, considering a restricted species pool that differs for each site, i.e., including only the species biogeographically present in the region to which each site belongs. Doing so would necessarily affect the between-site comparisons, as the simulations would not rely on the same species pool at each site. Finally, it would have required parameterizing the model for other species (such as Mediterranean trees and shrubs). It is noteworthy, however, that our simulations took climate-induced extinctions into account, although coarsely, as well as colonization by species embedded in the model. Furthermore, regarding the relative strength of direct and indirect effects, coupling our simulations with predictions of species range shifts to also take into account possible colonization by other species would necessarily have increased the relative importance of indirect effects, and would thus probably have strengthened our main conclusion.

One limitation of our study is that the simulations tested the impacts of climate change on community composition mainly through higher temperatures and more severe droughts. Regarding biotic interactions, the current version of the model focuses on competition for light (although mediated by climatic and soil factors) and considers neither competition for nutrients nor competition for water, which may also be a key process affecting future species assemblages under climate change, particularly under a drier climate 6,48. For instance, we may expect that the trend depicted here regarding the importance of indirect effects could be amplified by considering competition for water at the driest sites, because a larger number of species would disappear. This study also did not consider the impact of abiotic disturbances in the climate change scenarios, because we chose to focus on the dual impact of climate change through the direct and indirect effects. However, incorporating this kind of impact through increased mortality events would be possible in a modelling study using the same kind of model as used here. In addition, our model does not account for changes in trophic interactions, such as insect herbivory or pathogen attacks, which may also change under a future climate 8,49 and which affect DPRs 50,51.

This study shows how tree species interactions and community composition may change under climate change in the long term, which can in turn strongly affect ecosystem functioning across a wide range of site conditions. These results represent a baseline for predicting changes in mean productivity and DPRs in response to climate change in the long term. They could be supplemented by the inclusion of other processes, such as species range shifts 44 or biotic (e.g. pest outbreaks) and abiotic (e.g. windstorms) disturbances. However, model complexification comes at the cost of lower precision and robustness, whereas our model works with only a few parameters and is applicable to a large range of species and environmental conditions.
Relying on the emergent properties of the simulations, our results illustrate how new climatic conditions may affect forest productivity in the long term. The strong evidence we found regarding the strength of biotic indirect effects of climate change on ecosystem functioning, through both species loss and the recruitment of new species, stresses the key role of biodiversity in promoting ecosystem resistance and resilience to climate change 30, and thus highlights the need to better understand how species interactions and coexistence processes may shape the link between community composition and ecosystem functioning 33,52.

Methods

Forest succession model. We used ForClim v2.9.6 53,54, which was developed for simulating the long-term dynamics of temperate forests over a wide range of environmental conditions. The model is based on a minimum number of ecological assumptions and has few parameter requirements. ForClim follows the standard approach of gap models 46,55, simulating the establishment, growth, and mortality of trees on multiple forest patches, and deriving forest stand properties by averaging the properties simulated at the patch scale 54. More precisely: (i) the forest stand is abstracted as a composite of many small patches of land (800 m²), each patch having its own dynamics; (ii) patches are horizontally homogeneous, i.e., tree position within a patch is not considered; (iii) the leaves of each tree are located in an indefinitely thin layer at the top of the stem; and (iv) successional processes can be described on each of those patches separately, i.e., there are no interactions between patches. The model considers abiotic and biotic limitations to tree establishment and growth, specifically growing degree-days, soil moisture and nitrogen status, as well as light availability at the height of the tree crown, i.e., the outcome of inter- and intraspecific competition. The accuracy of ForClim in Europe has been shown, among others, by its ability to reproduce vegetation patterns and forest biomass along a broad climatic gradient 53,54,56 spanning 11 sites in central Europe. We have thus focused on these same 11 sites, i.e., sites with very different forest types (representative of central European forests), with mean annual temperatures ranging from 1.2 to 9.7 °C and annual precipitation sums from 573 to 1,350 mm (see Table S1). Trees become established with a diameter at breast height of 1.27 cm, as a function of species-specific responses to winter temperature, light availability at the forest floor, growing degree-days, and browsing pressure 53. Patches being set to be horizontally homogeneous, there is no intra-patch variation in light availability at a given height, and thus at ground level. The rationale is that the patch size is usually small (800 m² in this case), representing the area impacted by a single dominant tree. Browsing only affects seedlings in the model. Browsing sensitivity varies across species, and each site has a specific browsing pressure that is kept constant over time. In principle, all species (from the chosen species pool) are available for establishment, i.e., there is no dispersal limitation in the model, and the trees are assumed to come from surrounding forests. Growth (i.e., stem diameter increment at breast height) is modeled using an empirical equation derived for optimally growing trees 57.
Actual tree growth is calculated by reducing the optimum rate to the extent that abiotic or biotic conditions are limiting. Specifically, these limiting conditions are defined by growing degree-days, soil moisture and nitrogen status, crown length, as well as light availability at the height of the tree crown, leading to inter- and intraspecific competition and thus changes in species composition. In the version we used, the model concentrates on competition for light. The amount of light available to each tree depends on self-shading as well as shading by taller trees within the patch, thus making tree height an important variable. Light availability across the canopy is calculated using the Beer-Lambert law for the absorption of light travelling through the leaf layers of each patch, as follows:

LA_h = exp(−k · Σ_{i=1}^{N_h} LAI_i),

where LA_h is the light availability at height h in the canopy, k is an attenuation coefficient (with k = 0.25), LAI_i is the leaf area index of tree i, and N_h is the number of trees taller than h. Other resources, such as nitrogen and water, affect species performance and vary across sites. While nitrogen availability is constant at the site level, water availability can vary across years, and there is no explicit competition for water between co-existing trees. To calculate weather-dependent factors, mean monthly temperatures and monthly precipitation sums are used. The model is further constrained by the soil water holding capacity to calculate a hydric budget for each year at each site. From the diameter at breast height, the sizes of other tree compartments (e.g. foliage, roots) and the total aboveground biomass are estimated using allometric equations that partly respond to changing competition and thus to diversity changes 53,54. Species coexistence in forest gap models arises from two main mechanisms: first, trade-offs evident from life-history strategies, such as high colonization rates often being tied to low shade tolerance, or the typically short lifespans of early-successional, fast-growing trees; and second, the fact that cyclical succession occurs on each individual patch, such that species with different properties are able to dominate during different parts of the cycle. Tree mortality is stochastic and has a background component and a growth-related component. The former depends on the species' maximum longevity, whereas the latter is an integral proxy for stress conditions, i.e., tree vigor; since competition affects individual tree growth, it also has an indirect effect on simulated mortality rates via growth-related mortality 53. Species parameters are provided in Table S2. To summarize the role of climate in ForClim: climatic conditions and annual weather variability directly control tree establishment (in addition to other factors), strongly influence tree growth, and have an indirect effect on survival (i.e., via growth). Regarding growth, cold and dry conditions limit individual productivity (with an intensity that depends on species characteristics; see Table S2), whereas trees grow close to their optimum under moist and warm conditions (without taking into account the effect of other factors such as light availability across the canopy). A more detailed description of the model and its development over time can be found in several publications 53,54.
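To illustrate the canopy-light computation described above, here is a minimal sketch assuming the reconstructed form LA_h = exp(−k Σ LAI_i), summed over trees taller than h; the tree heights and leaf-area indices are hypothetical values.

```python
import numpy as np

K_ATTENUATION = 0.25  # light attenuation coefficient used in ForClim

def light_availability(h, tree_heights, tree_lai, k=K_ATTENUATION):
    """Beer-Lambert light availability at canopy height h within one patch.

    Only the leaf layers of trees taller than h shade that height:
    LA(h) = exp(-k * sum of LAI_i over trees with height_i > h).
    """
    shading = np.asarray(tree_lai)[np.asarray(tree_heights) > h].sum()
    return np.exp(-k * shading)

# Hypothetical patch: three trees with heights (m) and leaf area indices
heights, lai = [30.0, 22.0, 10.0], [2.0, 1.5, 1.0]
for h in (0.0, 15.0, 25.0):
    print(f"LA at {h:4.1f} m: {light_availability(h, heights, lai):.3f}")
```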
ForClim has evolved from a simulator of forests in the Swiss Alps into a general model that is applicable to temperate forests of central Europe 53,58, eastern North America 58, the Pacific Northwest of the US 59, northeastern China 60, and the Colorado Front Range of the Rocky Mountains 61. To our knowledge, ForClim is a forest succession model that has been demonstrated to be applicable "out of the box", i.e., without any re-parameterization, across widely different climates while still keeping species-level resolution, which supports its generality. Note that using a succession model to explore the diversity-productivity relationship differs from previous modelling studies 62 because: (i) we used a multi-trait model that takes into account observed trade-offs in species biology (e.g. growth vs. shade tolerance), as the ForClim parameters are mostly derived from observed or measured traits; and (ii) the model was not originally developed to study diversity-productivity questions, and can thus be viewed as an independent tool, as illustrated in former studies 29,30.

Climate data. To represent the anticipated change in climate at the 11 sites (see Table S1 for a detailed description of the site conditions), climate data for the 21st century were obtained from the Landscape Dynamics Unit at the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). Data were spatially interpolated at 1 km resolution. Three Regional Climate Models (RCMs) nested in the General Circulation Model ECHAM5 were used to derive climate data based on an IPCC 63 emissions scenario, using a reference period 64 as baseline and the period 2090-2100 as representative of future conditions. Averaged changes in mean annual temperature and annual precipitation sums between the two periods are shown in Table S3. Note that we chose to use one SRES scenario but several RCMs, because variability was greater between RCMs than between the IPCC scenarios 63. Similarly to Rasche et al. 65, we averaged the nine cells covering and surrounding each of our 11 locations. We used simulated time-series data with stationary climatic conditions (i.e., no trend across time, but including inter-annual variability) for both baseline and anticipated climates, to run the simulations over 2,000 years, because the main objective of this study was to explore how climate change affects DPRs from a theoretical point of view, and not to produce robust predictions of changes in DPRs over the next century. From the transient RCM data, we thus only used the years 2090-2100, repeating these years randomly over a 2,000-year period to obtain anticipated climatic conditions that were also assumed to be stationary.

Simulations. Virtual diversity experiments. Following the approach presented in Morin et al. 29, we performed simulations with ForClim with various levels of original species richness (1 to 30 European species) at the 11 sites. For each site, we tested 7,431 original species combinations (containing n = 1 to 30 species, with all odd numbers n between 3 and 27; see below), under both current (baseline) and future conditions based on output from three Regional Climate Models (RCMs). A total of 326,964 simulations were carried out (4 climate conditions × 11 sites × 7,431 species combinations). Each simulation was run over a time period of 2,000 years, starting from bare ground to avoid any effect of transient dynamics, and over 200 patches (patch size 1/12 ha, i.e., a stand of 16 ha in total per simulation) (Fig. 1b).
1b). As the results were very similar across RCMs, we illustrate the results mostly based on the KNMI RCM. To warrant that the simulated forests at the end of the simulation run were at pseudo-equilibrium (i.e. a state in which total biomass varies weakly across years, but where gaps continue to occur randomly in the forest), we considered the last 1,000 years of each simulation run 29 . For each simulation, we collected the realized species richness (i.e. final richness at the end of the simulation), relative abundance and mean productivity of each species. Mean productivity values were calculated by averaging the yearly productivity (newly accumulated biomass) over the years 1100, 1200, …, 2000 (i.e., 10 samples at decadal intervals over 100 years) in order to minimize temporal autocorrelation. Number of simulations with various tree diversity. At each site we ran simulations that differed in their original species composition, ranging from 1 to 30 European tree species for which ForClim had been parameterized (see list of species and parameters in Table S2). However it was not feasible to simulate all possible combinations of species, as this would represent ( ) Then (ii) for k > 2 and k < 28, we chose to only consider odd numbers to reduce the number of simulations. Therefore, for richness levels k = {3,5, …, 25,27}, as the total number of possible combinations was too large, we ran 500 simulations randomly drawn from all possible combinations of species, respectively. Thus, overall we ran 30 + 435 + 13 × 500 + 435 + 30 + 1 = 7,431 simulations for each site and for one climate dataset, differing in their initial species composition. Analyses. First, we tested how climate change affected mean forest productivity through direct and indirect effects by comparing the simulations between baseline and RCM conditions with the same original tree species composition. To do so, we partitioned the change in productivity (ΔP) as follows: where ΔP c-f denotes the difference in productivity between current (c) and future (f) climate scenarios according to the absence (c, f = 0) and presence (c, f = 1) of species. Thus, ΔP 1−1 is the difference in productivity between future and baseline climate conditions of those species present under both conditions, ΔP 1−0 is the loss in productivity due to species unable to grow under future conditions (ΔP 1−0 < 0), and ΔP 0−1 is the gain in productivity due to species able to colonize the patch and grow under future conditions only (ΔP 0−1 > 0; Fig. 1b). Second, to test whether DPRs varied with climate change and site conditions, linear regression between productivity and species richness were used to quantify DPRs (as the slope of the regression is a simple way to assess the strength of the relationship, as typically done) 21,22 , for both baseline and anticipated climates. In these analyses, we considered final species richness, i.e. at the end of the simulations. However, because of climate change, the range of species richness may vary between baseline and future conditions, which may create a bias in the comparison of slopes. Therefore we also performed linear regression between productivity and species richness along the same species richness range (i.e. between n = 1 and the lowest maximum richness reached over all simulations in either baseline or anticipated conditions). The estimates calculated for each slope varied very weakly in comparison to the original calculations, and these additional analyses thus led to the same results. 
Linear regressions between productivity and either realized species richness or functional dispersion were calculated at the site level for baseline conditions and all RCM conditions; the normality of the residuals was checked using Q-Q plots. Functional diversity was calculated from all species parameters (considered as traits) using the functional dispersion index 34, as in Morin et al. 29. Furthermore, the net effect of tree diversity on forest productivity can be explained by two basic classes of mechanisms, referring either to the presence of particular species (selection effects) or to a more efficient use of resources in diverse communities due to niche differentiation or facilitation (complementarity effects) 66. To test their relative roles, we first quantified the net biodiversity effect (in all simulations with more than one species in the community at the beginning of the simulation) as the difference between the simulated productivity of a multi-species forest and its expected productivity under the null hypothesis of additivity (no diversity effect), based on the simulated productivity of monospecific forests of the component species, weighted according to their final relative abundance (in terms of biomass) at the end of the simulation. We then partitioned the net biodiversity effect into selection (SE) and complementarity (CE) effects 66, and further divided SE and CE by the expected forest productivity based on the component monocultures to allow for inter-site comparisons.

Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request.
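The selection/complementarity partition cited above follows the additive partition of Loreau & Hector (ref. 66). Here is a minimal sketch under its usual formulation, net effect = N·mean(ΔRY)·mean(M) + N·cov(ΔRY, M); the yields and expected relative abundances below are hypothetical.

```python
import numpy as np

def loreau_hector(y_obs, mono, ry_expected):
    """Additive partition of the net biodiversity effect (Loreau & Hector 2001).

    y_obs: observed species yields in the mixture
    mono: corresponding monoculture yields M_i
    ry_expected: expected relative yields (e.g., final relative abundances)
    Returns (net effect, complementarity effect, selection effect).
    """
    y_obs, mono, ry_e = map(np.asarray, (y_obs, mono, ry_expected))
    d_ry = y_obs / mono - ry_e              # deviation from expected relative yield
    n = len(mono)
    ce = n * d_ry.mean() * mono.mean()      # complementarity effect
    se = n * np.cov(d_ry, mono, bias=True)[0, 1]  # selection effect (population cov)
    return ce + se, ce, se

# Hypothetical 3-species mixture
net, ce, se = loreau_hector(y_obs=[0.6, 0.5, 0.4],
                            mono=[1.2, 0.9, 0.6],
                            ry_expected=[1 / 3, 1 / 3, 1 / 3])
print(f"net={net:.3f} CE={ce:.3f} SE={se:.3f}")
```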
2023-02-18T14:50:11.018Z
2018-04-04T00:00:00.000
{ "year": 2018, "sha1": "21025f68bb5f438a3f892cd2da74a7d5f2e0a949", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-23763-y.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "21025f68bb5f438a3f892cd2da74a7d5f2e0a949", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
119488879
pes2o/s2orc
v3-fos-license
Measurement of dijet azimuthal decorrelations in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector and determination of the strong coupling

A measurement of the rapidity and transverse momentum dependence of dijet azimuthal decorrelations is presented, using the quantity $R_{\Delta \phi}$. The quantity $R_{\Delta \phi}$ specifies the fraction of the inclusive dijet events in which the azimuthal opening angle of the two jets with the highest transverse momenta is less than a given value of the parameter $\Delta \phi_\mathrm{max}$. The quantity $R_{\Delta \phi}$ is measured in proton--proton collisions at $\sqrt{s}=8$ TeV as a function of the dijet rapidity interval, the event total scalar transverse momentum, and $\Delta \phi_\mathrm{max}$. The measurement uses an event sample corresponding to an integrated luminosity of 20.2 fb$^{-1}$ collected with the ATLAS detector at the CERN Large Hadron Collider. Predictions of a perturbative QCD calculation at next-to-leading order in the strong coupling with corrections for non-perturbative effects are compared to the data. The theoretical predictions describe the data in the whole kinematic region. The data are used to determine the strong coupling $\alpha_{\mathrm{S}}$ and to study its running for momentum transfers from 260 GeV to above 1.6 TeV. An analysis that combines data at all momentum transfers results in $\alpha_{\mathrm{S}}(m_{Z}) = 0.1127^{+0.0063}_{-0.0027}$.

Introduction

In high-energy particle collisions, measurements of the production rates of hadronic jets with large transverse momentum p_T relative to the beam direction can be employed to test the predictions of perturbative quantum chromodynamics (pQCD). The results can also be used to determine the strong coupling α_S, and to test the pQCD predictions for the dependence of α_S on the momentum transfer Q (the "running" of α_S) given by the renormalization group equation (RGE) [1, 2]. Previous tests of the RGE through α_S determinations in hadronic final states have been performed using data taken in ep collisions (5 < Q < 60 GeV) [3-5], in e+e− annihilation (10 < Q < 210 GeV) [6, 7], in pp̄ collisions (50 < Q < 400 GeV) [8, 9], and in pp collisions (130 < Q < 1400 GeV) [10-14]. The world average value is currently α_S(m_Z) = 0.1181 ± 0.0011 [15].
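As an illustration of the RGE running being tested, the following sketch numerically integrates the 2-loop RGE, dα_S/d ln Q² = −α_S²(β₀ + β₁ α_S), from m_Z up to the TeV scale with n_f = 5 held fixed. This is a simplified stand-in for the next-to-leading-logarithmic evolution used in the analysis; flavor thresholds are ignored.

```python
import numpy as np

def alpha_s(q, alpha_mz=0.1181, mz=91.1876, n_f=5, steps=10_000):
    """2-loop (NLL) running of alpha_S from m_Z to scale q (GeV), fixed n_f."""
    b0 = (33 - 2 * n_f) / (12 * np.pi)
    b1 = (153 - 19 * n_f) / (24 * np.pi ** 2)
    t = np.linspace(np.log(mz ** 2), np.log(q ** 2), steps)
    a = alpha_mz
    for dt in np.diff(t):  # simple Euler integration in ln Q^2
        a += dt * (-a ** 2 * (b0 + b1 * a))
    return a

for q in (262.0, 1000.0, 1675.0):
    print(f"alpha_S({q:7.1f} GeV) = {alpha_s(q):.4f}")
```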
Recent α_S results from hadron collisions are limited by theoretical uncertainties related to the scale dependence of the fixed-order pQCD calculations. The most precise α_S(m_Z) result from hadron collision data is α_S(m_Z) = 0.1161 +0.0041/−0.0048 [8], obtained from inclusive jet cross-section data using pQCD predictions beyond next-to-leading order (NLO). However, when using cross-section data in α_S determinations, the extracted α_S results are directly affected by our knowledge of the parton distribution functions (PDFs) of the proton and their Q dependence. The PDF parameterizations depend on assumptions about α_S and the RGE made in the global data analyses in which they are determined. Therefore, in determinations of α_S and its Q dependence from cross-section data, the RGE is already assumed in the inputs. Such a conceptual limitation of using cross-section data can largely be avoided by using ratios of multi-jet cross sections in which the PDFs cancel to some extent. So far, the multi-jet cross-section ratios R_ΔR [9] and R_{3/2} [10] have been used for α_S determinations at hadron colliders. In this article, α_S is determined from dijet azimuthal decorrelations, based on the multi-jet cross-section ratio R_Δφ [16]. The RGE predictions are tested up to Q = 1.675 TeV.

The decorrelation of dijets in the azimuthal plane has been the subject of a number of measurements at the Fermilab Tevatron Collider [17] and the CERN Large Hadron Collider (LHC) [18, 19]. The variable Δφ_dijet investigated in these analyses is defined from the angles in the azimuthal plane (the plane perpendicular to the beam direction), φ_1,2, of the two highest-p_T jets in the event as Δφ_dijet = |φ_1 − φ_2|. In exclusive high-p_T dijet final states, the two jets are correlated in the azimuthal plane with Δφ_dijet = π. Deviations from this (Δφ_dijet < π) are due to additional activity in the final state, as described in pQCD by processes of higher order in α_S. Due to kinematic constraints, the phase space in 2 → 3 processes is restricted to Δφ_dijet > 2π/3 [20], and lower Δφ_dijet values are only accessible in 2 → 4 processes. Measurements of dijet production with 2π/3 < Δφ_dijet < π (Δφ_dijet < 2π/3) therefore test the pQCD matrix elements for three-jet (four-jet) production.

The quantity R_Δφ is defined as the fraction of all inclusive dijet events in which Δφ_dijet is less than a specified value Δφ_max. This quantity can be exploited to extend the scope of the previous analyses towards studies of the rapidity dependence of dijet azimuthal decorrelations. Since R_Δφ is defined as a ratio of multi-jet cross sections for which the PDFs cancel to a large extent, it is well suited for determinations of α_S and for studies of its running.

The quantity R_Δφ has so far been measured in pp̄ collisions at a center-of-mass energy of √s = 1.96 TeV at the Fermilab Tevatron Collider [21]. This article presents the first measurement of R_Δφ in pp collisions, based on data at √s = 8 TeV taken with the ATLAS detector during 2012 at the LHC, corresponding to an integrated luminosity of 20.2 ± 0.4 fb⁻¹ [22]. The data are corrected to "particle level" [23], and are used to extract α_S and to study its running over a range of momentum transfers of 262 < Q < 1675 GeV.

Definition of R_Δφ and the analysis phase space

The definitions of the quantity R_Δφ and the choices of the variables that define the analysis phase space follow the proposal in Ref. [16].
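A small practical note on the Δφ_dijet variable defined above: when computed from raw azimuthal angles it must be folded into [0, π]. A minimal sketch with hypothetical jet azimuths:

```python
import math

def delta_phi_dijet(phi1: float, phi2: float) -> float:
    """Azimuthal opening angle of the two leading jets, folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

# Hypothetical leading-jet azimuths (radians): a nearly back-to-back dijet
print(delta_phi_dijet(0.1, -3.0))  # close to pi
```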
Jets are defined by the anti-k_t jet algorithm as implemented in [24, 25]. The anti-k_t jet algorithm is a successive recombination algorithm in which particles are clustered into jets in the E-scheme (i.e., the jet four-momentum is computed as the sum of the particle four-momenta). The radius parameter is chosen to be R = 0.6. This is large enough for a jet to include a sufficient amount of soft and hard radiation around the jet axis, thereby improving the properties of pQCD calculations at fixed order in α_S, and it is small enough to avoid excessive contributions from the underlying event [26]. An inclusive dijet event sample is extracted by selecting all events with two or more jets, where the two leading-p_T jets have p_T > p_Tmin. The dijet phase space is further specified in terms of the variables y_boost and y*, computed from the rapidities, y_1 and y_2, of the two leading-p_T jets as y_boost = (y_1 + y_2)/2 and y* = |y_1 − y_2|/2, respectively.¹ In 2 → 2 processes, the variable y_boost specifies the longitudinal boost between the dijet and the proton-proton center-of-mass frames, and y* (which is longitudinally boost-invariant) represents the absolute value of the jet rapidities in the dijet center-of-mass frame. The dijet phase space is restricted to |y_boost| < y_boost^max and y* < y*_max. The variable H_T is defined as the scalar sum of the jet p_T for all jets i with p_Ti > p_Tmin and |y_i − y_boost| < y*_max. Furthermore, the leading-p_T jet is required to have p_T1 > H_T/3. The values of the parameters p_Tmin, y_boost^max, and y*_max ensure that jets are well measured in the detector within |y| < 2.5 and that contributions from non-perturbative corrections and pileup (additional proton-proton interactions within the same or nearby bunch crossings) are small. The requirement p_T1 > H_T/3 ensures (for a given H_T) a well-defined minimum p_T1, which allows single-jet triggers to be used in the measurement. It also reduces the contributions from events with four or more jets, and therefore the pQCD corrections from higher orders in α_S. The values of all parameters are specified in Table 1. The quantity R_Δφ is defined in this inclusive dijet event sample as the ratio

R_Δφ(H_T, y*, Δφ_max) = [d²σ_dijet(Δφ_dijet < Δφ_max) / (dH_T dy*)] / [d²σ_dijet / (dH_T dy*)],   (1)

where the denominator is the inclusive dijet cross section in the phase space defined above, in bins of the variables H_T and y*. The numerator is given by the subset of the denominator for which Δφ_dijet of the two leading-p_T jets obeys Δφ_dijet < Δφ_max. The measurement of the y* dependence of R_Δφ allows a test of the rapidity dependence of the pQCD matrix elements. The value of Δφ_max is directly related to the hardness of the jet(s) produced in addition to the two leading-p_T jets in the event. The transverse momentum sum H_T is one possible choice that can be related to the scale at which α_S is probed. The measurement is made as a function of H_T in three different y* regions and for four different values of Δφ_max (see Table 2).

¹ The rapidity is defined in terms of the energy E and the longitudinal momentum p_z as y = (1/2) ln((E + p_z)/(E − p_z)), and the pseudorapidity in terms of the polar angle θ as η = −ln tan(θ/2).
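Operationally, R_Δφ in a given (H_T, y*) bin is simply the fraction of inclusive dijet events passing the Δφ_dijet < Δφ_max requirement; a minimal counting sketch over hypothetical per-event values:

```python
import math

def r_delta_phi(events, dphi_max):
    """Fraction of inclusive dijet events with dphi_dijet < dphi_max.

    events: iterable of dphi_dijet values for events already selected into
    one (H_T, y*) analysis bin.
    """
    events = list(events)
    num = sum(1 for dphi in events if dphi < dphi_max)
    return num / len(events)

# Hypothetical dphi_dijet values in one bin (radians)
sample = [3.10, 2.95, 2.60, 3.05, 2.40, 3.14, 2.85]
for frac, label in [(7 / 8, "7pi/8"), (5 / 6, "5pi/6"), (3 / 4, "3pi/4"), (2 / 3, "2pi/3")]:
    print(f"R_dphi({label}) = {r_delta_phi(sample, frac * math.pi):.2f}")
```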
Theoretical predictions

The theoretical predictions in this analysis are obtained from perturbative calculations at fixed order in α_S with additional corrections for non-perturbative effects. The pQCD calculations are carried out using NLOJet++ [27, 28] interfaced to fastNLO [29, 30], based on the matrix elements for massless quarks in the MS-bar scheme [31]. The renormalization and factorization scales are set to µ_R = µ_F = µ_0 with µ_0 = H_T/2. In inclusive dijet production at leading order (LO) in pQCD, this choice is equivalent to other common choices: µ_0 = (p_T1 + p_T2)/2 and µ_0 = p_T1. The evolution of α_S is computed using the numerical solution of the next-to-leading-logarithmic (2-loop) approximation of the RGE.

The pQCD predictions for the ratio R_Δφ are obtained from the ratio of the cross sections in the numerator and denominator in Eq. (1), computed to the same relative order (both either at NLO or at LO). The pQCD predictions for the cross section in the denominator are available in NLOJet++ up to NLO. For Δφ_max = 7π/8, 5π/6, 3π/4 (2π/3), the numerator is a three-jet (four-jet) quantity for which the pQCD predictions in NLOJet++ are available up to NLO (LO) [20].

The PDFs are taken from the global analyses MMHT2014 (NLO) [32, 33], CT14 (NLO) [34], and NNPDFv2.3 (NLO) [35].² For additional studies, the PDF sets ABMP16 (NNLO) [37]³ and HERAPDF 2.0 (NLO) [38] are used, which were obtained using data from selected processes only. All of these PDF sets were obtained for a series of discrete α_S(m_Z) values, in increments of Δα_S(m_Z) = 0.001 (or Δα_S(m_Z) = 0.002 for NNPDFv2.3). In all calculations in this article, the PDF sets are consistently chosen to correspond to the value of α_S(m_Z) used in the matrix elements. The extraction of α_S from the experimental R_Δφ data requires a continuous dependence of the pQCD calculations on α_S(m_Z). This is obtained by cubic interpolation (linear extrapolation) for α_S(m_Z) values inside (outside) the ranges provided by the PDF sets. The central predictions that are compared to the data use α_S(m_Z) = 0.118, which is close to the current world average, and the MMHT2014 PDFs. The MMHT2014 PDFs also provide the largest range of α_S(m_Z) values (0.108 ≤ α_S(m_Z) ≤ 0.128). For these reasons, the MMHT2014 PDFs are used to obtain the central results in the α_S determinations.

The uncertainties of the perturbative calculation are estimated from the scale dependence (as an estimate of missing higher-order pQCD corrections) and the PDF uncertainties. The former is evaluated from independent variations of µ_R and µ_F between µ_0/2 and 2µ_0 (with the restriction 0.5 ≤ µ_R/µ_F ≤ 2.0). The PDF-induced uncertainty is computed by propagating the MMHT2014 PDF uncertainties. In addition, a "PDF set" uncertainty is included as the envelope of the differences of the results obtained with CT14, NNPDFv2.3, ABMP16, and HERAPDF 2.0 relative to those obtained with MMHT2014.
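The "independent variation with restriction" prescription above corresponds to the usual 7-point scale variation; a minimal sketch enumerating the allowed (µ_R, µ_F) combinations and taking the envelope of a toy prediction (the prediction function is purely illustrative):

```python
import math
from itertools import product

# 7-point scale variation: mu_R, mu_F in {0.5, 1, 2} x mu_0,
# excluding the two combinations with mu_R/mu_F = 4 or 1/4.
FACTORS = (0.5, 1.0, 2.0)
scale_points = [(r, f) for r, f in product(FACTORS, FACTORS) if 0.5 <= r / f <= 2.0]
print(scale_points)  # 7 combinations

def scale_uncertainty(predict):
    """Envelope of predictions over the 7 scale points.

    predict: callable (mu_r_factor, mu_f_factor) -> prediction for R_dphi,
    a hypothetical stand-in for the NLO calculation.
    """
    values = [predict(r, f) for r, f in scale_points]
    central = predict(1.0, 1.0)
    return central, min(values) - central, max(values) - central

# Toy prediction with a weak logarithmic scale dependence (illustration only)
central, lo, hi = scale_uncertainty(
    lambda r, f: 0.08 * (1 + 0.05 * math.log(r) - 0.02 * math.log(f)))
print(f"{central:.4f} {lo:+.4f} {hi:+.4f}")
```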
The pQCD predictions based on matrix elements for massless quarks also depend on the number of quark flavors in gluon splitting (g → qq̄), n_f, which affects the tree-level matrix elements and their real and virtual corrections, as well as the RGE predictions and the PDFs obtained from global data analyses. The central results in this analysis are obtained for a consistent choice of n_f = 5 in all of these contributions. Studies of the effects of using n_f = 6 in the matrix elements and the RGE, as documented in Appendix A, show that the corresponding effects on R_Δφ are between −1% and +2% over the whole kinematic range of this measurement. Appendix A also includes a study of the contributions from the tt̄ production process, concluding that the effects on R_Δφ are less than 0.5% over the whole analysis phase space.

The corrections due to non-perturbative effects, related to hadronization and the underlying event, were obtained in Ref. [46]. For this analysis, the central results are taken to be the average values obtained from Pythia with the tunes AMBT1 and DW. The corresponding uncertainty is taken to be half of the difference (the numerical values are provided in Ref. [46]). The results obtained with the Pythia tunes A and S-Global, as well as with Herwig, are used to study systematic uncertainties.

ATLAS detector

ATLAS is a general-purpose detector consisting of an inner tracking detector, a calorimeter system, a muon spectrometer, and magnet systems. A detailed description of the ATLAS detector is given in Ref. [47]. The main components used in the R_Δφ measurement are the inner detector, the calorimeters, and the trigger system.

The position of the pp interaction is determined from charged-particle tracks reconstructed in the inner detector, located inside a superconducting solenoid that provides a 2 T axial magnetic field. The inner detector, covering the region |η| < 2.5, consists of layers of silicon pixel, silicon microstrip, and transition radiation tracking detectors.
During 2012, for pp collisions, the ATLAS trigger system was divided into three levels, labeled L1, L2, and the Event Filter (EF) [48, 49]. The L1 trigger is hardware-based, while L2 and the EF are software-based and impose increasingly refined selections designed to identify events of interest. The jet trigger identifies electromagnetically and hadronically interacting particles by reconstructing the energy deposited in the calorimeters. The L1 jet trigger uses a sliding window of Δη × Δφ = 0.8 × 0.8 to find jets and requires these to have transverse energies E_T above a given threshold, measured at the electromagnetic scale. Jets triggered by L1 are passed to the L2 jet trigger, which reconstructs jets in the same region using a simple cone jet algorithm with a cone size of 0.4 in (η, φ) space. Events are accepted if an L2 jet is above a given E_T threshold. In events which pass L2, a full event reconstruction is performed by the EF. The jet EF constructs topological clusters [50], from which jets are then formed using the anti-k_t jet algorithm with a radius parameter of R = 0.4. These jets are then calibrated to the hadronic scale. Events for this analysis are collected either with single-jet triggers with different minimum E_T requirements, or with multi-jet triggers based on a single high-E_T jet plus a requirement on H_T (the scalar E_T sum) of the multi-jet system. The trigger efficiencies are determined relative to fully efficient reference triggers, and each trigger is used above an H_T threshold where it is more than 98% efficient. The triggers used for the different H_T regions in the offline analysis are listed in Table 3. Single-jet triggers select events if any jet with |η| < 3.2 is above the E_T thresholds at L1, L2, and the EF. Due to their high rates, the single-jet triggers studied are highly prescaled during data-taking. Multi-jet triggers select events if an appropriate high-E_T jet is identified and the H_T value, summed over all jets at the EF with |η| < 3.2 and E_T > 45 GeV, is above a given threshold. The additional H_T requirement significantly reduces the selected event rate, and lower prescales can be applied. The integrated luminosity of the data sample collected with the highest-threshold triggers is 20.2 ± 0.4 fb⁻¹.

The detector response for the measured quantities is determined using a detailed simulation of the ATLAS detector in Geant4 [51, 52]. The particle-level events, subjected to the detector simulation, were produced with the Pythia event generator [53] (version 8.160) with the CT10 PDFs. The Pythia parameters were set according to the AU2 tune [54]. The "particle-level" jets are defined based on the four-momenta of the generated stable particles (as recommended in Ref. [23]: particles with a proper lifetime τ satisfying cτ > 10 mm, including muons and neutrinos from hadron decays). The "detector-level" jets are defined based on the four-momenta of the simulated detector objects.
Measurement procedure

The inclusive dijet events used for the measurement of R_∆φ were collected between April and December 2012 by the ATLAS detector in proton-proton collisions at √s = 8 TeV. All events used in this measurement are required to satisfy data-quality criteria which include stable beam conditions and stable operation of the tracking systems, calorimeters, solenoid, and trigger system. Events that pass the trigger selections described above are included in the sample if they contain at least one primary collision vertex with at least two associated tracks with p_T > 400 MeV, in order to reject contributions due to cosmic-ray events and beam background. The primary vertex with the highest Σp_T² of associated tracks is taken as the event vertex.

Jets are reconstructed offline using the anti-k_t jet algorithm with a radius parameter R = 0.6. Input to the jet algorithm consists of locally calibrated three-dimensional topological clusters [50], formed from sums of calorimeter cell energies and corrected for local calorimeter response, dead material, and out-of-cluster losses for pions. The jets are further corrected for pileup contributions and then calibrated to the hadronic scale, as detailed in the following. The pileup correction is applied to account for the effects on the jet response from additional interactions within the same proton bunch crossing ("in-time pileup") and from interactions in bunch crossings preceding or following the one of interest ("out-of-time pileup"). Energy is subtracted from each jet, based upon the energy density in the event and the measured area of the jet [55] (the arithmetic core of this subtraction is sketched at the end of this section). The jet energy is then adjusted by a small residual correction depending on the average pileup conditions of the event. This calibration restores the calorimeter energy scale, on average, to a reference point where pileup is not present [56]. Jets are then calibrated using an energy- and η-dependent correction to the hadronic scale, with constants derived from data and Monte Carlo samples of jets produced in multi-jet processes. A residual calibration, based on a combination of several in situ techniques, is applied to take into account differences between data and Monte Carlo simulation. In the central region of the detector, the uncertainty in the jet energy calibration is derived from the transverse momentum balance in Z+jet, γ+jet, or multi-jet events measured in situ, by propagating the known uncertainties of the energies of the reference objects to the jet energies. The energy uncertainties for the central region are then propagated to the forward region by studying the transverse momentum balance in dijet events with one central and one forward jet [57]. The energy calibration uncertainty in the high-p_T range is estimated using the in situ measurement of the response to single isolated hadrons [58]. The total uncertainty in the jet energy calibration is decomposed into 57 uncorrelated contributions, each of which is fully correlated in p_T. The corresponding uncertainty in jet p_T is between 1% and 4% in the central region (|η| < 1.8), and increases to 5% in the forward region (1.8 < |η| < 4.5).
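The area-based pileup subtraction referenced above has a compact arithmetic core. A sketch, under the simplification that only the jet p_T (rather than the full four-momentum) is corrected:

```python
def pileup_subtract_pt(pt_jet, rho, area, residual=0.0):
    """Subtract the event's pileup energy density rho (GeV per unit area)
    times the jet catchment area, plus a small residual correction that
    depends on the average pileup conditions of the event."""
    return pt_jet - rho * area + residual

# Example: a 60 GeV jet with area 1.1 in an event with rho = 9 GeV per unit
# area, before the hadronic-scale calibration is applied.
print(pileup_subtract_pt(60.0, 9.0, 1.1))  # ~50.1 GeV
```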
The jet energy resolution has been measured in the data using the bisector method in dijet events [59-61], and the Monte Carlo simulation is seen to be in good agreement with the data. The uncertainty in the jet energy resolution is affected by selection parameters for jets, such as the amount of nearby jet activity, and depends on the η and p_T values of the jets. Further details about the determinations of the jet energy scale and resolution are given in Refs. [58,59,62].

The angular resolution of jets is obtained in the Monte Carlo simulation by matching particle-level jets with detector-level jets when their distance ∆R = √(∆y² + ∆φ²) is smaller than the jet radius parameter. The jet η and φ resolutions are obtained from a Gaussian fit to the distributions of the difference between the detector-level and particle-level values of the corresponding quantity (a sketch of this matching-and-fitting procedure is given after this section). The difference between the angular resolutions determined from different Monte Carlo simulations is taken as a systematic uncertainty for the measurement result; it is about 10-15% for p_T < 150 GeV and decreases to about 1% for p_T > 400 GeV. The bias in jet η and φ is found to be negligible.

All jets within the whole detector acceptance, |η| < 4.9, are considered in the analysis. Data-quality requirements are applied to each reconstructed jet according to its properties, to reject spurious jets not originating from hard-scattering events. In each H_T bin, events from a single trigger are used, and the same trigger is used for the numerator and the denominator of R_∆φ. In order to test the stability of the measurement results, the event sample is divided into subsamples with different pileup conditions. The R_∆φ results for different pileup conditions are compatible within the statistical uncertainties, without any systematic trends. The measurement is also tested for variations resulting from loosening the requirements on the event- and jet-data-quality conditions, and the observed variations are also consistent within the statistical uncertainties.
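A sketch of the matching-and-fitting procedure for the angular resolution, with jets reduced to (y, φ) pairs for brevity; numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy.stats import norm

def phi_resolution(truth_jets, reco_jets, radius=0.6):
    """Match each particle-level jet to its closest detector-level jet in
    Delta R = sqrt(Delta y^2 + Delta phi^2); keep pairs with Delta R < radius
    and fit a Gaussian to the phi differences. Jets are (y, phi) pairs."""
    dphis = []
    for ty, tphi in truth_jets:
        best_dr, best_dphi = None, None
        for ry, rphi in reco_jets:
            dphi = (rphi - tphi + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
            dr = np.hypot(ry - ty, dphi)
            if best_dr is None or dr < best_dr:
                best_dr, best_dphi = dr, dphi
        if best_dr is not None and best_dr < radius:
            dphis.append(best_dphi)
    if not dphis:
        return None, None
    mu, sigma = norm.fit(dphis)  # mean = angular bias, sigma = resolution
    return mu, sigma
```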
The distributions of R_∆φ(H_T, y*, ∆φ_max) are corrected for experimental effects, including detector resolutions and inefficiencies, using the simulation. To ensure that the simulation describes all relevant distributions, including the p_T and y distributions of the jets, the generated events are reweighted, based on the properties of the generated jets, to match these distributions in data, and to match the H_T dependence of the observed inclusive dijet cross section as well as the R_∆φ distributions and their H_T dependence. To minimize migrations between H_T bins due to resolution effects, the bin widths are chosen to be larger than the detector resolution. The bin purities, defined as the fraction of all reconstructed events that are generated in the same bin, are 65-85% for ∆φ_max = 7π/8 and 5π/6, and 50-75% for ∆φ_max = 3π/4 and 2π/3. The bin efficiencies, defined as the fraction of all generated events that are reconstructed in the same bin, have values in the same ranges as the bin purities (these quantities are sketched after this section). The corrections are obtained bin by bin from the generated Pythia events, as the ratio of the R_∆φ results for the particle-level jets and the detector-level jets. These corrections are typically between 0% and 3%, and never outside the range from −10% to +10%. Uncertainties in these corrections due to the modeling of the migrations by the simulation are estimated from the changes of the correction factors when the reweighting function is varied. In most parts of the phase space, these uncertainties are below 1%. The results from the bin-by-bin correction procedure were compared to the results obtained when using a Bayesian iterative unfolding procedure [63], and the two results agree within their statistical uncertainties.

The uncertainties of the R_∆φ measurements include two sources of statistical uncertainty and 62 sources of systematic uncertainty. The statistical uncertainties arise from the data and from the correction factors. The systematic uncertainties are from the correction factors (two independent sources, related to variations of the reweighting of the generated events), the jet energy calibration (57 independent sources), the jet energy resolution, and the jet η and φ resolutions. To avoid double counting of statistical fluctuations, the H_T dependence of the uncertainty distributions is smoothed by fitting either linear or quadratic functions in log(H_T/GeV). Of all 62 sources of experimental correlated uncertainties, the dominant systematic uncertainties are due to the jet energy calibration. For ∆φ_max = 7π/8 and 5π/6, the jet energy calibration uncertainties are typically between 1.0% and 1.5% and always less than 3.1%. For smaller values of ∆φ_max they can be as large as 4% (for ∆φ_max = 3π/4) or 9% (for ∆φ_max = 2π/3). A comprehensive documentation of the measurement results, including the individual contributions due to all independent sources of uncertainty, is provided in Ref. [46].

Measurement results

The measurement results for R_∆φ(H_T, y*, ∆φ_max) are corrected to the particle level and presented as a function of H_T, in different regions of y* and for different ∆φ_max requirements. The results are listed in Appendix B in Tables 6-9, and displayed in Figure 1, at the arithmetic centers of the H_T bins. At fixed (y*, ∆φ_max), R_∆φ decreases with increasing H_T; at fixed (H_T, ∆φ_max), it increases with increasing y*; and at fixed (H_T, y*), it decreases with decreasing ∆φ_max.
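A rough sketch of the unfolding bookkeeping described above; the event counts and ratios would come from the reweighted simulation, and the function names are illustrative.

```python
import numpy as np

def bin_by_bin_correction(r_particle, r_detector):
    """Correction factor per HT bin: ratio of R_dphi computed from
    particle-level jets to R_dphi from detector-level jets in simulation.
    Corrected data: R_corrected = correction * R_measured, bin by bin."""
    return np.asarray(r_particle) / np.asarray(r_detector)

def purity(n_same_bin, n_reco_in_bin):
    """Fraction of reconstructed events that were generated in the same bin."""
    return n_same_bin / n_reco_in_bin

def efficiency(n_same_bin, n_gen_in_bin):
    """Fraction of generated events that are reconstructed in the same bin."""
    return n_same_bin / n_gen_in_bin

print(bin_by_bin_correction([0.31, 0.22], [0.30, 0.22]))  # toy values
print(purity(700, 1000), efficiency(700, 950))            # toy values
```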
Theoretical predictions based on NLO pQCD (for ∆φ_max = 7π/8, 5π/6, and 3π/4) or LO pQCD (for ∆φ_max = 2π/3), with corrections for non-perturbative effects as described in Section 3, are compared to the data. The ratios of data to the theoretical predictions are displayed in Figure 2. To provide further information about the convergence of the pQCD calculation, the inverses of the NLO K-factors are also shown (the K-factor is defined as the ratio of the predictions for R_∆φ at NLO and LO, K = R_∆φ^NLO / R_∆φ^LO). In all kinematic regions, the data are described by the theoretical predictions, even for ∆φ_max = 2π/3, where the predictions are only based on LO pQCD and have uncertainties of about 20% (dominated by the dependence on µ_R and µ_F). The data for ∆φ_max = 7π/8 and 5π/6 allow the most stringent tests of the theoretical predictions, since for these ∆φ_max values the theoretical uncertainties are typically less than ±5%.

Selection of data points for the α_S extraction

The extraction of α_S(Q) at different scales Q = H_T/2 is based on a combination of data points in different kinematic regions of y* and ∆φ_max, with the same H_T. The data points are chosen according to the following criteria:

1. Data points are used only from kinematic regions in which the pQCD predictions appear to be most reliable, as judged by the renormalization and factorization scale dependence and by the NLO K-factors.
2. For simplicity, data points are only combined in the α_S extraction if they are statistically independent, i.e. if their accessible phase space does not overlap.
3. The preferred data points are those for which the cancellation of the PDFs between the numerator and the denominator in R_∆φ is largest.
4. The experimental uncertainty at large H_T is limited by the sample size. If the above criteria give equal preference to two or more data sets with overlapping phase space, the data points with smaller statistical uncertainties are used, so as to test the RGE at the largest possible momentum transfers with the highest precision.

Based on criterion (1), the data points obtained for ∆φ_max = 2π/3 are excluded, as the corresponding pQCD predictions (from NLOJet++) are only available at LO. Furthermore, it is observed that the points for ∆φ_max = 3π/4 have a large scale dependence, typically between +15% and −10%. For the remaining data points with ∆φ_max = 7π/8 and 5π/6 at larger y* (1 < y* < 2), the NLO corrections are negative and (with a size of 5-23%) larger than those at smaller y*, indicating potentially larger corrections from not-yet-calculated higher orders. The conclusion from criterion (1) is therefore that the pQCD predictions are most reliable in the four kinematic regions 0 < y* < 0.5 and 0.5 < y* < 1, for ∆φ_max = 7π/8 and ∆φ_max = 5π/6, where the NLO K-factors are typically within ±5% of unity.

The requirement of statistically independent data points according to criterion (2) means that the data points from different y* regions can be combined, but not those with different ∆φ_max. The choice of whether to use the data with ∆φ_max = 7π/8 or 5π/6 (in either case combining the data for 0 < y* < 0.5 and 0.5 < y* < 1) is therefore based on criteria (3) and (4).

The cancellation of the PDFs, as addressed in criterion (3), is largest for those data points for which the phase space of the numerator in Eq. (1)
is closest to that of the denominator. Since the numerator of R_∆φ is a subset of the denominator, this applies more to the data at larger values of ∆φ_max. For those points, the fractional contributions from different partonic subprocesses (gg → jets, gq → jets, qq → jets) and the ranges in the accessible proton momentum fraction x are more similar for the numerator and denominator, resulting in a larger cancellation of PDFs in R_∆φ. This argument, based on the third criterion, leads to the same conclusion as criterion (4): to use the data set with the smallest statistical uncertainty.

Based on the four criteria, α_S is therefore extracted by combining the data points in the rapidity regions 0 < y* < 0.5 and 0.5 < y* < 1 for ∆φ_max = 7π/8. Extractions of α_S from the data points in other kinematic regions in y* and ∆φ_max are used to investigate the dependence of the final results on those choices.

Determination of α_S

The R_∆φ measurements in the selected kinematic regions are used to determine α_S and to test the QCD predictions for its running as a function of the scale Q = H_T/2. The α_S results are extracted using the approach of Ref. [64] to minimize the χ² function specified in Appendix C (a schematic sketch of such a fit is given at the end of this section). In this approach, the experimental and theoretical uncertainties that are correlated between all data points are treated in the Hessian method [65], by including a nuisance parameter for each uncertainty source, as described in Appendix C. The only exceptions are the uncertainties due to the PDF set and the µ_R, µ_F dependence of the pQCD calculation. These uncertainties are determined from the variations of the α_S results when the α_S extractions are repeated for different PDF sets and for variations of the scales µ_R and µ_F, as described in Section 3.

Results for α_S(Q) (with Q = H_T/2, taken at the arithmetic centers of the H_T bins) are determined from the R_∆φ data for ∆φ_max = 7π/8, combining the data points in the two y* regions 0 < y* < 0.5 and 0.5 < y* < 1.0. Nine α_S(Q) values are determined in the range 262 < Q ≤ 1675 GeV. A single χ² minimization provides the uncertainties due to the statistical uncertainties, the experimental correlated uncertainties, the uncertainties due to the non-perturbative corrections, and the MMHT2014 PDF uncertainty. Separate χ² minimizations are made for variations of µ_R and µ_F (in the ranges described in Section 3), and also for the CT14, NNPDF 2.3, ABMP16, and HERAPDF 2.0 PDF sets. The largest individual variations are used to quantify the uncertainty due to the scale dependence and the PDF set, respectively. The so-defined PDF set uncertainty may partially double count some of the uncertainties already taken into account by the MMHT2014 PDF uncertainties, but it may also include additional systematic uncertainties due to the different approaches used in the PDF determinations. The α_S(Q) results are displayed in Figure 3 and listed in Table 4.
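A schematic, heavily simplified version of such a χ² fit with nuisance parameters: linearized uncertainty interpolation is used instead of the quadratic form of Appendix C, and all inputs below are toy values.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, d, sigma, t_of_alpha, delta_exp, delta_th):
    """Hessian-style chi^2: params = [alpha_s, eps_1..eps_J, lam_1..lam_K].
    delta_exp[i][j], delta_th[i][k]: relative one-sigma shifts (linearized).
    Penalty terms eps@eps and lam@lam constrain the nuisance parameters."""
    n_j = delta_exp.shape[1]
    alpha, eps, lam = params[0], params[1:1 + n_j], params[1 + n_j:]
    t = t_of_alpha(alpha) * (1.0 + delta_th @ lam)  # shifted theory
    d_shift = d * (1.0 + delta_exp @ eps)           # shifted data
    return np.sum(((d_shift - t) / sigma) ** 2) + eps @ eps + lam @ lam

# Toy inputs: 2 bins, 1 experimental and 1 theoretical correlated source.
d = np.array([0.30, 0.20]); sigma = np.array([0.01, 0.01])
delta_exp = np.array([[0.02], [0.02]]); delta_th = np.array([[0.01], [0.01]])
t_of_alpha = lambda a: np.array([2.5, 1.7]) * a  # toy linear theory model
res = minimize(chi2, x0=[0.118, 0.0, 0.0],
               args=(d, sigma, t_of_alpha, delta_exp, delta_th))
print(res.x[0])  # fitted alpha_s
```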
In addition, assuming the validity of the RGE, all 18 data points in 0 < y* < 0.5 and 0.5 < y* < 1.0 for ∆φ_max = 7π/8 are used to extract a combined α_S(m_Z) result. The combined fit (for MMHT2014 PDFs at the default scale) gives χ² = 21.7 for 17 degrees of freedom and a result of α_S(m_Z) = 0.1127 (the uncertainties are detailed in Table 5). The fit is then repeated for the CT14, NNPDF 2.3, ABMP16, and HERAPDF 2.0 PDF sets, for which the α_S(m_Z) results differ by +0.0001, +0.0022, +0.0026, and +0.0029, respectively. Fits for various choices of µ_R and µ_F result in variations of the α_S(m_Z) results between −0.0019 and +0.0052.

Further dependence of the α_S results on some of the analysis choices is investigated in a series of systematic studies.

• Changing the ∆φ_max requirement
Based on the criteria outlined in Section 7, it was decided to use the data for ∆φ_max = 7π/8 in the α_S analysis. If, instead, the data with ∆φ_max = 5π/6 are used, the α_S(m_Z) result changes by +0.0052 to α_S(m_Z) = 0.1179, with an uncertainty of +0.0065 and −0.0045 due to the scale dependence.

• Extending the y* region
For the central α_S results, the data points with 1 < y* < 2 are excluded. If α_S(m_Z) is determined only from the data points for 1 < y* < 2 (with ∆φ_max = 7π/8), the α_S(m_Z) result changes by −0.0018, with an increased scale dependence, to α_S(m_Z) = 0.1109 +0.0071 −0.0031 with χ² = 13.8 for seven degrees of freedom. If the data points for 1 < y* < 2 are combined with those for 0 < y* < 0.5 and 0.5 < y* < 1, the result is α_S(m_Z) = 0.1135 +0.0051 −0.0025.

• Smoothing the systematic uncertainties
In the experimental measurement, the systematic uncertainties that are correlated between different data points were smoothed in order to avoid double counting of statistical fluctuations. For this purpose, the systematic uncertainties were fitted with a linear function in log(H_T/GeV). If, alternatively, a quadratic function is used, the central α_S(m_Z) result changes by −0.0006, and the experimental uncertainty changes from +0.0018 −0.0017 to +0.0017 −0.0016.

• Stronger correlations of experimental uncertainties
The largest experimental uncertainties are due to the jet energy calibration. These are represented by contributions from 57 independent sources. Some of the correlations are estimated on the basis of prior assumptions. In a study of the systematic effects, these assumptions are varied, resulting in an alternative scenario with stronger correlations between some of these sources. This changes the combined α_S(m_Z) result by −0.0004, while the experimental correlated uncertainty is reduced from +0.0018 −0.0017 to +0.0012 −0.0013.

• Treatment of non-perturbative corrections
The central α_S results are obtained using the average values of the non-perturbative corrections from Pythia tunes AMBT1 and DW, and the spread between the average and the individual models is taken as a correlated uncertainty, which is treated in the Hessian approach by fitting a corresponding nuisance parameter. Alternatively, the α_S(m_Z) result is also extracted by fixing the values of the non-perturbative corrections to the individual model predictions from Herwig (default) and Pythia with tunes AMBT1, DW, S-Global, and A, and to unity (corresponding to zero non-perturbative corrections). The corresponding changes of the α_S(m_Z) result for the different choices are between −0.0004 and +0.0011.
• Choice of n_f
The choice of n_f = 6 corresponds to the rather extreme approximation in which the top quark is included as a massless quark in the pQCD calculation. The effect of using n_f = 6 instead of n_f = 5 in the pQCD matrix elements and the RGE, and the corresponding impact on R_∆φ, are discussed in Appendix A. The effects on the extracted α_S results are also studied and are found to be between +1.3% (at low H_T) and −1.1% (at high H_T) for the nine α_S(Q) results. The combined α_S(m_Z) result changes by −0.0006, from 0.1127 (for n_f = 5) to 0.1121 (for n_f = 6); a numerical sketch of such flavor-threshold running is given at the end of this section.

• A scan of the renormalization scale dependence
Unlike all other uncertainties, which are treated in the Hessian approach, the uncertainty due to the renormalization and factorization scale dependence is obtained from individual fits in which both scales are set to fixed values. To ensure that the largest variation does not occur at intermediate values, a scan of the renormalization scale dependence in finer steps is made. For each of the three variations of µ_F by factors of x_µF = 0.5, 1.0, 2.0, the renormalization scale is varied by nine logarithmically equally spaced factors of x_µR = 0.5, 0.596, 0.708, 0.841, 1.0, 1.189, 1.413, 1.679, and 2.0. It is seen that the largest upward variation (of +0.0052) is obtained for the correlated variation x_µR = x_µF = 2.0. The lowest variation (of −0.0027) is obtained for the anti-correlated variation x_µR = 0.5 and x_µF = 2.0, which is, however, outside the range 0.5 ≤ x_µR/x_µF ≤ 2. The lowest variation within this range (−0.0014) is obtained for x_µR = 0.5 and x_µF = 1.0.

• Effects of the Hessian method
In the Hessian approach, a fit can explore the multi-dimensional uncertainty space to find the χ² minimum at values of the nuisance parameters, associated with the sources of systematic uncertainties, that do not represent the best knowledge of the corresponding sources. While in this analysis the shifts of the nuisance parameters are all small, it is still interesting to study their effects on the α_S fit results. Therefore, the α_S(m_Z) extraction is repeated, initially including only the uncorrelated (i.e. statistical) uncertainties. Then, step by step, the experimental correlated uncertainties, the uncertainties of the non-perturbative corrections, and the PDF uncertainties are included. These fits produce α_S(m_Z) results that differ by less than ±0.0004 from the central result.

These systematic studies show that the α_S results are rather independent of the analysis choices and demonstrate the stability of the α_S extraction procedure. These variations are not treated as additional uncertainties because their resulting effects are smaller than the other theoretical uncertainties. The largest variation of the α_S(m_Z) result, by +0.0052, is obtained when using the data with ∆φ_max = 5π/6 instead of ∆φ_max = 7π/8. This difference may be due to different higher-order corrections to the NLO pQCD results for the different ∆φ_max values. This assumption is consistent with the observed scale dependence of the α_S(m_Z) results, within which the results for both choices of ∆φ_max agree (0.1127 + 0.0052 versus 0.1179 − 0.0045 for ∆φ_max = 7π/8 and 5π/6, respectively). It is therefore concluded from the systematic studies that no further uncertainties need to be assigned.
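A minimal numerical sketch of the two-loop running with an n_f = 5 → 6 switch at the top-quark mass, as discussed in the n_f bullet above. Simple Euler integration is used, and the matching is taken as trivial continuity of α_S across the threshold, a simplification of the 1-loop matching of Appendix A.

```python
import numpy as np

def beta(alpha, nf):
    """Two-loop QCD beta function for d(alpha)/d ln(mu^2)."""
    b0 = (33 - 2 * nf) / (12 * np.pi)
    b1 = (153 - 19 * nf) / (24 * np.pi ** 2)
    return -alpha ** 2 * (b0 + b1 * alpha)

def run_alpha_s(alpha_mz, q, mz=91.1876, mtop=173.21, n_steps=20000):
    """Evolve alpha_s upward from mz to q, switching nf = 5 -> 6 at the
    top-quark pole mass (alpha_s taken continuous at the threshold)."""
    ts = np.linspace(np.log(mz ** 2), np.log(q ** 2), n_steps)
    alpha = alpha_mz
    for i in range(len(ts) - 1):
        mu = np.exp(0.5 * ts[i])
        nf = 6 if mu > mtop else 5
        alpha += beta(alpha, nf) * (ts[i + 1] - ts[i])  # Euler step in ln(mu^2)
    return alpha

print(run_alpha_s(0.1127, 1675.0))  # alpha_s at the highest Q of the analysis
```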
The final result from the combined fit is α_S(m_Z) = 0.1127 +0.0063 −0.0027, with the individual uncertainty contributions given in Table 5. This result and the corresponding RGE prediction are also shown in Figure 3. For all α_S results in Tables 4 and 5, the uncertainties are dominated by the µ_R dependence of the NLO pQCD calculation.

The individual α_S(Q) results are compared in Figure 4 with previously published α_S results obtained from jet measurements [4-14] and with the RGE prediction for the combined α_S(m_Z) result obtained in this analysis. The new results agree with previous α_S(Q) results in the region of overlap, and extend the pQCD tests to momentum transfers up to 1.6 TeV, where the RGE predictions are consistent with the α_S(Q) results, as discussed in Appendix E.

Summary

The multi-jet cross-section ratio R_∆φ is measured at the LHC. The quantity R_∆φ specifies the fraction of inclusive dijet events in which the azimuthal opening angle of the two jets with the highest transverse momenta is less than a given value of the parameter ∆φ_max. The R_∆φ results, measured in 20.2 fb⁻¹ of pp collisions at √s = 8 TeV with the ATLAS detector, are presented as a function of three variables: the total transverse momentum H_T, the dijet rapidity interval y*, and the parameter ∆φ_max. The H_T and y* dependences of the data are well described by theoretical predictions based on NLO pQCD (for ∆φ_max = 7π/8, 5π/6, and 3π/4), or LO pQCD (for ∆φ_max = 2π/3), with corrections for non-perturbative effects. Based on the data points for ∆φ_max = 7π/8 with 0 < y* < 0.5 and 0.5 < y* < 1, nine α_S results are determined, at a scale of Q = H_T/2, over the range 262 < Q < 1675 GeV. The α_S(Q) results are consistent with the predictions of the RGE, and a combined analysis results in a value of α_S(m_Z) = 0.1127 +0.0063 −0.0027.

Acknowledgements

The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide, and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [66].

Appendix A: Effects of top quark contributions on the pQCD predictions

There are two ways in which contributions from top quarks affect the pQCD predictions for R_∆φ. Firstly, the pQCD predictions based on matrix elements for massless quarks also depend on the number of quark flavors in gluon splitting (g → qq̄), n_f, which affects the tree-level matrix elements and their real and virtual corrections, as well as the RGE predictions. The pQCD predictions for the central analysis are obtained for n_f = 5. The effects on the measured quantity R_∆φ for the choice n_f = 6 are computed in this appendix. Secondly, since the decay products of hadronically decaying (anti-)top quarks are sometimes reconstructed as multiple jets, the O(α_S²) tt̄ production process also contributes to three-jet topologies. Since this contribution is of lower order in α_S than the pQCD O(α_S³) three-jet production processes, it is a "super-leading" contribution, which is formally more important. This potentially large contribution and the corresponding effects on R_∆φ are also estimated in this appendix.
In a pQCD calculation in which quark masses are properly taken into account, the contributions from the massive top quark arise naturally at higher momentum transfers, according to the available phase space. In calculations based on matrix elements for massless quarks, n_f is a parameter of the calculation. For jet production at the LHC, the alternatives are n_f = 5, i.e. ignoring the contributions from g → tt̄ processes (which is the central choice for this analysis), or n_f = 6, i.e. treating the top quark as a sixth massless quark. The relative difference between the two alternatives is evaluated from the effects due to the RGE and the matrix elements. For this purpose, the 2-loop solution of the RGE for n_f = 5 is replaced by the 2-loop solutions for n_f = 5 and n_f = 6 with 1-loop matching [67] at the pole mass of the top quark, m_top^pole, assuming that m_top^pole is equal to the world average of the measured "Monte Carlo mass" of 173.21 GeV [15]. In addition, the matrix elements are recomputed for n_f = 6. For a fixed value of α_S(m_Z) = 0.118, the corresponding effects on the pQCD predictions for R_∆φ are in the range of −1% to +2%.

The effects on R_∆φ due to the contributions from hadronic decays of tt̄ final states are estimated using Powheg-Box [68] (for the pQCD matrix elements) interfaced with Pythia (for the parton shower, underlying event, and hadronization) and CTEQ6L1 PDFs [69]. It is seen that the tt̄ process contributes 0.003-0.2% to the denominator of R_∆φ (the inclusive dijet cross section), and 0.006-0.5% to the numerator (with ∆φ_max = 7π/8). The effects on the ratio R_∆φ are 0-0.5% in the analysis phase space, and there are no systematic trends in the considered distributions within the statistical uncertainties of the generated Powheg-Box event sample. Since this effect is about four to eight times smaller than the typical uncertainty due to the renormalization scale dependence, the corresponding effects on α_S are not investigated further (a one-line estimate of the effect on the ratio is sketched below).
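The quoted tt̄ contaminations translate into an effect on the ratio that can be checked with one line of arithmetic; the numbers below are the extremes quoted above.

```python
def ttbar_effect_on_ratio(frac_num, frac_den):
    """Relative shift of R_dphi when the numerator and denominator receive
    small additive ttbar contributions frac_num and frac_den, respectively."""
    return (1.0 + frac_num) / (1.0 + frac_den) - 1.0

print(ttbar_effect_on_ratio(0.005, 0.002))      # ~0.3%, within the quoted 0-0.5%
print(ttbar_effect_on_ratio(0.00006, 0.00003))  # essentially zero at low HT
```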
Appendix C: Definition of χ²

Given is a set of experimental measurement results in bins i of a given quantity, with central measurement results d_i and statistical and uncorrelated systematic uncertainties σ_i,stat and σ_i,uncorr, respectively. The experimental measurements are affected by various sources of correlated uncertainties, and δ_ij(ε_j) specifies the relative uncertainty of measurement i due to source j, where ε_j is a Gaussian-distributed random variable with zero expectation value and unit width. The δ_ij(ε_j) specify the dependence of the measured result i on the variation of the correlated uncertainty source j by ε_j standard deviations, where ε_j = 0 corresponds to the central value of the measurement (i.e. δ_ij(ε_j = 0) = 0), while the relative uncertainties corresponding to plus/minus one standard deviation are given by δ_ij(ε_j = ±1) = ∆d_ij^±. From the central measurement result and the relative uncertainties ∆d_ij^±, the continuous ε_j dependence of δ_ij(ε_j) can be obtained using quadratic interpolation; the form below is fixed by the three constraints just stated:

δ_ij(ε_j) = ε_j (∆d_ij^+ − ∆d_ij^−)/2 + ε_j² (∆d_ij^+ + ∆d_ij^−)/2 .

The theoretical prediction t_i(α_S) for bin i depends on the value of α_S. Furthermore, the theoretical predictions are also affected by sources of correlated uncertainties; δ_ik(λ_k) specifies the relative uncertainty of t_i due to source k. Like the ε_j, the λ_k are also treated as Gaussian-distributed random variables with zero expectation value and unit width. It is assumed that the theoretical predictions can be obtained with statistical uncertainties that are negligible compared to the statistical uncertainties of the measurements. The continuous dependence of the relative uncertainty δ_ik(λ_k) can be obtained through quadratic interpolation between the central result t_i and the results t_ik^± obtained by variations corresponding to plus/minus one standard deviation due to source k, in analogy to the expression above.

The χ² used in the α_S extraction is then computed, in the standard Hessian form implied by the definitions above, as

χ² = Σ_i [ d_i (1 − Σ_j δ_ij(ε_j)) − t_i(α_S) (1 + Σ_k δ_ik(λ_k)) ]² / (σ_i,stat² + σ_i,uncorr²) + Σ_j ε_j² + Σ_k λ_k² ,

where i runs over all data points, j runs over all sources of experimental correlated uncertainties, and k over all theoretical correlated uncertainties. The fit result for α_S is determined by minimizing χ² with respect to α_S and the "nuisance parameters" ε_j and λ_k.

Appendix D: On the compatibility of the R_∆φ data and the world average of α_S(m_Z)

The α_S(m_Z) result in Table 5 is lower than the world average value by approximately one standard deviation. In this appendix, the consistency of the world average of α_S(m_Z) and the R_∆φ data is investigated using the χ² values. The χ² values are computed according to Appendix C, using the 18 data points with ∆φ_max = 7π/8 and 0.0 < y* < 0.5 and 0.5 < y* < 1.0. The theoretical predictions are computed for the fixed value α_S(m_Z) = 0.1181. The computation of χ² uses the Hessian method for the treatment of all uncertainties except for the PDF set uncertainty and the scale dependence, so the χ² values do not reflect these theoretical uncertainties. Therefore, a series of χ² values is computed for possible combinations of variations of µ_R and µ_F around the central choice µ_R = µ_F = µ_0 = H_T/2. The results are displayed in Table 10 and compared to the χ² values obtained when α_S(m_Z) is a free fit parameter.
When α_S(m_Z) is fixed to the world average, the χ² value for the central scale choice is slightly higher than the one obtained for a free α_S(m_Z), and also higher than the expectation of χ² = N_dof ± √(2 N_dof), where N_dof = 18 when α_S(m_Z) is fixed or 17 when it is a free fit parameter. However, the χ² definition does not take into account the theoretical uncertainty due to the scale dependence. When the renormalization scale is increased by a factor of two, to µ_R = 2µ_0, lower χ² values are obtained, which are similar in size to the ones obtained for a free α_S(m_Z) and close to the expectation (the dependence on the factorization scale is rather small). Since these χ² values are well within the range of the expectation, it is concluded that, within their uncertainties, the theoretical predictions for the world average value of α_S(m_Z) are consistent with the R_∆φ data.

Appendix E: On the compatibility of the RGE and the slope of the α_S(Q) results

It is natural to ask whether the observed Q dependence (i.e. the running) of the α_S(Q) results shown in Figure 3 is described by the RGE or instead exhibits significant deviations at the highest Q values, possibly indicating signals of physics beyond the Standard Model. The consistency of the RGE predictions with the observed slope is investigated in this appendix. The RGE prediction would be in agreement with the observed Q dependence of the α_S(Q) results if the latter, when evolved to m_Z, give α_S(m_Z) values that are independent of Q. For this purpose, a linear function in log₁₀(Q/GeV), f(Q) = c + m · log₁₀(Q/GeV), is fitted to the nine α_S(m_Z) points in Figure 3 (bottom) and their statistical uncertainties (a sketch of such a weighted fit is given after this appendix). Here the correlated systematic uncertainties are not taken into account, as their correlations are non-trivial: the individual α_S(Q) results are obtained in separate fits, with different optimizations of the nuisance parameters. The fit results for the slope parameter m and its uncertainty are displayed in Table 11 for a fit to the α_S(m_Z) points at all nine Q values, and also for fits to different subsets of the α_S(m_Z) points, omitting points either at lower or higher Q. As documented in Table 11, a fit to all nine α_S(m_Z) points gives a slope that differs from zero by more than its uncertainty. Fits to groups of data points, however, show that the significance of this slope arises from the two points at lowest Q. Omitting the α_S(m_Z) point at lowest Q (fitting points #2-9), or the two points at lowest Q (fitting points #3-9), both give fit results for which the slope parameter is more consistent with zero, while the α_S(m_Z) results change by less than ±0.0001. On the other hand, omitting the α_S(Q) points at highest Q (fitting points #1-8 or #1-7) does not affect the significance of the slope. It is therefore concluded that the high-Q behavior of the α_S(Q) results is consistent with the RGE and that the small differences at lowest Q do not affect the combined α_S(m_Z) result.
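A sketch of the weighted straight-line fit of Appendix E; the inputs below are toy values, and numpy's polyfit is used with weights 1/σ.

```python
import numpy as np

def slope_test(q_values, alpha_mz_points, stat_unc):
    """Fit f(Q) = c + m*log10(Q/GeV) to the evolved alpha_s(mZ) points,
    weighting by 1/sigma; a slope m consistent with zero means the
    observed running is compatible with the RGE."""
    x = np.log10(np.asarray(q_values))
    w = 1.0 / np.asarray(stat_unc)
    coeffs, cov = np.polyfit(x, alpha_mz_points, 1, w=w, cov="unscaled")
    m, c = coeffs
    return m, np.sqrt(cov[0, 0])  # slope and its uncertainty

# Toy example with a flat trend: the fitted slope should be ~0.
q = [262, 350, 470, 640, 870, 1100, 1300, 1500, 1675]
a = [0.1127] * 9
print(slope_test(q, a, [0.002] * 9))
```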
Figure 1: The measurement of R_∆φ(H_T, y*, ∆φ_max) as a function of H_T in three regions of y* and for four choices of ∆φ_max. The inner error bars indicate the statistical uncertainties, and the sum in quadrature of statistical and systematic uncertainties is displayed by the total error bars. The theoretical predictions, based on pQCD at NLO (for ∆φ_max = 7π/8, 5π/6, and 3π/4) and LO (for ∆φ_max = 2π/3), are shown as solid and dashed lines, respectively. The shaded bands display the PDF uncertainties and the scale dependence, added in quadrature.

Figure 2: The ratios of the R_∆φ measurements and the theoretical predictions obtained for MMHT2014 PDFs and α_S(m_Z) = 0.118. The ratios are shown as a function of H_T, in different regions of y* (columns) and for different ∆φ_max (rows). The inner error bars indicate the statistical uncertainties, and the sum in quadrature of statistical and systematic uncertainties is displayed by the total error bars. The theoretical uncertainty is the sum in quadrature of the uncertainties due to the PDFs and the scale dependence. The inverse of the NLO K-factor is indicated by the dashed line.

Figure 3: The α_S results determined from the R_∆φ data for ∆φ_max = 7π/8 in the y* regions 0 < y* < 0.5 and 0.5 < y* < 1.0, in the range 262 < Q < 1675 GeV. The inner error bars indicate the experimental uncertainties, and the sum in quadrature of experimental and theoretical uncertainties is displayed by the total error bars. The α_S(Q) results (top) are displayed together with the prediction of the RGE for the α_S(m_Z) result obtained in this analysis. The individual α_S(Q) values are then evolved to Q = m_Z (bottom).

Table 1: The values of the parameters and the requirements that define the analysis phase space for the inclusive dijet event sample.
Table 3: The triggers used to select the multi-jet events in the different H_T ranges in the offline analysis, and the corresponding integrated luminosities.
Table 6: The R_∆φ measurement results for ∆φ_max = 7π/8 with their relative statistical and systematic uncertainties.
Table 7: The R_∆φ measurement results for ∆φ_max = 5π/6 with their relative statistical and systematic uncertainties.
Table 8: The R_∆φ measurement results for ∆φ_max = 3π/4 with their relative statistical and systematic uncertainties.
Table 9: The R_∆φ measurement results for ∆φ_max = 2π/3 with their relative statistical and systematic uncertainties.
Table 10: The χ² values between the 18 data points and the theoretical predictions when α_S(m_Z) is fixed to the world average value of α_S(m_Z) = 0.1181 (third column) and when it is a free fitted parameter (fourth column), for variations of the scales µ_R and µ_F around the central choice µ_R = µ_F = µ_0 = H_T/2.
Table 11: Fit of a linear function in log₁₀(Q/GeV) to the nine extracted α_S(Q) results with their statistical uncertainties.

[17] D0 Collaboration, Measurement of dijet azimuthal decorrelations at central rapidities in pp̄ collisions at √s = 1.96 TeV, Phys. Rev.
Lett. 94 (2005) 221801, arXiv: hep-ex/0409040.
[18] CMS Collaboration, Dijet Azimuthal Decorrelations in pp Collisions at √s = 7 TeV, Phys. Rev. Lett. 106 (2011) 122003, arXiv: 1101.5029.
2018-05-12T09:53:45.000Z
2018-05-12T00:00:00.000
{ "year": 2018, "sha1": "5911f666f2e29222ee5f29f299997d56693ba86a", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.98.092004", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "5911f666f2e29222ee5f29f299997d56693ba86a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
29208
pes2o/s2orc
v3-fos-license
MiR-103a-3p targets the 5′ UTR of GPRC5A in pancreatic cells

There has been incremental evidence of miRNA binding sites not being restricted to the 3′ UTR of mRNAs, but being consistently present in coding regions and, in fewer examples, in 5′ UTR regions. In this manuscript, the authors use a predictive algorithm to identify two putative miR-103a-3p binding sites in the 5′ UTR of the GPRC5A mRNA. The target mRNA is of interest in cancer progression, acting as either a tumor suppressor or an oncogene in different cancers. They clearly demonstrate that ectopic expression of this miRNA leads to a decrease in the levels of GPRC5A mRNA and protein in pancreatic cells. Interestingly, the authors show that miR-103a-3p targeting of the 5′ UTR, but not the CDS, of GPRC5A results in decreased levels of the corresponding protein.

INTRODUCTION

MiRNAs comprise a group of short noncoding RNAs that post-transcriptionally regulate gene expression in multicellular organisms in a sequence-dependent manner (Bartel 2004). The "seed" region of a miRNA, defined as the sequence spanning bases 2 through 7 inclusive from the 5′ end of the miRNA, determines a miRNA's spectrum of targets (Miranda et al. 2006; Bartel 2009; Rigoutsos and Tsirigos 2010; Xia et al. 2012). So far, more than 17,000 mature miRNA sequences from 140 different species have been identified (Kozomara and Griffiths-Jones 2011). With regard to target cardinality, a single miRNA can simultaneously target multiple mRNAs, thus decreasing, to varying degrees, the abundance of the corresponding proteins (Miranda et al. 2006; Baek et al. 2008; Selbach et al. 2008).

MiRNA research began more than 20 years ago (Lee et al. 1993; Wightman et al. 1993; Hamilton and Baulcombe 1999; Reinhart et al. 2000) and efforts since then have revealed that the identification of miRNA targets is an inherently difficult problem (Rigoutsos and Tsirigos 2010). Nonetheless, the field has made great advances during this time, and numerous miRNA targets have been described in the literature to date, with the majority of these targets being located in the 3′ UTR of the targeted mRNAs (Bartel 2009). In recent years, others and we have shown that miRNAs can also target mRNAs within their CDS and decrease the corresponding protein's abundance (Duursma et al. 2008; Forman et al. 2008; Lal et al. 2008; Shen et al. 2008; Tay et al. 2008; Rigoutsos 2009; Brest et al. 2011; Hao et al. 2011; Nelson et al. 2011; Sauna and Kimchi-Sarfaty 2011; Gartner et al. 2013; Hausser et al. 2013; Radhakrishnan et al. 2013; Shabalina et al. 2013). In contrast, identifying targets in the 5′ UTR of mRNAs has proven more difficult. In early work, use of artificial constructs containing multiple copies of known miRNA targets showed that, from a mechanistic standpoint, miRNAs can repress mRNAs through 5′ UTR binding just as efficiently as through 3′ UTR binding (Lytle et al. 2007; Devlin et al. 2010; Moretti et al. 2010). For naturally occurring targets, two subsequent studies reported examples whereby 5′ UTR targeting by the miRNA did not down-regulate the mRNA but instead enhanced protein translation and increased protein levels (Henke et al. 2008; Orom et al. 2008; Tsai et al. 2009; Da Sacco and Masotti 2012). Four subsequent reports described a few examples of 5′ UTR binding sites that led to the down-regulation of the targeted transcript (Jopling et al. 2005; Lee et al. 2009; Grey et al. 2010; Dewing et al. 2012). More recently, a C. elegans study discussed the possibility of a miRNA target in the 5′ UTR of CBP-1's mRNA (Vora et al.
2013). Below, we report on our validation of two human miR-103a-3p targets in the 5′ UTR of the human GPRC5A gene (ENSG00000013588/ENST00000014914). GPRC5A encodes an orphan G-protein-coupled receptor that was originally reported to be overexpressed in normal lung tissue and underexpressed in lung cancer; since then, GPRC5A's dysregulation has been associated with multiple cancer types: in some cancers, GPRC5A can act as a tumor suppressor, whereas in others it can act as an oncogene (Tao et al. 2007; Acquafreda et al. 2009; Cheng et al. 2012). MiR-103a-3p is a notable miRNA in that it is evolutionarily conserved and involved in regulating multiple cellular processes such as cell division, cellular metabolism and stress, angiogenesis, etc. (Finnerty et al. 2010). MiR-103a-3p's dysregulation has been associated with many human diseases, including several cancers, Alzheimer's disease, and diabetes (Martello et al. 2010; Yao et al. 2010; Trajkovski et al. 2011).

RESULTS

We studied the interactions of miR-103a-3p and GPRC5A, both of which are endogenous to pancreatic cell lines and tissue (both normal and cancer). We focused on two candidate miR-103a-3p targets in the 5′ UTR of GPRC5A. The first putative miR-103a-3p MRE (site S11) is located between nucleotides 117 and 140 inclusive, whereas the second putative MRE (site S12) is located between nucleotides 330 and 353 inclusive (Fig. 1A).

Overexpression of the 5′ UTR MRE can increase GPRC5A mRNA and protein levels

To further corroborate the targeting of GPRC5A's 5′ UTR by miR-103a-3p, we made use of the concept of "sponging" or "decoying" (Ebert and Sharp 2010a,b; Poliseno et al. 2010; Tay et al. 2011), which was recently demonstrated to be able to induce observable functional effects (Tay et al. 2011; Ala et al. 2013).

FIGURE 2. MiR-103a-3p directly targets two sites in the 5′ UTR of GPRC5A. (A) The scheme indicates the sequences of the predicted miR-103a-3p binding site (S11) within the 5′ UTR of GPRC5A and the sequences of S11 wild-type (WT, top) and mutant (MT, bottom) used in this study. (B) Luciferase activity in MIA PaCa-2 cells upon transfection of the indicated reporter constructs and pre-miR-103a-3p was compared with cells transfected with the indicated reporter constructs and pre-miR-scramble. (C) Luciferase activity in MIA PaCa-2 cells upon transfection of the indicated reporter constructs and miR-103a-3p inhibitors was compared with cells transfected with the indicated reporter constructs and Anti-miR-scramble. (D) Luciferase activity in MIA PaCa-2 cells upon transfection of the indicated reporter constructs and pre-miR-103a-3p was compared with cells transfected with the indicated reporter constructs and pre-miR-scramble. (E) Luciferase activity in MIA PaCa-2 cells upon transfection of the indicated reporter constructs and miR-103a-3p inhibitors was compared with cells transfected with the indicated reporter constructs and Anti-miR-scramble. All numerical data are mean ± SD. (*) P < 0.05; (**) P < 0.01; (***) P < 0.001; n = 3. S11WT, psiCHECK-2 vector containing miR-103a-3p binding site 1; S11MT, psiCHECK-2 vector containing mutant miR-103a-3p binding site 1.

We focused on the first of the two 5′ UTR MREs (i.e., site S11), which was more responsive to miR-103a-3p/anti-miR-103a-3p treatment than site S12, and assessed the ability of a sponge comprising 10 tandem copies of the S11 MRE to act as a decoy for GPRC5A. The sponge vector was labeled GPRC5A-S11WTL (Supplemental Fig.
4A,B), whereas the control sponge vector, which contained 10 tandem copies of the mutant miR-103a-3p MRE, was labeled GPRC5A-S11MTL (Supplemental Fig. 4C). First, we verified that transfection of MIA PaCa-2 cells with the true sponge GPRC5A-S11WTL reduced the expression level of the endogenous mature miR-103a-3p (by 19% ± 2.6%; P < 0.001) compared with transfection with the control sponge GPRC5A-S11MTL (Fig. 4A). More importantly, transfection with the true sponge GPRC5A-S11WTL up-regulated GPRC5A mRNA (data not shown) and, to a larger extent, GPRC5A protein (Fig. 4B,C) compared with transfection with the control GPRC5A-S11MTL. We were able to recapitulate the same observations in HPNE, a second pancreas cell line: in HPNE cells, transfection with the GPRC5A-S11WTL sponge up-regulated GPRC5A mRNA (61.6%; P < 0.001) and GPRC5A protein (56%; P < 0.01) compared with transfection with the control sponge GPRC5A-S11MTL (Supplemental Fig. 5A-C). We also tested a luciferase reporter construct containing the second (S12WT) 5′ UTR binding site of miR-103a-3p and found that a single copy of it suffices to up-regulate GPRC5A mRNA (23%; P < 0.001); see Supplemental Figure 5D.

We repeated the above experiments, this time cotransfecting MIA PaCa-2 and HPNE cells with Pre-miR-103a-3p in addition to cotransfecting with GPRC5A-S11WTL or GPRC5A-S11MTL. The level of miR-103a-3p decreased in MIA PaCa-2 cells that were cotransfected with GPRC5A-S11WTL by 43% ± 2.1% compared with cells cotransfected with the control GPRC5A-S11MTL (P < 0.001) (Fig. 4D). This translated to an increase of both GPRC5A mRNA (P < 0.001) and protein levels in MIA PaCa-2 and HPNE cells that were cotransfected with Pre-miR-103a-3p and GPRC5A-S11WTL compared with MIA PaCa-2 and HPNE cells cotransfected with Pre-miR-103a-3p and the control GPRC5A-S11MTL (Fig. 4E-G). Lastly, we tested a third pancreas cell line (Panc-1) as well as a nonpancreas one (HEK-293T) and were able to recapitulate the above findings in both (Supplemental Fig. 7A-C). In particular, we transfected HEK-293T cells with GPRC5A-S11WTL and the control sponge GPRC5A-S11MTL: we found that, just as in the HPNE and MIA PaCa-2 cell lines, GPRC5A-S11WTL up-regulated GPRC5A protein compared with the control, albeit somewhat modestly. These experiments provide additional evidence that miR-103a-3p regulates GPRC5A by directly interacting with the latter's 5′ UTR. Moreover, they demonstrate that GPRC5A's 5′ UTR can potentially function as a decoy for other miR-103a-3p targets.

DISCUSSION

The potential of miRNA regulation of mRNAs through binding sites that occur in 5′ UTRs was demonstrated early on (Lytle et al. 2007; Moretti et al. 2010). However, only a few validated examples of naturally occurring 5′ UTR miRNA targets exist in the literature to date (Jopling et al. 2005; Lytle et al. 2007; Orom et al. 2008; Lee et al. 2009; Grey et al. 2010; Vora et al. 2013). For two of these few examples, the seed-driven constitutive miRNA interaction with the 5′ UTR of the targeted mRNA promoted protein translation and thus led to an increase (instead of a decrease) of protein levels (Orom et al. 2008; Tsai et al. 2009). The described work and findings represent one more data point in support of 5′ UTR targeting by endogenous miRNAs whereby the targeting reduced the abundance of both the mRNA and the corresponding protein.
In particular, using luciferase assays, we provided initial evidence that the putative MREs in the 5′ UTR of GPRC5A were in fact targeted by miR-103a-3p. Additionally, we designed two constructs, GPRC5A-5′UTR-CDS and GPRC5A-CDS, and demonstrated that GPRC5A-5′UTR-CDS, but not GPRC5A-CDS, responded to overexpression of miR-103a-3p, thereby further supporting the finding that the miR-103a-3p MREs were located in GPRC5A's 5′ UTR. By overexpressing a sponge that we constructed to contain 10 tandem copies of the most responsive (site S11) of the two 5′ UTR MREs, we were able to reduce the endogenous levels of miR-103a-3p and to up-regulate both GPRC5A mRNA and protein levels. We also demonstrated that the S11 5′ UTR MRE could function as a decoy of miR-103a-3p in vitro and was able to reduce miR-103a-3p levels and increase GPRC5A mRNA and protein levels. We established these findings in three pancreatic cell lines: the normal epithelial HPNE cell line and the MIA PaCa-2 and Panc-1 cancer cell lines.

FIGURE 4 (continued). (D) TaqMan miRNA assay was performed to test mature miR-103a-3p expression in MIA PaCa-2 cells cotransfected with pre-miR-103a-3p and S11WTL. Cells cotransfected with pre-miR-103a-3p and S11MTL were used as controls. (E) GPRC5A mRNA expression was tested by RT-PCR in MIA PaCa-2 cells treated with Pre-miR-103a-3p in addition to cotransfecting with GPRC5A-S11WTL or GPRC5A-S11MTL. (F) GPRC5A protein expression was tested by Western blots in MIA PaCa-2 cells treated with Pre-miR-103a-3p in addition to cotransfecting with GPRC5A-S11WTL or GPRC5A-S11MTL. (G) Quantification result of F. All numerical data are mean ± SD. (*) P < 0.05; (***) P < 0.001; n = 3. GAPDH, glyceraldehyde-3-phosphate dehydrogenase; GAPDH and Actin are internal controls. S11WTL, pcDNA vector containing 10 tandem copies of miR-103a-3p binding site 1; S11MTL, pcDNA vector containing 10 tandem copies of mutant miR-103a-3p binding site 1.

These findings have the following important ramification. MiR-103a-3p has been shown to play important roles in cellular processes such as DNA repair, metabolism, cell cycle progression, and cell differentiation (Liu et al. 2009; Yang et al. 2009; Finnerty et al. 2010; Liao and Lonnerdal 2010; Polster et al. 2010) and to be dysregulated in multiple diseases (e.g., cancers) and conditions (e.g., diabetes, Alzheimer's disease, etc.) (Roldo et al. 2006; Xie et al. 2009; Yao et al. 2010). To date only a few targets are known for miR-103a-3p. In light of our decoying finding, and given miR-103a-3p's involvement in so many settings, it follows that GPRC5A, through its 5′ UTR MRE for miR-103a-3p, could potentially regulate indirectly processes such as DNA repair, metabolism, the cell cycle, etc., by modulating the expression of other mRNAs, in complete analogy to what was recently shown for PTEN. When one considers that in some cancers GPRC5A has been shown to act as an oncogene, whereas in others as a tumor suppressor, the potential of GPRC5A to act through miR-103a-3p as a competing endogenous RNA, or ceRNA (Tay et al. 2011; Ala et al. 2013), for other protein-coding transcripts suggests that GPRC5A may be involved in previously unsuspected, currently uncharacterized, and presumably complex gene networks. Studying these possible roles of GPRC5A is currently the topic of ongoing research activity in our laboratory.

Cell culture

The HEK-293T, MIA PaCa-2, HPNE, and Panc-1 cell lines were obtained from the American Type Culture Collection.
All of these cells were grown in DMEM medium (Fisher Scientific) supplemented with 10% fetal bovine serum (Life Technologies), 1% penicillin and streptomycin (Fisher Scientific), and 1% glutamine (Fisher Scientific), at 37°C in a humidified atmosphere containing 5% CO₂.

Cell transfection

The cells were transfected with 50 nM Pre-miR-103a-3p or 50 nM Anti-miR-103a-3p (Ambion) by the reverse transfection method using the X-tremeGENE siRNA transfection reagent (Roche). Cells transfected with only a scrambled sequence, either Pre-miR-scramble or Anti-miR-scramble (Ambion), were examined in parallel as controls. Cells were then subjected to further assays or to RNA/protein extraction after 2 d. Lipofectamine 2000 (Life Technologies) was used for transfection of the psiCHECK-2 reporter vector (Promega) and the pcDNA-3.1 overexpression vector (Life Technologies), and for cotransfection of vectors and Pre-miRs.

RNA isolation and real-time quantitative polymerase chain reaction analysis

Total RNA was extracted using TRIzol reagent (Life Technologies). For the detection of GPRC5A mRNA, first-strand complementary DNA was synthesized from 1000 ng of total RNA in the presence of oligo-dT (12-18) primer (Promega) and MMLV reverse transcriptase, according to the manufacturer's instructions (Promega). Human glyceraldehyde 3-phosphate dehydrogenase RNA was amplified in parallel as an internal control. Real-time quantitative polymerase chain reaction (qPCR) was performed with SYBR Green PCR Master Mix (Life Technologies) and 20 ng of template using a StepOnePlus Real-Time PCR System (Life Technologies). For miR-103a-3p detection, the TaqMan MicroRNA Assay was performed with the miR-103a-3p probe (Life Technologies) following the manufacturer's instructions. Human U6 was used as the internal control. Eight nanograms of total RNA was used in the RT reaction with 5X RT primers. All primer sequences used for GPRC5A mRNA and miR-103a-3p detection are listed in Supplemental Table 1 (available online). PCRs were performed at 95°C for 5 min, followed by 40 cycles of 95°C for 15 sec and 60°C for 1 min. ΔCt was calculated by subtracting the Ct of U6 or glyceraldehyde 3-phosphate dehydrogenase mRNA from the Ct of the mRNA of interest. ΔΔCt was then calculated by subtracting the ΔCt of the negative control from the ΔCt of the sample. The fold change in mRNA or miRNA was calculated as 2^−ΔΔCt (a short sketch of this calculation is given after this section).

Computational prediction of putative targets

Using the rna22 algorithm that we published previously (Miranda et al. 2006), and that has been used by us and others to identify many miRNA targets beyond the 3′ UTR of genes (Duursma et al. 2008; Lal et al. 2008, 2009; Tay et al. 2008; Rigoutsos 2009; Marin-Muller et al. 2013), we identified two candidate targets for miR-103a-3p in GPRC5A's mRNA. In what follows, we use the terms "miRNA binding site" and "miRNA response element" (MRE) interchangeably.

DNA vectors

The coding region of the GPRC5A mRNA, with and without the 5′ UTR, was amplified by PCR from MIA PaCa-2 cDNA. The DNA sequence with 10 tandem repeats of the predicted miR-103a-3p binding site and the control DNA sequence with 10 tandem repeats of the seed-region-mutant miR-103a-3p binding site were synthesized as fragments (Life Technologies). The fragments were inserted into the pcDNA-3.1 vector between the NheI and NotI sites. The vectors were labeled GPRC5A-5′UTR-CDS, GPRC5A-CDS, GPRC5A-S11WTL, and GPRC5A-S11MTL, respectively.
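A minimal sketch of the 2^−ΔΔCt fold-change calculation described in the qPCR section above; the Ct values below are invented for illustration.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt method as described above: dCt = Ct(target) - Ct(reference,
    U6 or GAPDH); ddCt = dCt(sample) - dCt(negative control)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Example: the target amplifies one cycle later in the sample -> ~2-fold down.
print(fold_change(26.0, 18.0, 25.0, 18.0))  # 0.5
```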
Luciferase assay

Each psiCHECK-2 vector containing a reporter construct was cotransfected into HPNE and MIA PaCa-2 cells with Pre-miR-103a-3p or Anti-miR-103a-3p, using Lipofectamine 2000 according to the manufacturer's protocol for cotransfection of DNA and pre-miRs. In parallel, each psiCHECK-2 vector containing a reporter construct was also cotransfected into HPNE and MIA PaCa-2 cells with Pre-miR-scramble or Anti-miR-scramble as a control. Cells were harvested at 48 h after transfection, and the Renilla and Firefly luciferase activities in the cellular lysate were assayed using the Dual-Glo Luciferase Assay (Promega) according to the manufacturer's protocol. Light intensity for each sample was measured using a Synergy 2 Multi-Mode Microplate Reader (BioTek), and each Renilla luciferase value was normalized to the Firefly luciferase value (a minimal sketch of this normalization is given after this section).

Western blots

Transfected cells were lysed on ice in Pierce IP lysis buffer (Thermo Scientific) containing 1X complete protease inhibitor (Roche). Debris was pelleted by centrifugation at 13,200 rpm for 15 min, and protein concentrations were determined using the Pierce BCA assay (Thermo Scientific). Lysates were heat-denatured at 100°C for 10 min before separation in 10% sodium dodecyl sulfate-polyacrylamide gels and transfer to nitrocellulose membranes (GE Healthcare). Membranes were blocked with 5% bovine serum albumin (Sigma-Aldrich) in Tris-buffered saline Tween-20 buffer (10 mM Tris at pH 7.6, 150 mM NaCl, and 0.1% Tween-20) and probed with primary antibody in Tris-buffered saline Tween-20 with 5% bovine serum albumin at the recommended dilutions at 4°C. Primary antibodies included GPRC5A antibody (Sigma-Aldrich), β-actin antibody (Cell Signaling Technology), and GFP antibody (Santa Cruz Biotechnology Inc.). Membranes were incubated with secondary antibody (Cell Signaling Technology) diluted in Tris-buffered saline Tween-20 with 5% bovine serum albumin for 1 h at room temperature. The signal was detected with Pierce ECL Western Blotting Substrate (Thermo Scientific) and a GE ImageQuant LAS 4000 (GE Healthcare).

Statistical analysis

Statistical analysis was performed using Excel (Microsoft) and SPSS (IBM). Unless otherwise indicated, the level of significance for the difference between data sets was assessed using one-way analysis of variance. Data are expressed as means ± SD. P-values ≤ 0.05 were considered statistically significant.

SUPPLEMENTAL MATERIAL

Supplemental material is available for this article.
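A minimal sketch of the Renilla/Firefly normalization and the relative-activity comparison used in the luciferase assays above; all readings below are invented.

```python
def normalized_luciferase(renilla, firefly):
    """Per-well Renilla/Firefly ratio, as in the Dual-Glo protocol above;
    Firefly serves as the transfection-efficiency control in psiCHECK-2."""
    return [r / f for r, f in zip(renilla, firefly)]

def relative_activity(sample_ratios, control_ratios):
    """Mean normalized activity of treated wells relative to scramble controls."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(sample_ratios) / mean(control_ratios)

ratios_mir = normalized_luciferase([820, 790, 850], [10100, 9800, 10400])
ratios_scr = normalized_luciferase([1500, 1480, 1550], [10000, 10200, 9900])
print(relative_activity(ratios_mir, ratios_scr))  # repression if < 1
```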
Description of the ground state of axially deformed nuclei within the Relativistic Hartree–Fock–Bogoliubov model

The Relativistic Hartree–Fock–Bogoliubov model for axially deformed nuclei (RHFBz) is presented. The model involves a phenomenological Lagrangian with density-dependent meson–nucleon couplings in the particle–hole channel and the central part of the Gogny force in the particle–particle channel. The RHFBz equations are solved by expansion in the basis of a deformed harmonic oscillator. Illustrative RHFBz calculations are performed for neon isotopes.

Introduction
Nuclear energy density functionals (EDF) represent a tool of choice for the microscopic description of both static and dynamic properties of nuclei over the whole nuclide chart. They subsume nucleonic short-range in-medium correlations, whereas static long-range correlations are taken into account by allowing a single-determinant state to break the symmetries of the nuclear Hamiltonian [1]. Many structure phenomena in both stable and exotic nuclei have successfully been described by EDFs involving the non-relativistic Gogny and Skyrme [2] effective interactions, as well as relativistic phenomenological Lagrangian densities [3]. The Relativistic Mean Field (RMF) framework [3] is an example of a covariant EDF. The corresponding phenomenological Lagrangians provide a quantitative description of a variety of ground-state data. However, the RMF framework does not include the Fock term explicitly; it takes it into account implicitly through the fit of model parameters to structure data. A more involved approach, the Relativistic Hartree-Fock (RHF) theory [4], treats the exchange contributions explicitly. Early RHF models predicted under-bound nuclei in comparison to experimental data. This problem originated from the lack of a medium dependence in the corresponding effective nucleonic interaction [4]. To overcome this problem, an explicit nucleon-density dependence of the nucleon-meson couplings was included [5]. The resulting improvement brought current RHF models to a level of accuracy similar to that of the standard RMF approach for a quantitative description of nuclear structure phenomena [6]. In particular, recent studies by W.H. Long et al. [5,6,7,8] and H. Liang et al. [9,10] have emphasized that, compared to the RMF approach, the explicit treatment of Fock terms can improve the description of nuclear matter and finite nuclei. Moreover, it explicitly includes the tensor contributions to the inter-nucleon interaction generated by the exchange of the π and ρ mesons. So far, the RHF framework has been limited to the description of spherical nuclei. We consider an extension of this approach to deformed, axially-symmetric nuclei: the Relativistic Hartree-Fock-Bogoliubov model with density-dependent meson-nucleon couplings (RHFBz) [11,12]. In Sec. 2 the general formalism of the RHFBz model is briefly presented. In Sec. 3 we present and discuss applications of the RHFBz model to ground-state properties of neon isotopes. Finally, a short summary and a discussion of possible future studies are given in Sec. 4.

Formalism of the RHFBz model
The RHFBz approach is based on a phenomenological Lagrangian density involving the relevant degrees of freedom for nuclear structure, namely nucleons and mesons. Vectors in isospin space are denoted by arrows. The Dirac spinor ψ denotes the nucleon with mass M; m_σ, m_ω, m_ρ, and m_π are the masses of the σ, ω, ρ, and π mesons, respectively.
g_σ, g_ω, g_ρ and f_π are the meson-nucleon coupling constants, and A^μ stands for the electromagnetic 4-potential, with e^2/4π = 1/137.036. The (density-dependent) coupling constants and meson masses are parameters, adjusted to reproduce nuclear matter properties and ground-state properties of finite nuclei. Ω^{μν}, R^{μν}, and F^{μν} are the field tensors of the vector fields ω and ρ and of the photon [4]. A nucleon-density dependence of the meson-nucleon couplings accounts for medium polarisation and three-body correlations [13,14,5].

Results and discussion
This section presents an application of the RHFBz model to the calculation of ground-state properties of neon isotopes. The RHFBz model is used with the PKO2 and PKO3 effective interactions [15,16] in the particle-hole channel, and the central part of the Gogny D1S force [17] in the particle-particle channel. The PKO3 effective interaction is related to a covariant EDF including explicitly the pion degree of freedom, which is one actor of the tensor force. On the contrary, the PKO2 effective interaction corresponds to a covariant EDF where the pion degree of freedom is not treated explicitly. The two-neutron separation energies S_2n ≡ E_tot(Z, N) − E_tot(Z, N−2) of Ne isotopes, calculated with PKO2 and DD-ME2, are compared to data in Fig. 1. In general, the RHFBz results obtained with the PKO2 parameter set are closer to the experimental two-neutron separation energies. Both PKO2 and DD-ME2 predict ^32Ne to be the last bound isotope.

Figure 1: Two-neutron separation energy in the neon isotopic chain. The relativistic mean-field results, RHFBz with PKO2 [15,16] and RHB with DD-ME2 [18], are compared to data (Audi-Wapstra [19]).

The evolution of the axial deformation parameter β in the neon isotopic chain is illustrated in Fig. 2. In general, the deformation predicted by PKO3 is larger than that calculated with PKO2 and is, therefore, closer to the results obtained with the DD-ME2 and Gogny D1S effective interactions. PKO3 predicts an oblate shape for ^24Ne (quasi-degenerate in energy with a prolate solution at β = 0.3), whereas a prolate ground-state shape for this nucleus is obtained with the PKO2, DD-ME2, Gogny D1S (quasi-degenerate in energy with an oblate solution at β = −0.15) and Skyrme SLy4 interactions. Moreover, all these interactions, except PKO3, which predicts a prolate ground state, give no deformation for ^26Ne and ^28Ne. The comparison between the two PKO3 curves, with the pion coupling switched on and off, shows that the prolate shape of ^26,28Ne is driven by the pion.

Conclusion
The relativistic Hartree-Fock-Bogoliubov model for axially deformed nuclei (RHFBz) is based on an effective Lagrangian with density-dependent meson-nucleon couplings. In this work RHFBz calculations have been performed for neon isotopes. Results obtained with the RHF effective forces PKO2 and PKO3 have been compared to experimental S_2n values. In addition, ground-state deformations have been shown in comparison with the predictions of the relativistic DD-ME2 effective interaction, as well as with the results calculated with the non-relativistic Gogny D1S and Skyrme SLy4 interactions. The effect of explicitly including the pion field has been investigated for the deformation parameters. The inclusion of the tensor ρ-nucleon coupling will complete the model and thus enable studies of the role of tensor components of the effective inter-nucleon interaction in the evolution of shell structure in deformed nuclei.
Figure 2: Axial deformation parameter β in the neon isotopic chain. The calculated values correspond to the PKO2 and PKO3 [15,16], DD-ME2 [18], Gogny D1S [17], Skyrme SLy4 [20] and Skyrme SGII [21] effective interactions. PKO3 calculations with f_π(ρ) set to 0 are also shown.
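For concreteness, the two-neutron separation energy used above is simply a difference of total energies along the isotopic chain. A minimal sketch follows, taking E_tot as the (positive) total binding energy; the values are illustrative numbers close to experimental neon binding energies, not actual RHFBz output:

```python
# Minimal sketch of S_2n(Z, N) = E_tot(Z, N) - E_tot(Z, N-2) along the neon
# (Z = 10) chain, with E_tot taken as the total binding energy (positive, MeV).
# Values are illustrative, close to experimental data, not RHFBz results.

e_tot = {10: 160.6, 12: 177.8, 14: 191.8, 16: 201.6, 18: 206.9}  # keyed by N

def s_2n(n):
    """Two-neutron separation energy; S_2n > 0 means the last two neutrons are bound."""
    return e_tot[n] - e_tot[n - 2]

for n in sorted(e_tot)[1:]:
    print(f"N = {n}: S_2n = {s_2n(n):.1f} MeV")
```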
Intraoperative Graft Flow Measurement in Off-Pump Coronary Artery Bypass Grafting Indicating Graft Revision: Our Experience of 1203 Grafts

Background: This study was undertaken to evaluate the use of transit time flowmetry (TTFM) to identify malfunctioning grafts needing revision or intervention during off-pump coronary artery bypass grafting (OPCAB). Methods: From July 2014 to July 2018, transit time flowmetry was performed on 1203 grafts in 424 patients who underwent OPCAB. Grafts were taken as patent and acceptable if the mean graft flow was more than 20 ml/minute, the pulsatility index (PI) was less than 5, and the diastolic flow (DF) was more than 50% with a minimal systolic spike. Grafts that did not fulfill these criteria were revised, or an appropriate intervention was performed, after identifying the cause of graft malfunction. Results: A total of 1203 grafts were measured in 424 patients who underwent OPCAB. Of these, 51 grafts in forty-nine patients showed abnormal flowmetry readings requiring graft revision or intervention. The causes of graft malfunction were graft twisting, anastomotic stenosis, graft kinking, LIMA spasm, coronary dissection, a reversed vein, anastomotic thrombosis, and a retained coronary shunt. Flow in all fifty-one grafts returned to normal after revision or intervention. There was one death among the forty-nine patients who had grafts revised, and it was not attributed to graft malfunction. Conclusion: Intraoperative evaluation of graft flow with TTFM promptly helps in identifying abnormal grafts before the patient becomes hemodynamically unstable. Correcting abnormal grafts prior to chest closure reduces mortality and morbidity and thereby improves patient outcomes.

INTRODUCTION
Off-pump coronary artery bypass grafting (OPCAB) is the more commonly performed surgery for coronary revascularization in developing countries compared with Western countries. Intraoperative evaluation of graft flow is very important for a good patient outcome. TTFM is an effective method for evaluating graft patency intraoperatively; it is the most commonly applied technique and is suggested by the European Association for Cardio-Thoracic Surgery (EACTS) 2018 guidelines.1 Specific cut-off values for TTFM have been recommended by a few studies to avoid postoperative graft occlusion: a mean graft flow of more than 20 ml/min and a PI of less than 5.2,3 Some studies have suggested that a PI of less than 3 is desirable.4,5 The objective of this study is to assess intraoperative graft flow to determine graft patency and quality and to rule out surgical technical problems requiring immediate graft revision.

MATERIALS AND METHODS
All 424 patients who underwent OPCAB in a single surgical unit from July 2014 to July 2018 had graft flow measured intraoperatively using the VeriQ™ transit time flow measurement (TTFM) device. A total of 1203 grafts were measured in these patients. Grafts were taken as patent and acceptable if the pulsatility index (PI) was less than 5, the mean graft flow was more than 20 ml/min, and the diastolic flow (DF) was more than 50%. Grafts that did not fulfill these criteria were revised after identifying the cause of graft dysfunction. Ethical committee clearance was obtained from our institute before starting the study.
Surgical techniques
All patients had a median sternotomy, and the left internal mammary artery (LIMA) and the left or right greater saphenous vein were harvested. Coronary stabilization was achieved using Medtronic stabilizers during grafting. The LIMA was always anastomosed to the LAD, and saphenous vein grafts (SVG) were used for the other coronary targets. Once an anastomosis was completed, graft flows were measured, and they were measured again after completing the protamine infusion. If the TTFM results were not satisfactory, the cause was identified and the graft anastomosis was revised or the cause of graft dysfunction rectified. A systolic BP of more than 100 mmHg was maintained during TTFM measurement.

TTFM measurement
Graft flow measurements were made with the TTFM device VeriQ™ (Medistim, Norway). The data collected during graft flow measurement were mean flow, DF, and PI. The shape of the waveform was analyzed in correlation with the ECG to look for any systolic spike. TTFM probes of different sizes were used depending on the diameter of the conduit: a 2 mm probe was used for the LIMA, and a 4 mm or 3 mm probe was used for venous grafts. Partial skeletonization of the LIMA was done to facilitate probe placement. The flow curves were always compared with the ECG to identify systolic and diastolic flow; physiologically, diastolic flow predominates, with a minimal systolic peak. A DF of more than 50%, a mean graft flow of more than 20 ml/min, and a PI of less than 5 were taken as acceptable values. The TTFM values were correlated with hemodynamics and the ECG. TTFM measurement was done immediately after completing the anastomosis and again after completing the protamine infusion. If grafts did not fulfill the above criteria, they were revised after identifying the cause of graft dysfunction, and TTFM measurement was repeated.

Statistics
Continuous variables are reported as mean ± standard deviation and were compared using the t-test for normal distributions. The t-test for one mean and for comparison of means of the TTFM graft readings before and after graft intervention was performed. p-values < 0.05 were considered statistically significant. Statistical analysis was performed in MedCalc statistical software.
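To make the acceptance criteria explicit, here is a minimal sketch encoding the three thresholds stated above. The PI formula in the comment is the usual definition reported by TTFM devices and is an assumption on our part, since the paper does not spell it out:

```python
# Minimal sketch of the graft-acceptance criteria used in this study:
# mean graft flow (MGF) > 20 ml/min, pulsatility index (PI) < 5, and
# diastolic flow (DF) > 50%. TTFM devices typically report
# PI = (Q_max - Q_min) / Q_mean; that definition is assumed here,
# not stated in the paper.

def graft_acceptable(mgf_ml_min: float, pi: float, df_percent: float) -> bool:
    """True if the graft meets all three TTFM acceptance criteria."""
    return mgf_ml_min > 20 and pi < 5 and df_percent > 50

# Example readings (hypothetical): one acceptable graft, one needing revision
print(graft_acceptable(mgf_ml_min=38.0, pi=2.1, df_percent=68.0))  # True
print(graft_acceptable(mgf_ml_min=12.0, pi=6.4, df_percent=41.0))  # False -> revise
```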
RESULTS
Data were collected from 424 patients, in whom a total of 1203 grafts were measured. Demographically, there was no difference between the intervention and non-intervention groups (Table 1). A total of 51 grafts (51/1203) were revised in forty-nine patients: two patients had two grafts revised, and the remaining forty-seven patients had one graft revised. Of the grafts revised, 5 were to the LAD, 13 to the OM, 8 to a diagonal, 3 to the ramus, 9 to the RCA, and 13 to the PDA. Forty-seven grafts were revised off-pump and four grafts on cardiopulmonary bypass (CPB). The decision to revise a graft was taken when the TTFM measurement showed low flow, a high PI, and predominantly systolic flow. The causes for revision were as follows: 6 grafts had twisted, 6 grafts had kinking, 3 grafts had LIMA spasm that did not respond to papaverine and were replaced with an SVG to the LAD, 1 graft had a reversed vein, 8 grafts had coronary dissection, 16 grafts had anastomotic stenosis, 9 grafts had thrombus, 1 graft had a retained coronary shunt, and in 1 graft a proximal anastomotic block was the cause. Twenty-four patients had intraoperative ST elevation with hemodynamic instability; in all twenty-four, the ECG settled to a normal range once the dysfunctional grafts were revised. Graft revision/intervention was performed according to the cause of graft malfunction (Table 2). Post-revision grafts were accepted as good once our criteria were fulfilled. Pre- and post-revision/intervention TTFM readings of all grafts are tabulated (Table 3), and the t-test for one mean and for comparison of means was significant with p<0.001. The TTFM graft flow values before and after graft revision/intervention (Table 4) and the TTFM PI values before and after graft revision/intervention (Table 5) were tested statistically and the differences were found to be significant (p<0.001). OM and PDA grafts were revised more often than other grafts. One of the forty-nine patients who had grafts revised died, and this mortality was due to pneumonia leading to sepsis.
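As an illustration of the pre- versus post-revision comparison reported above, a paired t-test on before/after readings of the same grafts could be run as follows; the flow values are hypothetical and the paired design is our reading of the analysis, not the study's exact MedCalc procedure:

```python
# Minimal sketch of a pre- vs. post-revision comparison (paired design):
# mean graft flow for the same grafts before and after revision.
# Values below are hypothetical, not the study's data.
from scipy import stats

mgf_before = [8, 11, 14, 9, 12, 10, 15, 7]    # ml/min, abnormal readings
mgf_after = [32, 41, 36, 28, 45, 30, 38, 26]  # ml/min, after revision

t_stat, p_value = stats.ttest_rel(mgf_after, mgf_before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant improvement
```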
DISCUSSION
TTFM is advised by the European Association for Cardio-Thoracic Surgery (EACTS) 2018 guidelines on myocardial revascularization for the intraoperative evaluation of graft patency during CABG.1 A graft is measured and accepted as patent when the DF is >50% and the PI is 5 or less with a good mean graft flow.2,3 TTFM is the most commonly used method for intraoperative evaluation of grafts.2 Different techniques, such as electromagnetic flowmeters, were used previously, but they have all been replaced by TTFM. Intraoperative measurement of graft flow for patency is even more important in OPCAB. The sensitivity for detecting sub-critical stenosis remains a major concern: sub-critical stenosis cannot be detected by TTFM because the hemodynamic performance of the graft is not altered at this level. The surgeon usually acquires the experience to interpret the TTFM readings in these types of situations. With our experience of more than 1200 graft measurements, we are now capable of identifying the grafts that require revision. The TTFM readings have to be correlated with and supported by hemodynamics and ECG changes before deciding to revise a graft. The predominant forward flow through the graft occurs during the diastolic phase, whereas systolic flow through the graft reflects backward flow due to anastomotic stenosis or competitive flow in the native coronary vessel. This is true for all coronary arteries except the right coronary artery, as it has minimal epicardial coronary compression.10 This has to be kept in mind while interpreting PI values; interpreting the PI together with the mean graft flow is more accurate in assessing the actual status of the graft anastomosis.

Limitation
Radial artery conduits were not included in our study, and epicardial ultrasound along with TTFM would need to be performed to further validate these techniques. These are the limitations of our study.

CONCLUSION
We conclude that TTFM is a reliable method for detecting technical errors in graft conduits during OPCAB. The graft should be revised or the cause of graft dysfunction rectified. Revision of the grafts leads to improvement in graft flow and patency; hence, operative outcomes can be improved by the use of TTFM in OPCAB. Correcting abnormal grafts prior to chest closure leads to a reduction in mortality and morbidity, and patient outcomes can be improved if TTFM is used as a standard tool during coronary artery bypass graft surgery.
Surface Modification of Nano-Hydroxyapatite/Polymer Composite for Bone Tissue Repair Applications: A Review

Nano-hydroxyapatite (n-HA) is the main inorganic component of natural bone and has been widely used as a reinforcing filler for polymers in bone materials; it can promote cell adhesion, proliferation, and differentiation. It can also mediate interactions between cells and material surfaces through selective protein adsorption and has therefore always been a research hotspot in orthopedic materials. However, n-HA nano-particles inherently tend to agglomerate and are difficult to disperse evenly in a polymer. In addition, there are differences in trace elements between n-HA nano-particles and biological apatite, so the biological activity needs to be improved, and the slow degradation in vivo has seriously hindered the application of n-HA in the bone field. Therefore, the modification of n-HA has been extensively reported in the literature. This article reviews the physical modification and various chemical modification methods of n-HA in recent years, as well as their modification effects. In particular, the various chemical modification methods and their effects are reviewed in detail. Finally, a summary and suggestions for the modification of n-HA are proposed, which should provide a significant reference for achieving high-performance n-HA in biomedical applications.

Introduction
Bone defects caused by trauma, infections, and bone tumors are very common. Conventional autologous or allogeneic bone grafting has its own limitations, while artificial bone grafting is currently the most popular treatment for all types of bone defect repair, including dental bone implantation and cranial defect repair in neurosurgery. As is well known, the bone matrix is a calcified extracellular matrix in bone tissue, consisting of 65% inorganic phase and 35% organic phase [1]. Nano-hydroxyapatite (Ca10(PO4)6(OH)2, n-HA) is the major inorganic constituent of human hard tissue, and its chemical composition and structure are very similar to those of biological bone and enamel, so n-HA is known as a highly biocompatible, bioactive, osteoconductive, non-toxic, non-inflammatory, and non-immunogenic agent [2][3][4]. The preparation methods of n-HA mainly include the hydrothermal method, chemical precipitation method, microwave solid-phase method, sol-gel method, spontaneous combustion method, and electrochemical deposition method [5][6][7]. Among these, chemical precipitation is the most commonly used, as it is a mild experimental approach that requires no expensive equipment. However, when the synthesized n-HA is used alone in orthopedic materials, it exhibits some inherent defects in clinical applications, such as high brittleness and difficulty of degradation in vivo, and its biological activity needs to be improved. To compensate for these deficiencies, a straightforward approach is to combine n-HA with degradable polymers so as to obtain high-performance bone materials [8]. However, it has been shown that n-HA is difficult to disperse uniformly in the polymer due to the inherent agglomeration of n-HA nano-particles, and the poor interfacial bonding between nano-particles and polymers achieved via physical blending can lead to poor mechanical properties. In addition, biological apatite usually contains small amounts of carbonate, fluorine, silicon, magnesium, sodium, citric acid, etc., so there are some differences
between the synthesized n-HA and biological apatite, resulting in insufficient osteogenic activity, which makes it difficult to obtain vascularized bone formation and achieve good bone integration. Moreover, conventional n-HA is difficult to biodegrade in vivo because of its perfect crystal structure. Therefore, it is very necessary to carry out modification of n-HA so as to obtain n-HA/polymer nano-composites, with the aim of expanding its application in the biomedical field. To enable readers to have a clearer understanding of the reasons for, and strategies of, n-HA modification, the logic behind the surface modifications of n-HA is summarized in Figure 1.

Physical Modification
The methods of physical modification include physical adsorption, electron induction, and laser irradiation, and their main purpose is to improve the stability and dispersion of nano-particles. Aronov D et al. [9] reported surface free energy modulation of a hydroxyapatite-coated titanium femoral implant via low-energy electron irradiation. The selective bacterial adhesion, in combination with the ability to define the surface energy properties, suggests that this method opened an avenue for the protection of implants from bacterial infections. Queiroz AZ et al.
[10] used a KrF excimer laser with a wavelength of 248 nm and a pulse duration of 30 ns to modify the surface of n-HA, and a series of characterizations showed that surface modification with the laser could increase the surface area of n-HA, making it a promising technology for improving reactivity and drug-delivery ability. Physical modification has been favored by researchers due to its advantages, such as easy handling, low production cost, and lack of pollution. However, the organic molecules are bound to the surface of the n-HA particles by non-covalent bonds, so physically adsorbed organic molecules can easily be washed away by body fluid. A chemical modifier, in contrast, is bound to the surface of the HA particles via a chemical bond and is more stable, so chemical modification is used more frequently.

Chemical Modification
Chemical modification is preferred for improving the morphology, crystal structure, and surface properties of n-HA via a chemical reaction. According to the different reaction mechanisms, it can be divided into several types: the template method, the doping method, surface grafting of small molecules or polymers, and hybrid macromolecules. The modification methods of n-HA are shown in Table 1, and a comparison of the different chemical modification methods is given in Table 2.

Table 1 (excerpt). Modification methods of n-HA: physical modification — physical adsorption, electron induction, and laser irradiation — improves the stability and dispersion of nano-particles [9,10]; chemical modification — the methods reviewed below.

Template Method
The template method involves the interaction between the precursor of the synthesized n-HA particles and an organic substance (template) with a certain morphology or structure, so that the generated n-HA covers the surface of the template or is embedded inside it to form a composite; modified n-HA with different morphologies or structures is then obtained by removing the template. Zhou H et al. [11] reported a one-step hydrothermal method to synthesize mesoporous HA with the assistance of a cost-effective template, vitamin C.
The mesoporous HA exhibited enhanced adsorption of the model drug doxorubicin compared to conventionally synthesized HA. Aguilar AEM et al. [12] synthesized n-HA via chemical precipitation using Euclea natalensis root extract as a template. The results showed that the n-HA from the green route presented a spherical-like shape with a smooth surface, whereas the surface of n-HA made without the green template was covered with nanogrooves. Utara S et al. [13] successfully synthesized HA by means of the sol-gel method in the presence of ozonolyzed natural rubber latex templates of various molecular weights; the formation mechanism of the synthesized HA templated by ozonolyzed natural rubber latex is shown in Figure 2. The results showed that the molecular weight, as well as the functionality of the biomacromolecule template, influenced the phase crystallinity and morphology of the synthesized HA. From the literature review, it can be concluded that the template has a great effect on the morphology of HA, and the method is suitable for obtaining HA nano-particles with various morphologies.

Ion Doping
Single-Ion Doping
Doping HA with foreign ions is becoming more and more popular as a chemical method to enhance its performance and endow it with new characteristics [14]. Some cations, such as M2+, can easily be exchanged with Ca2+ in HA to form an apatite-based solid solution. Some anions, such as Cl− and F−, can easily replace OH− in HA to form a solid solution of chlorapatite or fluorapatite with HA, which can change the surface properties of n-HA. Therefore, ion doping is achieved during the preparation of n-HA by adding the corresponding ions to the reactants, which improves the surface properties of n-HA. Ma P et al. [15] applied strontium-substituted hydroxyapatite (Sr-HA) nano-particles to the surface of polyethylene terephthalate (PET) artificial ligament, and the results showed that the prepared coating significantly improved surface hydrophilicity and promoted osteogenic differentiation and bone integration to repair ligament damage in rabbits, thus providing a potential method for using PET artificial ligaments modified with Sr biomaterials to reconstruct the ACL. Besides strontium doping, magnesium doping has similar effects, because the two elements belong to the same main group. Zhao SF et al. [16] also confirmed that an Mg-n-HA surface coating could better promote pre-osteogenic differentiation of somatic cells compared to an n-HA coating in vitro, and that it improved the osseointegration of implants more markedly at the early stage of bone healing in vivo. Garbo C et al. [17] synthesized a new porous HA (HAP-Zn) with zinc content ranging from 0.2 to 10 wt% by coprecipitation in the presence of the surfactant L-asparagine and found that its pore size distribution and morphology were controllable, so it could be used in orthopedic surgery, especially in the treatment of osteoporosis and as a bone substitute, as well as in dentistry for the remineralization of tooth enamel.

Multiple-Ion Co-Doping
To obtain better surface properties of n-HA, a multiple-ion co-doping method has been proposed. Yilmaz B et al. [18] investigated the co-doping of different ions and concluded that when two or more of these ions were doped together, the combined effects were not a simple sum of the individual contributions, as the doping elements directly changed the atomic structure of the doped HA. Predoi D et al.
[19] incorporated a silver- and zinc-doped HA coating into a chitosan matrix composite (Ag-Zn-HAp/CS) via the dip-coating method, and the results demonstrated that the Ag-Zn-HAp/CS composite suspension and coating did not affect the morphology of cells and showed good antibacterial performance. Lavanya P et al.
[20] prepared copper- and manganese-substituted HA (Cu-Mn-HA) and Cu-Mn-HA/chitosan (CTS)-polyvinylpyrrolidone (PVD) composites via sol-gel and solvent casting techniques, respectively. The results showed that 30% Cu-Mn-HA in CTS-PVD had superior mechanical, physical, and chemical properties and promoted the deposition of bone-like apatite faster than the biological composites with 0, 10, and 20 wt% Cu-Mn-HA/CTS-PVD, so 30 wt% Cu-Mn-HA/CTS-PVD could be used for bone regeneration. Dittler ML et al. [21] investigated bioactive glass (BG)-based scaffolds of 45S5 composition covered with hydroxyapatite nano-particles loaded with Mg2+, Zn2+, or both Mg2+ and Zn2+ ions (denoted HA-BG, Zn-HA-BG, Mg-HA-BG, and Mg-Zn-HA-BG scaffolds). The results showed that nano-crystalline Mg-Zn-HA coatings enhanced the biological performance of standard scaffolds of 45S5 BG composition, suggesting that Mg-Zn-HA-coated scaffolds are attractive systems for bone tissue engineering.

Based on the analysis of these results, we conclude that incorporating multiple ions into n-HA is the better strategy, as it is more beneficial in improving the biological properties of bone materials.

Adding Surfactants for Modification
Surfactant molecules have two functional groups with different solubilities or polarities, namely lipophilic (non-polar) groups and hydrophilic (polar) groups. Surfactants keep the nano-particles in a stable, monodisperse state in the dispersion medium through the adsorption of their groups onto the particle surface, and they change the surface state of the nano-particles. The surface polarity of n-HA particles is of great importance: when they are modified with a surfactant, the polar groups tend to form strong bonds on their surface. Pang GH et al. [22] explored coating the surface of n-HA with polyethylene glycol, polyvinyl alcohol, and stearic acid, and found that the type of surface modifier and the concentration of the active ingredient had a significant and somewhat selective effect on the particle size; polyethylene glycol at a concentration of 5% was the best modifier for HA, exhibiting the best dispersibility. Wang SH et al. [16] used stearic acid to coat the surface of HAp in a high-pressure reactor. After modification, the diameter of the HAp particles increased and the interfacial compatibility between PLA and HAp was improved; the modification promoted crystallization, refined the particle size, and led to the evolution of the PLA composite from brittle to ductile fracture, such that the thermal deformation temperature, tensile strength, and impact strength were significantly increased. Ma TY et al. [24] adopted the hydrothermal method to synthesize well-dispersed HA nano-rods with different morphologies in a reaction system of oleic acid, ethanol, and water, and conducted a comparative study of the auxiliary modification effect of surfactants. It was found that the selected surfactants, such as cetyltrimethylammonium bromide (CTAB) and sodium dodecyl sulfate (SDS), played an important role in the formation of uniform HA nano-rods. Wang WY et al.
[25] prepared high-purity glycine-modified n-HA (HAP-Gly) powder via co-titration with calcium hydroxide, phosphoric acid, and Gly as raw materials; because Gly had a certain influence on the crystallization behavior of HAP, the diffraction peaks of the modified HAP were significantly broadened. HAP-Gly had a clustered rod-like crystal structure with a length of about 50-130 nm and a diameter of about 5-15 nm, and cytotoxicity analysis revealed that it had no cytotoxicity. Yin YJ et al. [26] also compared the changes in the adsorption performance of n-HA after modification with surfactants. The anionic surfactant sodium dodecyl benzene sulfonate (SDBS) was selected for the modification of n-HAP, and it was found that the adsorption capacity for Cd2+ after modification was significantly higher than before, due to the inhibition of aggregation, which increased the specific surface area, and to the introduction of new functional groups, which provided more sites for the adsorption of Cd2+. Lin DJ et al. [27] synthesized HA with the assistance of cationic, anionic, non-ionic, and zwitterionic templates. It was found that the uncalcined rod-shaped HA synthesized with non-ionic templates at pH 4 showed excellent cell viability, while the anionic, cationic, and non-ionic surfactants yielded biocompatibility only after calcination. At pH 9, the non-ionic and uncalcined zwitterion-assisted rod HA showed excellent biocompatibility. Chen RG et al. [28] prepared HA crystals with a carambola-like structure via supersaturated urea-assisted solvothermal synthesis using a dual surfactant. By adjusting the dual surfactant of Na2EDTA and stearic acid and the reaction time, the product morphology could be well customized, including microhexagonal prisms, carambola-like structures, and microspheres. Na2EDTA had a slight inhibitory effect on the formation of HA, and stearic acid adsorbed onto the surface of HA to form a long-chain layer and act as a mechanical barrier, indicating its excellent dispersibility. Zhang SH et al. [29] proposed a method to synthesize dandelion-like HAP using an environmentally friendly rosin-based phosphate diester surfactant, DDPD, as a new phosphorus source, template, and crystal-growth control agent. The results showed that the prepared samples exhibited good cell compatibility. Ashraf FA et al. [30] synthesized hexagonal HAp nano-rods in the presence of licorice root extract (LE) via a microwave hydrothermal synthesis route at 125 °C, where LE was used as a green organic template (or biological template); the crystals displayed uniform morphology and high crystallinity without containing carbonates, and their Ca/P atomic ratio was close to the stoichiometric value, confirming that this is a new, environmentally friendly green synthesis route (as shown in Figure 3). These HAp nano-rod products using licorice and LE as templates could be widely used in many biomedical fields, such as bone repair, drug delivery, and dental repair.
Sezer D et al. [31] synthesized HAP modified with templates such as hexadecyltrimethylammonium bromide (CTAB), Pluronic® P-123 (P123), and Pluronic™ F-127 (F127) via the chemical precipitation method. The results showed that the CTAB-modified HAP had the highest adsorption performance, making it suitable as an alternative carrier for ASA adsorption and controlled release. From this, it can be seen that the design of surfactant components should be based on two principles: first, the anchoring group must adsorb onto the surface of the HAP particles; second, a solvation chain of sufficient length should stabilize the n-HA particles through steric hindrance and have affinity with the solvent. Based on these design principles, it is important to select appropriate surfactants for the surface modification of n-HA particles.

However, research has shown that the addition of surfactants such as polyvinylpyrrolidone (PVP), chondroitin sulfate (ChS), aspartic acid (Asp), CTAB, SDS, and polyvinyl alcohol (PVA), as a binary surfactant system, usually has only a templating effect [32][33][34][35], playing a certain role in regulating the morphology and size of n-HA crystal growth. The effect on improving dispersion was poor, and the residues in the product were
[36] used PVP as a template and SDS as a surfactant to synthesize bone-like n-HA via the bionic method.The results demonstrated that polymers and surfactants as polymer capsules could appropriately control the size, shape, morphol-ogy, and dispersion of HA crystals.All samples displayed biological activity because they could form carbonate apatite and grow HA on its surface, and the 3-(4,5)-dimethylthiahiazo (-z-y1)-3,5-di-phenytetrazoliumromide (MTT) test showed that the samples had good biocompatibility.Shanthi PMSL et al. [37] utilized the electrostatic interaction between surfactants to categorize them into two types: double anions (cetrimide and SDS) and double cations (cetrimide and CTAB), with a weight ratio of 1:1 and a total concentration of 0.28 g/100 mL.An effective morphological adjustment was performed on the samples.FTIR, XRD, FESEM, HRTEM, TGA/DTA, and BET analyses showed that the samples exhibited HAp phase with nano-scale and mesoporous properties.Anionic surfactants promoted the growth of the particles from spherical to hexagonal rods, while a mixture of double cations inhibited growth and led to disc-shaped HAp.The Ca 2+ ion release assay of the sample showed that the biological activity of disc-shaped HAp was better than that of commercial HAp.Tari NE et al. [38] used the mixture of CaCl 2 and H 3 PO 4 (aqueous phase), the cationic surfactant CTAB, and the anionic sodium dodecyl to prepare n-HA particles with various shapes via the precipitation method.These surfactants formed various aggregates as templates in a mixture of rich cation and anionic regions.The results indicated that the morphology of HAP nano-particles could be controlled by changing the ratio of cationic and anionic surfactants in the mixture to synthesize HA nano-particles with high crystallinity and minimal agglomeration.Shanthi PMSL et al. [39] reported a successful preparation of shell-shaped nano-HAp spheres with a well-defined morphology, a uniform size of approximately 200 nm, and a stoichiometric ratio of 1.7 using the surfactant tetradecyltrimethylammonium bromide (cetrometin).Ma XY et al. [40] accomplished the synthesis of spherical n-HA with outstanding uniformity and regularity via the water-in-oilmicroemulsion method at room temperature in a short duration, and span-80, cyclohexane, and Ca(NO 3 ) 2 •4H 2 O and (NH 4 ) 2 HPO 4 solution were used as surfactants, oil phase, and water phase, respectively.The effects of the water-oil ratio and water-surfactant ratio on the stability of the micro-lotion system were studied, and a stable reaction system was established with proposed growth mechanisms.Yang L et al. [41] proposed a simple and mass synthesis route of HA nano-crystals with no agglomeration, excellent crystallization, and low aspect ratio.An improved co-precipitation process was utilized, and non-toxic gelatinized starch was used as the matrix without any other surfactant.This synthetic pathway had the potential to expand production scale, and the product had the same biocompatibility and biological activity as conventional n-HA.It also had the capability to produce other precipitated ceramic nano-particles with significantly reduced agglomeration and aspect ratio.Suslu A et al. 
[42] investigated the effect of surfactant type on the biocompatibility of electrospun HAp/poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) composite nano-fibers; the non-ionic Tween 20 and 12-hydroxystearic acid (HSA), the cationic dodecyltrimethylammonium bromide (DTAB), and the anionic sodium deoxycholate and SDS surfactants were used for comparison. The results indicated that the incorporation of HAp together with any of the surfactant types strongly accelerated the precipitation of apatite-like particles and decreased the percentage crystallinity of the HAp/PHBV mats.

According to the above-mentioned literature, it is evident that introducing a surfactant into the preparation process of n-HA can improve the dispersion of n-HA.

Surface Modification by Grafting Polymer
In order to improve the interfacial adhesion between n-HA and polymers, researchers have coated n-HA with various small organic molecules to reduce its surface energy, such as silane coupling agents, isocyanates, fatty acids, tartaric acid, etidronic acid, polyhedral oligomeric silsesquioxanes, etc. [43][44][45][46][47][48][49][50]. Various means of direct and indirect surface grafting, or grafting of polylactic acid by atom transfer radical polymerization, have also been used to improve interfacial adhesion [51], but the coating or grafting rate was relatively low, the modification effect was minimal when high-content n-HA was added, and the grafting process was cumbersome, toxic, and costly. In our previous studies [52][53][54][55], our research group also investigated various new methods for the combined grafting of lactide with small organic molecules, such as stearic acid, citric acid, lysine, and 3-aminopropyltriethoxysilane (KH550). These modification methods had certain effects and could improve dispersion, as seen in TEM photographs of n-HA before and after modification, but the bending strength of the composite was significantly reduced compared to pure poly(lactic-co-glycolic acid) (PLGA) when the modified n-HA was added at over 15 wt% (as shown in Figure 4), because of the low grafting amount. Therefore, other modification methods need to be explored to solve the problem of n-HA/hydrophobic polymer composites. Tang M et al.
[56] used citric-acid-grafted, surface-modified hydroxyapatite (SHA) to prepare SHA/GO composite materials via solution mixing and ultrasound-assisted hydrothermal methods. Before and after immersion in simulated body fluid (SBF) solution, the composite could effectively promote the mineralization of bone-like apatite. In vitro drug release tests showed that the GO content had a great influence on the adsorption of aspirin. It is expected that these SHA/GO composites could be used for biopharmaceutical loading. Polymer additives have recently been successfully applied to the surface modification and effective regulation of the morphology of various inorganic minerals. Among the organic molecules used to improve interfacial compatibility with polymers, one representative type is the amphiphilic block copolymer, composed of two segments of different hydrophilicity. One segment exhibits a strong interaction with inorganic ions or solids, whereas the other segment only plays a dispersing or solubilizing role. The copolymers can form a firm anchor on the surface of the inorganic compound, with the extended chain segments stretching into the solvent. Liao L et al. [57] used surface graft polymerization of poly(γ-benzyl-L-glutamate) on n-HA (PBLG-g-HA), and a novel PBLG-g-HA/poly-L-lactic acid (PLLA) nano-composite was obtained. By chemically modifying the surface of HA, the uniform dispersion of n-HA in chloroform solution was effectively improved, achieving a transition from hydrophilicity to hydrophobicity on the surface of HA and thereby enhancing the interaction between HA and the PLLA matrix. Pei F et al.
[58] prepared modified n-HA by grafting polydopamine (PDA) and added it to a polycaprolactone (PCL) matrix to enhance the interfacial bonding in bone scaffolds made via selective laser sintering (SLS). The tensile strength and compressive strength of the scaffold were increased by 10% and 16%, respectively. Additionally, the scaffold exhibited favorable biological activity and cell compatibility, accelerating the formation of the apatite layer and promoting cell adhesion, proliferation, and differentiation.

Park SJ et al. [59] synthesized n-HA grafted with L-glutamic acid and fixed it on albumin-modified Ti disc implants. Compared with the original titanium implant, the modified titanium implant enhanced the adhesion, proliferation, and viability of MC3T3-E1 cells, an enhancement that would facilitate osseointegration between the Ti implant and the dental bone. Mehri A et al. [60] prepared tyramine-grafted hydroxyapatite through a hydrothermal reaction. Tyramine was grafted in situ onto the surface of HAp to inhibit crystal growth by forming organic-inorganic hybrid nano-particles, thereby developing a multi-functional surface that ensures good compatibility of the modified HAp surface with cells. Timpu D et al. [61] used arginine (Arg) or polyethyleneimine (branched BPEI or linear LPEI) as a cationic modifier and dispersant to obtain functionalized nHAp through wet chemical techniques. The results demonstrated that the prepared nano-particles had needle or plate shapes. Both Arg and PEI were successfully grafted onto nHAp, and LPEI-functionalized nHAp displayed good similarity to biological apatite and the best DNA-binding capacity. When nHAp/LPEI nano-particles were incorporated into a porous matrix based on collagen/dimethylsilanediol hyaluronate, the compression modulus of the biological composite material was six times higher than that of the pure polymer matrix, and the composite sponge possessed high toughness in five consecutive compression tests without any permanent deformation or cracks. These findings indicate that nHAp/LPEI nano-particles can be considered promising materials for biomedical applications, functioning as gene carriers or as reinforcing fillers with strong interfacial adhesion in bone-engineering biological composites.

Mirhosseini MM et al. [62] synthesized functionalized HA nano-particles (HA-F127) by fixing Pluronic F127 on HA nano-particles. The F127 graft chains on the surface of the HA formed a core-shell structure, which reduced the agglomeration of the modified nano-particles and improved their dispersity. HA-F127 and unmodified HA were introduced into a PCL/P123 electrospun substrate, resulting in a nano-composite containing 4 wt% nano-fillers, in which the modified HA showed excellent chain entanglement and interfacial crystallization in the polymer substrate. Molecular dynamics simulations confirmed the strong interfacial interactions between HA-F127 and PCL/P123, and this strong interfacial adhesion between filler and matrix secured excellent mechanical properties, percentage crystallinity, and thermal stability for HA-F127/PCL/P123. Therefore, the HA-F127/PCL/P123 nano-fiber scaffold is considered a promising candidate for tissue engineering applications. Ma R et al.
Ma R et al. [63] used the silane coupling agent KH-560 for the grafting modification of bioactive HA particles and prepared an HA/polyether ether ketone (PEEK) composite through hot pressing. The results indicated that KH-560 successfully modified HA (m-HA), and the tensile strength of the m-HA/PEEK composite reached its maximum when the HA content was 5 wt%, which was 23% higher than that of the pure PEEK sample. In vivo biomechanical testing revealed that the growth of bone tissue around the m-HA/PEEK composite with 5 wt% HA content was better than that around specimens with other HA contents. These results indicated that the bioactive filler HA had a nano-scale effect in the PEEK matrix, which was clearly corroborated by the growth of surrounding bone tissue in vivo.

Kairalla EC et al. [64] modified the surface of hydroxyapatite nano-crystals (HAPN) by grafting a three-arm star poly(ε-caprolactone) (SPCL). Albumin (HSA) and fibrinogen (HFb) adsorption results indicated that SPCL-g-HAPN resisted HFb adsorption compared with unmodified HAPN. Zeta potential (ZP) and contact angle (CA) measurements indicated that the heterogeneous topological structure of SPCL-g-HAPN was caused by the presence of hydrophobic and hydrophilic regions on the surface of the nano-composite. Enzymatic degradation by cholesterol esterase and lipase demonstrated that the hydrolysis rate of SPCL-g-HAPN was very slow in comparison with the SPCL/HAPN mixture. In vitro biological study indicated that human osteoblast-like cells (MG-63) possessed normal cell morphology and could adhere and spread across the surface of SPCL-g-HAPN. Compared with pure HAPN or SPCL materials, higher overall cell proliferation was observed on the SPCL-g-HAPN scaffold.

Kumar L et al. [65] developed a porous modified n-HA/polyurethane (m-HA/PU) nano-composite scaffold for bone tissue engineering by grafting etidronic acid (ETD, 0.1 M) onto the surface of n-HA particles and incorporating it as reinforcement into polyurethane scaffolds prepared via the foaming method. As seen in Figure 5, the surface of the m-HA particles was completely transformed from a granular structure to a sheet structure with a size of 40 nm. Furthermore, the compressive strength of the obtained PU/m-HA nano-composite with 30% filler concentration was 22.4 MPa at the required porosity of 80%, showing that the PU nano-composite scaffolds were well suited for bone healing applications. In addition, in vitro soaking in SBF for 4 weeks showed partial surface hydrolysis, and the cell culture results showed that the m-HA/PU nano-composite scaffolds were very suitable for bone tissue engineering.
Yang WF et al. [66] modified HA nano-particles with dopamine and hexamethylene diamine, and PLLA was connected to the HA nano-particles through an ammonolysis reaction. The PLLA-modified HA nano-particles were mixed with PLLA to form thermoplastic composites for 3D printing. Owing to the high compatibility between the PLLA matrix and the PLLA-modified HA nano-particles, the 3D-printed PLLA/HA scaffolds displayed strong mechanical properties and good biocompatibility, enabling flexible strategies for manufacturing scaffolds for the customized treatment of bone defects. Wang Y et al. [67] successfully used surface-initiated reverse atom transfer radical polymerization (reverse ATRP) to modify HAP nano-particles with polymethyl methacrylate (PMMA). The peroxide initiator component was covalently linked to the surface of HAP through the surface hydroxyl groups, and the reverse ATRP of methyl methacrylate (MMA) was carried out from the initiator-functionalized HAP. Subsequently, the terminal bromine groups of the grafted PMMA initiated the ATRP of MMA. The PMMA-grafted HAP nano-particles exhibited excellent dispersion in MMA monomer, and both the dispersibility of the surface-grafted HAP and the compressive strength of the HAP/PMMA composites improved with the increase in the amount of grafted PMMA.
Dai YF et al. [68] grafted poly(L-phenylalanine) onto the surface of n-HA through the ring-opening polymerization (ROP) of L-phenylalanine N-carboxyanhydride. By optimizing the reaction conditions, the grafting amount of poly(L-phenylalanine) on the surface of HA could be increased to a range of 20.26% to 38.92%, while the crystal structure of the modified HA remained almost the same as that of HA. MTT results demonstrated that the modified HA had good biocompatibility, indicating that it could have potential applications in bone tissue engineering and that ROP is an effective surface modification method.

Ku KL et al. [69] studied the surface modification of n-HAP with ethylene glycol and PCL sequentially via a two-step ring-opening reaction, which improved the affinity between the polymer and ceramic interphases of PCL-grafted ethylene glycol-HAP (PCL-g-HAP) in PMMA; that is, PCL-g-HAP/PMMA not only increased the interfacial adhesion between the nano-particles and the cement but also better promoted biological activity and affinity between osteoblast cells and the PMMA composite cement. These results mean that g-HAP and its use in a polymer/bioceramic composite have great potential to improve the functionality of PMMA cement. Furthermore, the composite of PCL-g-HAP with poly(1,6-bis-(p-carboxylphenoxyhexane)-co-(sebacic anhydride)) (PANH) was studied. The PCL-g-HAP/PANH composite exhibited excellent mechanical properties and a rapid degradation rate. Preliminary in vivo studies on rat skull repair affirmed the superior performance of the PCL-g-HAP/PANH composite, which has great potential as a novel matrix for bone tissue engineering [70]. Zarif F et al. [71] synthesized citric acid- and aspartic acid-grafted HA (g-HA) via the in situ co-precipitation method and explored its controlled delivery of moxifloxacin. The results revealed that g-HA, characterized by high surface area, high surface charge, and low crystallinity, strengthened its electrostatic interaction with the antibiotic moxifloxacin and slowed the drug release in vitro compared with pure HA. In vitro antibacterial tests showed that the drug released from HA and g-HA was active against Staphylococcus aureus and Enterobacteriaceae, and the MTT assay confirmed the biocompatibility of HA and g-HA. Li HB et al. [72] grafted poly(ethylene glycol) methacrylate (PEGMA) onto the surface of HA nano-particles and cross-linked them with poly(ethylene glycol) dimethacrylate (PEGDMA) under ultraviolet light to form a composite. The dispersion of HA-g-PEGMA nano-particles in the poly(PEGDMA) matrix was better than that of n-HA. At a load of 1 wt%, the strength and modulus of the composite were increased by 14% and 9%, respectively.

Kumar L et al. [73] modified n-HA with triethanolamine (TEA-nHA), successfully changing the morphology of TEA-nHA from particles to irregular sheets/plates. Compared with pure PU composites, the PU/TEA-nHA nano-composite formed with castor oil-based PU at a content of 40 wt% possessed open and interconnected pores with a size range of 150-700 µm, a compressive strength of 20.7 MPa, and a porosity of ≤82%. The cell compatibility of these new engineered surfaces could maintain exponential growth for up to 8 days and enhance cell viability. Overall, the developed surfaces improved cell growth, suggesting that the PU/TEA-nHA nano-composite is capable of promoting bone tissue regeneration.
Mehmanchi M et al. [74] successfully grafted arms bearing ureidopyrimidinone (UPy) functional groups, which self-associate through quadruple hydrogen bonds, onto n-HA. Compared with the original n-HA, the supramolecularly modified nano-particles (n-HAP-UPy) showed enhanced colloidal stability and were uniformly dispersed in PCL at different filler loads. Preliminary cell results clearly confirmed that the supramolecular nano-composites were non-toxic and biocompatible. Pielichowska K et al. [75] functionalized n-HAP with PCL, using 1,6-hexamethylene diisocyanate (HDI) as a coupling agent, and then incorporated it into a polyoxymethylene copolymer (POM) matrix using the extrusion technique to obtain POM/HAP-g-PCL composites. It was found that the introduction of HAP-g-PCL into the POM matrix had a limited effect on the phase transitions of POM and its degree of crystallinity, while causing a significant increase in the thermal stability of the POM. In particular, the crucial parameter in biomedical applications, namely the in vitro bioactivity, was improved, albeit with a slight decrease in the mechanical properties of the POM composites.

Zhang M et al. [76] used stearic acid to modify HAP in different solvents (water, ethanol, or dichloromethane (CH2Cl2)) and studied the effects of the solvent on the properties of the HAP particles (activation rate, grafting rate, chemical properties), the emulsion properties (emulsion stability, emulsion type, droplet morphology), and the cured materials (morphology, average pore size). The results confirmed an interaction between stearic acid and the HAP particles, whose hydrophobicity was enhanced after surface modification. Ethanol proved the best solvent for the stearic acid modification of HAP particles: it improved the stability of the Pickering emulsion and yielded cured samples with uniform pore sizes. Song XF et al. [77] prepared PLLA-g-HA by adding ethylene-glycol-tethered hexamethylene diisocyanate. The results showed that the grafting ratio of HA was 25% higher than that obtained with unmodified HA or HA modified with L-lactic acid, and the product could be stably dispersed in chloroform for more than 2 days. Tensile tests on co-electrospun fibers showed that the mechanical properties of the PLLA-g-HA/PLGA composite fiber membrane were higher than those of the HA/PLGA membrane. Jiang YR et al. [78] used a series of aminoalkyl phosphates (AAP-n, with the number of carbon atoms n between 2 and 6) as surface modifiers to prepare HA hydrocolloids. The obtained nano-particles (Cn-HA) had a core-shell structure, in which an ionized layer of the calcium (AAP-n) complex [+H3N-(CH2)n-OPO3Ca] encapsulated each HA core. Owing to the electrostatic repulsion between suspended particles, long-term colloidal stability could be achieved. The introduction of AAP-n led to an increase in the particle aspect ratio from C2-HA to C6-HA along the c-axis of the crystal. Preliminary cell culture using osteoblast-like MG-63 cells showed no cytotoxicity associated with the prepared Cn-HA particles. These results indicated that the functional amino groups around the nano-particles could be used to graft various organic chains to prepare homogeneous HA/polymer composites as bone-bonding materials.
Wei JC et al. [79] proposed the surface modification of n-HA via the ring-opening polymerization (ROP) of γ-benzyl-L-glutamate N-carboxyanhydride (BLG-NCA) to prepare PBLG-g-HA. The results showed that the PBLG-g-HA hybrid could form an interpenetrating network structure during the self-assembly process. The PBLG-g-HA hybrid maintained higher colloidal stability than the pure HA nano-particles, and in vitro cell cultures suggested that the cell adhesion ability of PBLG-g-HA was much better than that of pure HA.

Makvandi P et al. [80] modified commercial micron-scale HAP with methacrylate and quaternary ammonium salts, and different amounts (i.e., 2.5, 5, and 10 wt%) were used as fillers for UV-cured custom resins in stereolithography (SLA). Compared with the pure resin, all modified HAP (m-HAP)-filled composites had higher strength, and the antibacterial activity of the composites increased with the m-HAP content. Compared with pure HAP, the composite with m-HAP (i.e., 2.5 wt%) exhibited sufficient antibacterial activity and reduced the growth of bacteria and fungi even at low concentrations. Overall, the samples containing 5 wt% m-HAP could be considered the best comprehensive solution for thermal, physicochemical, mechanical, and biological properties, and this composite was selected to construct an open-bite prototype by SLA. Li K et al. [81] prepared Fe- and Si-doped HA nano-rods on Ti, and the antibacterial peptide HHC-36 was chemically bonded to the nano-rods with and without a polymer brush as a gasket. The results showed that grafting the polymer brushes onto HHC-36 did not substantially alter the microstructure of the nano-rods, but the brushes effectively increased the loading and stability of HHC-36. Moreover, the synergistic effect of the antimicrobial peptide (AMP) HHC-36 and the physical puncture of the HA nano-rods could effectively kill Staphylococcus aureus. Compared with Ti, biofilm formation was inhibited in both phosphate buffer solution and nutrient-rich medium. The HA nano-rods with polymer-brushed HHC-36 killed 99.5% of Staphylococcus aureus and 99.9% of Escherichia coli, exhibited cellular compatibility in vitro, and inhibited bacterial infections and reduced inflammatory reactions in vivo, which indicates that polymer-brushed HHC-36 on HA nano-rods has enormous potential for application on Ti surfaces.

Tham DQ et al. [82] successfully prepared vinyl trimethoxysilane-treated HAP (vHAP) and PMMA-grafted HAP (gHAP) using original HAP (oHAP) as the raw material. Three groups of HAP-modified PMMA bone cement (oHAP-BC, vHAP-BC, and gHAP-BC) were prepared using the three HAPs (oHAP, vHAP, and gHAP) as additives. The results showed that the HAP-modified bone cements had longer setting times and lower maximum exothermic temperatures, and the vHAP and gHAP nano-particles were better dispersed in the polymerized PMMA matrix than the oHAP nano-particles, thereby meeting the mechanical property requirements and proving the effectiveness of organically functionalized, grafted HA in acrylic bone cement.
Dorm BC et al. [83] studied two sources of L-alanine and three grafting methods for the surface functionalization of HA. The results showed that 8-25 wt% of organic matter was incorporated into the HA. The viability of MG-63 human osteoblasts incubated with the alanine-grafted HA samples for 24 h was well preserved and higher than that of cells incubated with HA in all cases. Alanine-grafted HA prepared in situ and by simple mixing showed higher protein adsorption and cell adhesion, respectively, indicating its promise in regenerative medicine.

Elbasuney et al. [84] used a poly(ethylene-co-acrylic acid) polymeric surfactant to modify the surfaces of HA nano-plates. The surface properties of the organically modified HA nano-plates changed from hydrophilic to hydrophobic, demonstrating effective phase transfer from the aqueous phase to the organic phase and reducing the nano-plate size to about 100 nm in length and 50 nm in width. Through further surface modification with dodecanedioic acid, layered HA plates were developed. This method could provide laminated or exfoliated plates for effective integration into biocompatible polymers, offering promise for the green synthesis of hydroxyapatite nano-particles with controllable morphology and surface properties. Xu M et al. [85] obtained modified HA (HA-APS) with active amino groups on the surface via reaction with the silane coupling agent KH-550 and then grafted poly(β-benzyl-L-aspartate) (PBLA), prepared via the ring-opening polymerization of β-benzyl-L-aspartate N-carboxyanhydride (BLA-NCA), onto the surface, which realized the transition of the HA surface from hydrophilic to hydrophobic. Dispersion experiments confirmed that the PBLA surface modification could significantly increase the hydrophobicity of the HA surface and prevent the aggregation of the nano-HA particles. Heng CN et al. [86] developed a simple surface-initiated polymerization strategy for n-HA combining surface ligand exchange and reversible addition-fragmentation chain transfer (RAFT) polymerization to improve the dispersibility in aqueous solution: HA nano-rods were first modified with riboflavin-5-phosphate sodium (RPSSD) via a ligand exchange reaction between the phosphate group of RPSSD and oleic acid, and the hydroxyl groups of nHAP-RPSSD were then used to immobilize the chain transfer agent, which served as the initiator for surface-initiated RAFT polymerization. The results showed that nHAP-RPSSD-poly(IA-PEGMA) exhibited excellent water dispersibility, desirable optical properties, good biocompatibility, and high drug loading capability, making it a promising candidate for biological imaging and controlled drug delivery applications in the bone repair field.

According to the results of the literature listed above, we think that surface-grafting polymers onto n-HA is an effective method, which can improve not only the dispersion of n-HA but also the interfacial adhesion between n-HA and polymers. Moreover, a higher grafted amount is more conducive to enhancing the mechanical properties of the nano-composites.
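The link between filler content, filler geometry, and composite stiffness that runs through these studies can be illustrated with a standard micromechanics estimate. Below is a minimal sketch using the Halpin-Tsai model; the filler and matrix moduli and the shape factor are order-of-magnitude assumptions for an HA-filled thermoplastic, not measured values from the cited works.

```python
# Minimal sketch: Halpin-Tsai estimate of composite modulus versus filler
# volume fraction. E_f, E_m, and zeta are illustrative assumptions.
def halpin_tsai(E_f, E_m, phi, zeta):
    """Composite modulus for filler volume fraction phi (0..1)."""
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

E_f = 100.0   # GPa, hydroxyapatite (order of magnitude)
E_m = 3.0     # GPa, a stiff thermoplastic matrix such as PLLA
zeta = 2.0    # shape factor; larger for high-aspect-ratio fillers

for phi in (0.05, 0.10, 0.20, 0.30):
    print(f"phi = {phi:.2f} -> E_c ≈ {halpin_tsai(E_f, E_m, phi, zeta):.2f} GPa")
```

The model assumes perfect interfacial bonding and uniform dispersion, which is precisely what the grafting strategies above aim to approach; agglomeration or weak interfaces push real composites well below this estimate.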
Preparation of Hybrid Nano-Apatite by Introducing Macromolecules

To obtain n-HA with excellent dispersion, Zhang P et al. [87] paid attention to the raw materials used to prepare n-HA: polyethylene glycol monomethyl ether phosphate (P-MPEG) served as an auxiliary phosphorus source and steric hindrance agent, and a hybrid nano-apatite with MPEG chains extending beyond the n-HA crystal structure was prepared via the co-precipitation method, which could be dispersed not only in water but also in organic solvents such as methanol and dimethylformamide (DMF). Clearly, the hybrid nano-apatite prepared from these raw materials could effectively improve its dispersion and be compounded with water-insoluble polymers owing to the change in surface properties brought about by the introduced MPEG chains. However, the molecular weight of the P-MPEG selected in that paper was very small, and the improvement in steric hindrance and hydrophobicity was very limited. It therefore remained necessary to explore the preparation of hybrid nano-apatite by introducing other amphiphilic macromolecules. Cyclodextrin has a unique amphiphilic structure, hydrophobic inside the cavity and hydrophilic outside it, and cyclodextrin macromolecules are functional materials for bone applications. It has been reported that a certain chemical bonding interaction exists between HA and cyclodextrin macromolecules during the preparation of HA [88,89]. Therefore, our research group [90,91] explored the influence of the sort of cyclodextrin macromolecule, the addition order, the reaction time, the addition amount, and other factors on the preparation, structure, and dispersion of hybrid nano-apatite (shown in Figure 6). According to Figure 6, the dispersion of the carboxylated cyclodextrin hybrid nano-apatite (CM-β-CD-HA(Co)) was significantly augmented, and the tensile strength of the composite with 10 wt% CM-β-CD-HA(Co) was the best, 14.84% higher than that of pure PLGA. These results confirmed that the hybrid nano-apatite obtained through this new surface modification strategy has significant potential as a reinforcing filler for PLGA used as a bone material in the future.
In addition, as we know, with the increasing consumption of non-renewable resources such as oil and coal and the rapid rise in raw material prices, the development and utilization of green, environmentally friendly natural resources for hybrid nano-apatite has attracted widespread attention. Lignin is the only plant resource in nature containing a benzene ring structure; it is non-toxic, biodegradable, and biocompatible, and it possesses special properties such as antibacterial, antioxidant, and UV absorption functions, making it an ideal raw material for preparing functional materials.
Ho YK et al. [92] reported that lignin could act as a gene carrier by forming complexes with DNA after co-polymerization with other polymers, displaying high transfection efficiency and low cytotoxicity. Although this literature indicated that lignin is an excellent green, environmentally friendly chemical raw material that is non-toxic to organisms, there had been no reports on lignin being used for the modification of n-HA. Therefore, in our recent research [93], we explored the preparation of hybrid nano-apatite by introducing lignin via the co-precipitation method. The obtained hybrid nano-apatite displayed excellent dispersion and a promoted crystallization effect, which could greatly improve the mechanical strength of PLGA. In addition, in vitro cell culture results indicated that lignin surface hybridization of n-HA was beneficial for improving the cell biocompatibility of PLGA, suggesting that the introduction of lignin is a novel method for obtaining highly dispersed n-HA. It provides a new idea for the future implementation of n-HA/PLGA nano-composites as bone materials and offers a new means of applying lignin in the biomedical field. Subsequently, our research team [94] also explored the preparation of a new hybrid nano-apatite via the co-hybridization of lignin and cyclodextrin (g1-HA). The results showed that the lignin-cyclodextrin co-hybridization of n-HA had an excellent synergistic effect, improving the dispersion and producing good interfacial bonding between the hybrid nano-apatite and the PLGA matrix. Even when the amount of hybrid nano-apatite reached 15 wt%, the tensile strength of the composite was still 14.53% higher than that of PLGA, significantly better than that achieved with lignin or cyclodextrin alone. In addition, the results of immersion in SBF and in vitro cell experiments showed that the co-hybrid nano-apatite had good degradation performance, apatite deposition, and excellent cell biocompatibility. This study provides important guidance for obtaining highly dispersed n-HA as a PLGA-based reinforcing filler for bone materials.
Conclusions

In summary, n-HA particles have great application value in the bone materials field. Research on the synthesis methods and surface modification of n-HA has made progress, and each modification approach has its own emphasis: ion doping usually improves the biological activity significantly, while the template method can regulate the morphology and adsorption performance. Grafting small molecules or polymers can optimize the surface characteristics and the interfacial bonding with polymers, while hybrid nano-apatite obtained by introducing amphiphilic macromolecules can significantly improve dispersion, which greatly promotes the application of n-HA. In future research, we think the following aspects should be considered: (1) more transition metals and some rare earth ions should be designed to substitute Ca2+ so as to endow n-HA with new properties, such as luminescence, magnetism, and conductivity, which would broaden its application in biomedical fields beyond bone materials, including the diagnosis and treatment of diseases, especially cancer; (2) it is necessary to combine ion doping with functional small molecules or polymers during the preparation of n-HA so as to obtain multi-functional hybrid nano-apatite with high dispersion and good biological activity; (3) key technologies for controlling the size and morphology of n-HA particles should be studied in depth so as to extend their applications in various fields; (4) theoretical calculation of the structure of modified n-HA by means of quantum chemistry should be emphasized so that properties of the modified n-HA, for example, the changes in surface properties and the improvement in dispersion, can be further explained; this could also help predict effective modification methods and thus a more ideal surface state. To summarize, we believe that meaningful surface modifications of n-HA will be developed in the future, which will expand the application of n-HA particles in the biomedical field.

Data Availability Statement: The data that support the findings of this study are rearranged from the reported references and available within the article.

Funding: This work is supported by the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20230518).

Figure 2. The formation mechanism of synthesized HAp templated by ozonolyzed natural rubber latex.
Figure 3. Nano-rod micelles of licorice root extract as a novel green template for the formation of uniform.
Figure 5. (a) TEM and SEM images of n-HA and modified n-HA; (b) comparison between porosity and compressive strength of m-nHA/PU nano-composites with varying concentrations of m-nHA [65].
Figure 6. (a) Dispersion pictures of nano-particles in dichloromethane at different time points; (b) tensile strength of samples [90].
Table 1. Modification methods of n-HA.
Table 2. Comparison of the different chemical modification methods.
Abusive Supervision and Employee Knowledge Sharing: The Roles of Psychological Safety and Perceived Motivational Climate

Drawing on conservation of resources (COR) theory, this study examines the relationship between abusive supervision and employee knowledge sharing and investigates the mechanism underlying their relationship through perceived motivational climate and psychological safety. The data were collected from 337 supervisor-employee dyads from knowledge-based companies in China. Hierarchical regression and path analysis were used for data analysis. The results showed that abusive supervision played a detrimental role in affecting employees' willingness to share knowledge and that a perceived motivational climate moderated the effect of abusive supervision on employee knowledge sharing through psychological safety. Based on the findings, we provide theoretical and practical implications.

Introduction

Knowledge is a strategic resource for organizations. Previous studies have pointed out that employees' engagement in knowledge sharing and organizational support in knowledge management can foster business success and build competitive advantages for companies' short-term and long-term development (Argote & Ingram, 2000). Therefore, research on identifying the factors that influence employees' willingness to engage in knowledge sharing and the relevant mechanisms has generated substantial attention. Among the potential antecedents of knowledge sharing, leaders' behaviors have attracted considerable attention and have been found to have a great influence on employees' behaviors. For instance, transformational leadership (Le & Lei, 2018; Son et al., 2020), empowering leadership (A. Srivastava et al., 2006; Xue et al., 2011), and ethical leadership (Bavik et al., 2018) have been found to positively promote employees' knowledge sharing at work. These studies provide us with a clear picture of the positive effects of leadership on employees' knowledge sharing. Nevertheless, not all leadership engages employees in knowledge sharing. We argue that certain leader behaviors can have negative impacts on employees' willingness to share knowledge. However, few studies have precisely examined the dark sides of leadership that may hinder employee knowledge sharing in an objective manner. The present study addresses this gap by employing conservation of resources (COR) theory (Hobfoll, 1989) to investigate how abusive supervision, a common stressor in the workplace, depletes employees' resources and thereby hinders employees' knowledge sharing behaviors. We aim to enhance the understanding of the psychological mechanisms underlying the process by which negative leadership behaviors influence employees' knowledge sharing.

According to Ipe (2003), knowledge sharing is ''the act of making knowledge available to others within the organization.'' At the individual, team, and organizational levels, knowledge sharing can provide employees with opportunities to pursue better performance (Wang & Noe, 2010). However, when employees possess knowledge that gives them a competitive edge over colleagues, and when that knowledge is difficult to acquire, sharing it can feel risky to employees, and knowledge sharing is often considered an extra-role behavior (Matzler et al., 2011).
Moreover, during knowledge sharing, knowledge holders may be perceived poorly and resisted by knowledge recipients, or even called ''pretentious'' (Wang & Noe, 2010). Therefore, only when employees feel safe enough and are provided with sufficient resources (e.g., incentives, corporate policy) are they likely to engage in knowledge sharing (Hu et al., 2018). Thus, this study proposes that psychological safety, which enables individuals to fully display themselves without concerns regarding negative impacts on their self-image, status, or occupation (Kahn, 1990), might play an important mediating role in the relationship between abusive supervision and knowledge sharing. Consistent with COR theory, we propose that abusive supervision depletes employees' psychological safety and thus affects their knowledge sharing.

Knowledge sharing among individuals is affected not only by a supervisor's leadership style but also by the workplace environment. To better examine the impact of abusive supervision on knowledge sharing, we examine the motivational climate, a group-level variable with a potential moderating effect. The motivational climate refers to group members' shared perception of the success and failure standards regarding their work. Two types of motivational climates exist: mastery climates and performance climates (Nerstad et al., 2013). Mastery climates focus on individual/team learning, development, and hard work, while performance climates define success based on normative abilities and social comparisons (Ames, 1992). According to Hobfoll et al. (2018), the organizational environment is a double-edged sword, which can nurture employees' resources but can also restrain them. Therefore, this study aims to explore the moderating role of the motivational climate in the relationship between abusive supervision and employees' knowledge sharing.

Our study contributes to the current literature in the following aspects. First, this study enriches the abusive supervision literature and the knowledge sharing literature by investigating the influence of abusive supervision on knowledge sharing, thereby addressing the insufficient exploration of the negative factors that hinder employees' knowledge sharing. Second, the context in which individuals are embedded has always been regarded as an important factor affecting individual knowledge sharing. By examining how the mastery climate and performance climate influence the negative relationship between abusive supervision and knowledge sharing across levels, our research verifies both the nourishing and the restraining effects of context on personal resources. Third, by investigating the mediating role of employees' psychological safety in the relationship between abusive supervision and knowledge sharing, we identify the internal mechanism by which abusive supervision influences employees' knowledge sharing. Furthermore, our study reveals the important role of psychological safety in employees' knowledge sharing. The theoretical model of this study is shown in Figure 1.

Abusive Supervision and Knowledge Sharing

Abusive supervision refers to subordinates' perceptions of the extent to which supervisors engage in the sustained display of hostile verbal and nonverbal behaviors, excluding physical contact (Tepper, 2000). Typical examples of abusive supervision include, but are not limited to, public criticism, ridicule, and the silent treatment.
In the United States, 13% of employees report that they have been victims of abusive supervision (Tepper, 2000). According to a survey of more than 10,000 Chinese professionals, more than 70% of employees reported that they had been abused by their supervisors, much more frequently than by colleagues (Shen et al., 2020). Other studies have shown that abusive supervision, as a stress source in the workplace, has significant negative effects on employees' work outcomes, such as increasing psychological tension and stress, decreasing satisfaction at work and in daily life, inducing the intention to quit, raising the level of deviant workplace behaviors, and lowering performance (Tepper, 2000; Tepper et al., 2017; Tepper, 2007). In line with this loss of resources, we predict that employees under abusive leadership may reduce their willingness to engage in, and their effort toward, knowledge sharing, representing an expansion of the scope of abusive supervision research.

Knowledge sharing is a process in which individuals offer task-related information and experience to help others, solve problems, develop new ideas, and implement new processes through cooperation (Cummings, 2004); thus, it is essentially a process in which employees exchange knowledge with each other and jointly create new knowledge (Wang & Noe, 2010). For employees, knowledge sharing is a typical self-determined behavior that requires each individual to invest extra time and energy. When sharing knowledge with others, individuals might risk being subjected to the recipient's maladaptive reactions (Wang & Noe, 2010). Therefore, an employee's discretionary knowledge sharing is promoted only when the individual realizes that the organization will compensate for the risk of the knowledge lost by providing sufficient resources (e.g., support from a supervisor) (Park & Kim, 2018; Wang & Noe, 2010).

According to COR theory, individuals tend to acquire, maintain, and preserve resources such as experience, practice, energy, social support, expertise, and discretionary decision-making (Hobfoll et al., 2018). When encountering a risk of resource loss, experiencing an actual loss, or failing to obtain corresponding benefits after investing resources, individuals experience strong psychological pressure. A supervisor's abusive behavior, such as hostility and ridicule, might cause employees to experience the actual loss of important resources or to perceive a risk of resource loss. In turn, employees might focus on conserving and protecting their existing resources to avoid further losses. For employees, special knowledge, expertise, skills, and information are very important resources. When employees suffer from abusive supervision, they might use their discretion to keep their knowledge to themselves. In addition to the perceived loss of leadership support, which is a vital resource, abusive supervision might further consume the resources that employees set aside to cope with a supervisor's mistreatment, leading employees to reduce their efforts at work and to cut back on positive behaviors such as knowledge sharing. Overall, drawing upon COR theory, we propose that employees who have suffered abusive supervision are unlikely to share knowledge, leading to the following hypothesis:

H1: Abusive supervision is negatively correlated with employee knowledge sharing.
Moderating Effect of the Group Motivational Climate

The achievement context in which employees perform daily tasks plays a very important role in their decision to share knowledge (Connelly et al., 2019). The motivational climate defined by traditional achievement goal theory represents employees' perception of the standards of success and failure embodied in the policies, practices, and procedures implemented in the work environment. Practices and procedures are usually communicated through what leaders support and reward (Černe et al., 2014). Therefore, the mastery climate and the performance climate, representing different value orientations, enable employees to understand which behaviors are promoted and rewarded in the organization. Different climates encourage employees to adopt different methods when addressing diverse information, which inevitably influences the interaction among group members (Nerstad et al., 2018). Therefore, we propose that the performance climate and the mastery climate play different roles in strengthening or weakening the relationship between abusive supervision and employees' knowledge sharing.

The performance climate emphasizes intragroup competition, norms, and social comparisons (Nerstad et al., 2018). Embedded in this climate, employees pay more attention to demonstrating their abilities and to the distribution of abilities among group members. In this compulsive social comparison situation, group members often exhibit anxiety, such as worrying about personal performance, and are often preoccupied with comparing themselves to other group members; attempting to outperform others becomes the goal of employees (Gerber et al., 2018). However, only the most accomplished employees are recognized as the most successful. When employees suffering from abusive supervision perceive their group climate as a performance climate, their knowledge sharing behaviors are further hindered. Under a performance climate, only high performance is recognized. Therefore, to meet the need for self-achievement and the pursuit of surpassing others, employees may actively attend to information or resources that can enable them to ''surpass'' their colleagues. Generally, the line leader is one of the most important sources of feedback for employees in the organization. Consequently, the leader's behavior toward employees becomes an important criterion that employees use to evaluate whether their personal goal (surpassing others) has been achieved (Ashford et al., 2016). Therefore, in a performance climate, employees who suffer from abusive supervision not only experience resource depletion but also become immersed in a sense of failing to ''surpass'' others. In turn, such depletion and sense of failure may hinder employees' willingness to share their ''cherished'' knowledge with other group members. Therefore, we propose the following:

H2a: A performance climate moderates the relationship between abusive supervision and knowledge sharing. Specifically, the higher the performance motivational climate level is, the stronger the negative correlation between abusive supervision and employee knowledge sharing.

In contrast to a performance climate, a mastery climate does not overemphasize the social comparison process and norms within a group but, rather, emphasizes learning, mastery, and skill development (Nerstad et al., 2018).
In a mastery climate, employees actively pursue self-development and the improvement of their capabilities, and an individual's sense of accomplishment is derived mainly from self-comparison, that is, from current performance exceeding prior performance (Černe et al., 2017). A mastery climate is based on cooperation, information exchange, and trust, and important criteria for employees' success include actively helping others and improving oneself while developing skills and contributing to knowledge enhancement at work (Men et al., 2020). Thus, under a mastery climate, group members appreciate personal effort, and they are keen to share knowledge and cooperate with others. Furthermore, group members tend to learn from and cooperate with each other for self-growth: on the one hand, they accept help from others; on the other hand, they happily help others. Thus, a mastery climate builds supportive community relations in which individuals provide positive external resources, such as social support from other group members, giving employees opportunities to cope with increased job demands or pressure (Hobfoll et al., 2018). Although abusive supervision can cause subordinates to perceive a loss of internal resources, the resources obtained from an external environment (e.g., from colleagues) characterized by a mastery climate might alleviate the feeling of resource scarcity caused by abusive supervision. Therefore, in a mastery climate, even when abusive supervision exists, employees are encouraged to share knowledge and information with each other to achieve self-improvement. Therefore, we propose the following hypothesis:

H2b: A mastery climate moderates the relationship between abusive supervision and knowledge sharing. Specifically, the higher the mastery motivational climate level is, the weaker the negative correlation between abusive supervision and employee knowledge sharing.

Mediating Role of Psychological Safety

Psychological safety is essentially a mental state in which the individual believes that there is no threat in the surrounding interpersonal situation and thus no embarrassment or punishment for self-expression (Kahn, 1990). For employees, psychological safety is an important factor that drives knowledge sharing behavior within an organization (Wu & Lee, 2016). Sharing knowledge with others is a risky behavior (Argote & Ingram, 2000). On the one hand, knowledge is an important resource for individuals to maintain a competitive advantage within an organization, and sharing knowledge with others might diminish that advantage (Park & Kim, 2018). On the other hand, knowledge sharing is a self-expression behavior that might also expose an individual's insufficiency and cause discomfort in the recipients of such knowledge (Wu & Lee, 2016). Individuals exhibit knowledge sharing behavior only when they perceive a high level of psychological safety and believe that sharing knowledge with others will not have deleterious results (Park & Kim, 2018). Within an organization, the leader is the agent of the organization who controls the various resources subordinates require to complete their work, and leaders' interactions with subordinates can affect employees' perception of psychological safety (Walumbwa & Schaubroeck, 2009).
Previous studies have revealed that positive leadership behaviors, such as transformational leadership (Carmeli et al., 2014), authentic leadership (S.-M. Liu et al., 2015), ethical leadership (Men et al., 2020; Walumbwa & Schaubroeck, 2009), and shared leadership (S. Liu et al., 2014), which display inclusiveness, understanding, and supportiveness, can effectively improve employees' psychological safety. In contrast, abusive supervision, including the frequent ridiculing and taunting of subordinates, might have a negative effect on the psychological safety of subordinates (A. J. Xu et al., 2015; Yang et al., 2020). Similarly, the organizational environment may affect the relationship between abusive supervision and employee psychological safety (Edmondson & Lei, 2014). For example, under a performance climate, fierce competition exists among group members and employees aim to outperform each other, leading individuals to experience difficulties in obtaining additional supportive resources from colleagues and to perceive knowledge sharing as the loss of important resources (Černe et al., 2014). Therefore, under a performance climate, employees who have experienced abusive supervision might feel obvious helplessness and a sense of a lack of resources, and thus a higher level of psychological insecurity, making them more reluctant to share knowledge with others. Under a mastery climate, cooperation among group members and positive interdependence provide individuals with additional resources that can effectively compensate for the resource losses caused by abusive supervision, making their perception of psychological safety less affected by abusive supervision, which in turn encourages individuals to maintain a high level of knowledge sharing. Based on this analysis, we propose the following hypotheses:

H3a: Employees' psychological safety mediates the interactive effect of the performance climate and abusive supervision on knowledge sharing. Specifically, the higher the group's performance climate level is, the lower the level of psychological safety experienced by employees with a high level of perceived abusive supervision, and thus the lower their level of knowledge sharing behavior.

H3b: Employees' psychological safety mediates the interactive effect of the mastery climate and abusive supervision on knowledge sharing. Specifically, the higher the group's mastery climate level is, the higher the level of psychological safety experienced by employees with a high level of perceived abusive supervision, and thus the higher their level of knowledge sharing behavior.

Samples and Procedures

To test the hypotheses, data from employees and their supervisors were collected from eight high-tech companies in the provinces of Fujian, Guangdong, and Jiangsu in China. To ensure the quality of the questionnaire survey and the effectiveness of the pairing, we first obtained the active support of the human resources department of each company prior to the survey; together with the HR personnel, we selected the leaders of 83 groups and 398 employees as the subjects of this study, who were then coded accordingly. The questionnaires were addressed to each group-based unit, where employees completed the survey, and we collected the data on site.
The employee survey comprised items concerning leadership behavior (abusive supervision), motivational climate (performance climate and mastery climate), and psychological safety, and the group leaders were surveyed using a questionnaire with items concerning group members' knowledge sharing behavior. Generally, a team supervisor has several subordinates. To ensure the accuracy of their evaluation of their subordinates' knowledge sharing, we invited supervisors who worked with their subordinates on a daily basis and had frequent interactions with them to join our survey; such supervisors were expected to have a good understanding of their subordinates' daily behaviors. Because the content of the questionnaire was rather sensitive, it was issued in an envelope with a double-sided adhesive seal, with instructions emphasizing that the questionnaire was conducted anonymously and reminding participants to seal the completed questionnaire in the accompanying envelope and return it to the survey team. In addition, all supervisors and subordinates participated voluntarily, and their anonymity was ensured. This data collection design encouraged participants to give honest answers.

In this study, 435 questionnaires were sent to supervisors and their respective subordinates in 86 groups in the companies; 417 questionnaires were returned from the supervisors, and 398 questionnaires were returned from the subordinates, yielding recovery rates of 95.8% and 91.5%, respectively. After excluding invalid copies, data from 337 supervisor-employee paired questionnaires were obtained; male respondents accounted for 55.5% of the surveyed population, and 59.3% of the employees were younger than 25 years.

Measures

Abusive supervision: The 10-item one-dimensional structured scale developed by Aryee et al. (2007) was adopted. The scale was generated by selecting 10 items from the original scale developed by Tepper (2000) according to the Chinese cultural context. Sample items include ''My supervisor makes negative comments about me to others'' and ''My supervisor tells me my thoughts or feelings are stupid''; the items are scored using a 5-point Likert scale (1 = I don't remember that he/she has ever acted like that toward me; 5 = He/she acts like that frequently). The internal consistency coefficient of the scale in this study was 0.92.

Psychological safety: The four-item one-dimensional structured scale developed by Nembhard and Edmondson (2006) was adopted. Sample items include ''People in this group are comfortable checking with each other if they have questions about the right way to do something'' and ''No one on this team would deliberately act in a way that undermines my efforts''; the items are scored using a 7-point Likert scale (1 = Strongly disagree; 7 = Strongly agree). The internal consistency coefficient of the scale in this study was 0.89.

Motivational climate: The 14-item motivational climate scale developed by Nerstad et al. (2013), containing two dimensions, namely, performance climate (8 items; e.g., ''In my department/work group, it is important to achieve better than others'') and mastery climate (6 items; e.g., ''In my department/work group, one is encouraged to cooperate and exchange thoughts and ideas mutually''), was adopted; the items are scored using a 7-point Likert scale (1 = Strongly disagree; 7 = Strongly agree). The internal consistency coefficients of the performance climate and mastery climate subscales in this study were 0.90 and 0.84, respectively.
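Internal consistency coefficients like those reported above are conventionally Cronbach's alpha values. The following is a minimal sketch of the computation, using simulated 7-point item scores rather than the study's actual data.

```python
# Minimal sketch: Cronbach's alpha for a multi-item Likert scale.
# The item matrix below is simulated; it is not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(337, 1))                          # shared construct
noise = rng.normal(scale=0.8, size=(337, 4))                # item-specific error
items = np.clip(np.round(4 + 1.2 * latent + noise), 1, 7)   # 7-point items

print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha rises as the items covary more strongly relative to their individual variances, which is why it is reported per scale (or per subscale, as with the two climate dimensions above).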
Confirmatory factor analysis showed that the two-dimensional structure of the motivational climate fit well (χ²/df = 1.61, GFI = 0.97, CFI = 0.99, IFI = 0.99, and RMSEA = 0.019). To test the feasibility of aggregating the performance climate and mastery climate within groups, we calculated the Rwg values of the performance climate and mastery climate and the intragroup correlation coefficients, that is, ICC(1) and ICC(2). For the performance climate, the Rwg ranged from 0.56 to 0.98, with an average of 0.94, and the ICC(1) and ICC(2) values were 0.38 and 0.73, respectively. For the mastery climate, the Rwg ranged from 0.66 to 0.98, with an average of 0.91, and the ICC(1) and ICC(2) values were 0.35 and 0.71, respectively. According to James (1982), the Rwg should be greater than 0.70, and the ICC(1) and ICC(2) values should be greater than 0.05 and 0.5, respectively. These results indicate that the average Rwg values of the performance climate and mastery climate exceeded the threshold of 0.70 and that the ICC(1) and ICC(2) values also exceeded the thresholds of 0.05 and 0.5, respectively. Therefore, we concluded that the performance climate and mastery climate have good consistency within groups, and hence the individual-level measurements can be aggregated to the group level.

Knowledge sharing: The 7-item one-dimensional structured knowledge sharing behavior scale developed by Srivastava et al. (2006) was adopted. Sample items include ''The subordinate shares his/her special knowledge and expertise with others'' and ''The subordinate shares a lot of information with others.'' The scale is scored using a 7-point Likert scale (1 = Strongly disagree; 7 = Strongly agree). The internal consistency coefficient of the scale was 0.89.

Control variables: The employee's gender, age, work experience, and education as well as the supervisor's gender and work experience were used as control variables in this study.

Confirmatory Factor Analysis

To examine the discriminant validity of the measures, we conducted a confirmatory factor analysis using Amos 24.0. We tested whether the five-factor model fit the data better and found that, relative to the other available models, the five-factor model showed the best fit and supported the hypothesized model, as shown in Table 1. Table 2 shows the descriptive statistics of all variables used in this study. The results showed that abusive supervision was significantly negatively correlated with employee knowledge sharing, which is consistent with our prediction. In addition, there was a negative correlation between abusive supervision and employee psychological safety and a significant positive correlation between employee psychological safety and knowledge sharing, indicating that the key variables of this study are suitable for further analyses.

Hypotheses Testing

HLM 6.08 software was used to test the hypotheses. The control variables, including the supervisors' gender and the employees' gender, age, work experience, education, years working under the direction of the supervisor, and working hours, were included in the HLM analysis. The results are shown in Table 3. Hypothesis 1 proposes that abusive supervision negatively affects employee knowledge sharing. Model M1 in Table 3 indicates that abusive supervision significantly negatively predicts employee knowledge sharing. Therefore, H1 is supported.
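For reference, the aggregation statistics reported above (rwg(j), ICC(1), ICC(2)) can be computed mechanically. The sketch below uses simulated group ratings, a uniform null distribution for rwg, and a one-way ANOVA decomposition for the ICCs; it simplifies by assuming roughly equal group sizes, and is not the study's exact computation.

```python
# Minimal sketch: within-group agreement rwg(j) and ICC(1)/ICC(2), used to
# justify aggregating individual ratings to the group level. Simulated data.
import numpy as np

def rwg_j(group_items: np.ndarray, a_options: int = 7) -> float:
    """group_items: members x items matrix for ONE group.
    Uniform-null expected variance: (A^2 - 1) / 12."""
    j = group_items.shape[1]
    s2_bar = group_items.var(axis=0, ddof=1).mean()   # mean observed item variance
    ratio = s2_bar / ((a_options**2 - 1) / 12.0)
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)  # negative values are
                                                          # usually truncated to 0

def icc_1_2(scores):
    """scores: list of per-group arrays of scale means (one-way ANOVA).
    Uses the average group size k; exact only for equal-sized groups."""
    k = np.mean([len(g) for g in scores])
    grand = np.concatenate(scores).mean()
    msb = k * sum((g.mean() - grand) ** 2 for g in scores) / (len(scores) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in scores) / \
          (sum(len(g) for g in scores) - len(scores))
    icc1 = (msb - msw) / (msb + (k - 1) * msw)
    icc2 = (msb - msw) / msb   # Spearman-Brown reliability of group means
    return icc1, icc2

rng = np.random.default_rng(1)
groups = [rng.normal(loc=rng.normal(4, 0.6), scale=0.5, size=5) for _ in range(83)]
icc1, icc2 = icc_1_2(groups)
print(f"ICC(1) = {icc1:.2f}, ICC(2) = {icc2:.2f}")
```

High rwg establishes that members of a group agree with each other, while ICC(1)/ICC(2) establish that groups differ reliably from one another; both are needed before treating climate as a group-level construct.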
Knowledge sharing: The 7-item one-dimensional knowledge sharing behavior scale developed by Srivastava et al. (2006) was adopted. Sample items include "The subordinate shares his/her special knowledge and expertise with others" and "The subordinate shares a lot of information with others." The scale is scored on a 7-point Likert scale (1 = Strongly disagree; 7 = Strongly agree). The internal consistency coefficient of the scale was 0.89.

Control variables: The employee's gender, age, work experience, and education, as well as the supervisor's gender and work experience, were used as control variables.

Confirmatory Factor Analysis

To assess the discriminant validity of the measures, we conducted a confirmatory factor analysis in Amos 24.0. Relative to the alternative models, the hypothesized five-factor model showed the best fit to the data, as shown in Table 1. Table 2 shows the descriptive statistics of all variables used in this study. Abusive supervision was significantly negatively correlated with employee knowledge sharing, consistent with our prediction. In addition, abusive supervision was negatively correlated with employee psychological safety, and employee psychological safety was significantly positively correlated with knowledge sharing, indicating that the key variables are suitable for further analyses.

Hypothesis Testing

HLM 6.08 software was used to test the hypotheses. The control variables, including the supervisors' gender and the employees' gender, age, work experience, education, years working under the direction of the supervisor, and working hours, were included in the HLM analysis. The results are shown in Table 3. Hypothesis 1 proposes that abusive supervision negatively affects employee knowledge sharing. Model M1 in Table 3 indicates that abusive supervision significantly and negatively predicts employee knowledge sharing. Therefore, H1 is supported.

We propose that the motivational climate moderates the relationship between abusive supervision and employee knowledge sharing, such that a performance climate strengthens the negative relationship and a mastery climate weakens it. Moreover, the interaction between abusive supervision and motivational climate (performance climate and mastery climate) could affect employee knowledge sharing through employee psychological safety. To test these hypotheses, following the approach to mediation analysis recommended by Preacher et al. (2007), we treated the interaction term between the motivational climate and individual-level abusive supervision as the antecedent variable and adopted the cross-level mediation test procedure recommended by Mathieu and Taylor (2007) to examine the mediating effect linking the interaction term to the outcome variable. The validation of the cross-level mediating effects is based on four criteria: (1) the independent variables (the interaction terms of performance climate x abusive supervision and mastery climate x abusive supervision) significantly predict the outcome variable (knowledge sharing); (2) the independent variables significantly predict the mediating variable (psychological safety); (3) the mediating variable significantly predicts the dependent variable (knowledge sharing); and (4) when the independent variables and the mediating variable are entered into the regression equation simultaneously, the mediating variable plays a full mediating role if the effect of the independent variables is no longer significant, and a partial mediating role if both remain significant. The results are shown in Table 3. Model M2 in Table 3 indicates that the interaction terms between motivational climate and abusive supervision significantly predict employee knowledge sharing. Specifically, the interaction between performance climate and abusive supervision significantly and negatively predicts employee knowledge sharing, indicating that in a group with a high performance climate, abusive supervision and employee knowledge sharing are more strongly negatively related. Therefore, H2a is supported. The interaction between mastery climate and abusive supervision significantly and positively predicts employee knowledge sharing, indicating that under a high mastery climate, the negative relationship between abusive supervision and employee knowledge sharing is weakened. Therefore, H2b is supported. The interactive effect of performance climate and abusive supervision on employee knowledge sharing is shown in Figure 2: the higher the performance climate in a group, the stronger the negative relationship between abusive supervision and employee knowledge sharing.
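As an illustration of how such a cross-level model can be specified, the sketch below uses Python's statsmodels as a stand-in for HLM 6.08 (the authors' actual tool); the file and column names are hypothetical. An M2-style model adds the cross-level interaction terms, and an M4-style model then adds the mediator:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per employee, with group-level
# climate scores merged onto each employee's record.
df = pd.read_csv("paired_survey.csv")

# M2-style model: knowledge sharing on abusive supervision, the two
# climates, and their cross-level interactions, with a random intercept
# per group.
m2 = smf.mixedlm(
    "knowledge_sharing ~ abusive * performance_climate"
    " + abusive * mastery_climate + gender + age",
    data=df,
    groups=df["group_id"],
).fit()

# M4-style model: add the mediator (psychological safety).
m4 = smf.mixedlm(
    "knowledge_sharing ~ abusive * performance_climate"
    " + abusive * mastery_climate + psych_safety + gender + age",
    data=df,
    groups=df["group_id"],
).fit()

print(m2.summary())
print(m4.summary())
```

Under the criteria above, a significant interaction term in the M2-style model, together with its attenuation to nonsignificance once psych_safety enters the M4-style model while psych_safety itself remains significant, is the pattern read as full mediation.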
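The interaction plots in Figures 2 and 3 are simple-slopes plots of this kind; the short sketch below (with made-up coefficients, purely to illustrate the construction) evaluates the fitted regression at one standard deviation above and below the climate mean:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fixed-effect estimates from an M2-style model:
# ks = b0 + b1*AS + b2*PC + b3*(AS x PC)
b0, b1, b2, b3 = 5.0, -0.30, 0.10, -0.25

abusive = np.linspace(1, 5, 50)   # 5-point abusive supervision scale
pc_mean, pc_sd = 4.0, 1.0         # hypothetical climate mean and SD

for level, label in [(pc_mean - pc_sd, "low performance climate"),
                     (pc_mean + pc_sd, "high performance climate")]:
    ks = b0 + b1 * abusive + b2 * level + b3 * abusive * level
    plt.plot(abusive, ks, label=label)

plt.xlabel("Abusive supervision")
plt.ylabel("Predicted knowledge sharing")
plt.legend()
plt.show()
```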
Under a high performance climate, abusive supervision predicts an even lower level of knowledge sharing, whereas under a low performance climate, the predictive effect of abusive supervision on knowledge sharing is not significant. The interaction effect of mastery climate and abusive supervision on employee knowledge sharing is shown in Figure 3. The results reveal that in a group with a lower mastery climate, abusive supervision and employee knowledge sharing exhibit a stronger negative relationship: when the mastery climate is low, abusive supervision predicts an even lower level of knowledge sharing, whereas when the mastery climate is high, the predictive effect of abusive supervision on knowledge sharing is not significant. These results further support H2a and H2b. According to Model M3, only the performance climate moderates the relationship between abusive supervision and psychological safety: in a group with a high performance climate, employees who have suffered abusive supervision experience a lower sense of psychological safety. The moderating effect of the mastery climate on the relationship between abusive supervision and psychological safety is not significant, indicating that regardless of the mastery climate level, the relationship between abusive supervision and employee psychological safety changes little. Building on M2, employee psychological safety is added to the equation to form Model M4. The results show that employee psychological safety significantly and positively predicts employee knowledge sharing, while the effect of the performance climate x abusive supervision interaction on knowledge sharing becomes nonsignificant and the effect of the mastery climate x abusive supervision interaction remains significant. This indicates that employee psychological safety fully mediates the interactive effect of performance climate and abusive supervision on knowledge sharing, whereas the interactive effect of mastery climate and abusive supervision on knowledge sharing does not operate through employee psychological safety. Therefore, H3a is supported, but H3b is not.

Conclusion and Discussion

In a volatile, uncertain, complex, and ambiguous (VUCA) business world, employee knowledge sharing is critical for the effectiveness of an organization and the maintenance of its competitive advantage. As agents of the organization, leaders play an important role in increasing or decreasing individuals' resources and in predicting employees' engagement in knowledge sharing. Based on COR theory, we examined the effect of abusive supervision on employee knowledge sharing, the moderating effect of the motivational climate on this relationship, and the mediating role of psychological safety in the effect of the interaction between motivational climate and abusive supervision on knowledge sharing. We found that although abusive supervision weakens employee knowledge sharing, its negative effects vary across motivational climate types within a group. Specifically, the higher the performance climate in a group, the lower the psychological safety of employees who have suffered abusive supervision, and the lower their level of knowledge sharing behavior.
Next, we discuss the theoretical and practical implications of these findings.

Theoretical Implications

With the increasing importance of individuals' knowledge sharing in the knowledge economy, it is crucial to identify the factors that might hinder employees' knowledge sharing intentions and behavior. The results of this study extend previous research in several respects. First, based on COR theory, we found that employees who have been abused by their supervisors experience a loss of resources, potentially causing them to reduce their knowledge sharing in order to protect their remaining resources. This finding provides a new line of evidence on the role of leadership behavior in shaping employee knowledge sharing. Prior studies have noted that leadership can strongly influence individuals' willingness to share knowledge, with transformational and empowering leadership styles promoting knowledge sharing and destructive leadership styles hindering it (Wang & Noe, 2010). Because the effect of leaders' abusive behavior on employees' knowledge sharing has rarely been studied, our confirmation, from a COR perspective, that abusive supervision is related to knowledge sharing helps to close this gap in the knowledge sharing literature. Second, by introducing the moderating effect of the motivational climate, we examined the boundary conditions of the effect of abusive supervision on employee knowledge sharing. We found that the two dimensions of the motivational climate, mastery climate and performance climate, moderate the negative effect of abusive supervision in opposite directions: a mastery climate buffers the negative effect of abusive supervision, whereas a performance climate aggravates it. Although the organizational environment shapes the devastating effects of abusive supervision, few studies have addressed its moderating role (e.g., hostile working conditions or inequity at work) in the relationship between abusive supervision and outcome variables. By introducing the motivational climate, an organizational contextual factor, as a mitigator or enhancer of the negative effect of abusive supervision, we expanded the scope of research on abusive supervision. Our findings are consistent with prior studies, and we highlight that the knowledge-sharing culture within an organization is an important situational factor for predicting whether employees opt to engage in knowledge-sharing behaviors. Moreover, the results verify the argument of Hobfoll et al. (2018) that the external environment affects the inner mechanics of resources, supporting the call for further research on contextual factors when applying COR theory to interpret organizational phenomena. Third, we found that employee psychological safety mediates the interactive effect of performance climate and abusive supervision on knowledge sharing behavior. Specifically, in high-performance-climate groups, abusive supervision leads employees to experience low levels of psychological safety and, in turn, to display low levels of knowledge sharing behavior. Previous studies have demonstrated that abusive supervision is significantly correlated with employee psychological safety (W. Liu et al., 2016; A. J.
Xu et al., 2015; Yang et al., 2020) and that psychological safety is an important factor affecting employee knowledge sharing. In this study, we introduced psychological safety as the psychological mechanism connecting abusive supervision and knowledge sharing, offering a new perspective on the "black box" between the two. We shed light on how abusive supervision affects employee knowledge sharing behaviors while underscoring the key role of psychological safety in the knowledge-sharing process. Moreover, we found that psychological safety does not mediate the interactive effect of mastery climate and abusive supervision on employee knowledge sharing. Thus, in groups with a high mastery climate, employees who experience abusive supervision can still exhibit a high level of knowledge sharing, but this effect is not exerted through enhanced psychological safety; regardless of the mastery climate level within an organization, the effect of abusive supervision on employee psychological safety does not vary greatly. Baumeister et al. (2001) noted that negative events exert stronger effects than positive ones; relative to the joint effect of a performance climate and abusive supervision, the effect on employee psychological safety of a mastery climate, which represents a positive external environment, may be rather weak, as can be seen in the results of Model M3 in Table 3. This result therefore provides new evidence that the effects of negative events outweigh those of positive events.

Practical Implications

The results of this study have several implications for management practice. First, organizations should be aware that leadership behavior can be an important factor influencing various employee behaviors, such as knowledge sharing. Our results demonstrate that the knowledge management system implemented by an organization might become ineffective when employees experience abusive supervision. Given the negative effect of abusive supervision on knowledge sharing behavior, organizations should promote effective leadership behaviors that encourage employees' knowledge sharing. For businesses that depend heavily on innovation, it is necessary to adopt measures to reduce abusive supervision, such as providing leadership training or coaching to improve managers' interpersonal skills, raising managers' sensitivity to abusive behaviors and strengthening their self-monitoring capabilities, and setting up a feedback channel that allows employees to report abusive supervision anonymously. Second, the results indicate that a performance climate, which encourages comparison and competition among individuals, aggravates the negative impact of abusive supervision on employee knowledge sharing, whereas a mastery climate, which values personal improvement and development, mitigates it. These findings suggest that managers must be aware that the work climate affects employee knowledge sharing. It is therefore worthwhile to create, through management practices, a mastery motivational climate that encourages self-development and fulfilment.
For example, work can be designed as meaningful, challenging, and diverse tasks, or job crafting can be encouraged, giving employees more autonomy in addressing challenges and stimulating their motivation for self-improvement. Finally, we found that employees' psychological safety plays an important role in their knowledge sharing. Our findings suggest that employees are more likely to feel psychologically safe when they perceive a high level of job security and when trusting relationships are widespread in the organization. Psychological safety can be fostered, for example, by allowing open discussion of all subjects in the organization, setting collective goals so that employees do not blame each other, and emphasizing the importance of positive work relationships and trust toward colleagues and leaders.

Limitations and Future Research Directions

This study has several limitations. First, because of the cross-sectional design, we cannot infer causality. Based on the present data, we cannot rule out that abusive supervision is itself a response to reduced employee knowledge sharing: employees' unwillingness to share knowledge may prompt supervisors to adopt abusive behaviors, especially when the supervisor is aware that knowledge sharing is critical for the success of the company. Future studies should therefore investigate the causal relationship between abusive supervision and employee knowledge sharing through longitudinal designs or field experiments. Second, abusive supervision, motivational climate, and psychological safety were measured by self-report. Given the sensitivity of abusive supervision and motivational climate, employees may be reluctant to report the real situation because of social desirability, so future research should develop "safer" measurement approaches that dispel participants' fear when assessing sensitive variables. In addition, to measure knowledge sharing we adopted a supervisor-rating method that has been widely used in previous studies, which mitigated common method bias to some extent (Wang & Noe, 2010). However, evaluating employee knowledge sharing from multiple sources, such as supervisors, subordinates, and colleagues, could further strengthen this measurement. Third, like previous studies, this study focused on the supervisor-subordinate dyad. In recent years, however, it has been found that a third party's perception of the abusive supervision of a shared supervisor also affects the third party's own behavior (E. Xu et al., 2020). When a third party perceives that a colleague has been abused by a common supervisor, how will the third party's knowledge-sharing behavior change? Will the third party share knowledge with the victim to help solve work problems and jointly cope with the boss's abusive supervision? Or will the third party choose schadenfreude and further restrict the victim's resources (for example, by intentionally withholding knowledge)? In future studies, it would therefore be interesting to include a third party in the research framework to reveal the mechanisms of employee knowledge sharing behavior more comprehensively.
Fourth, the sample was collected from employees of high-tech companies in China, which may limit the generalizability of the findings to other industries and countries, and cultural factors may influence the results. Future research could therefore examine employees outside the tech industry, such as doctors, teachers, and researchers, and conduct cross-national studies to further test the research model. Despite these limitations, our findings contribute to a better understanding of the knowledge-sharing mechanism by incorporating the impacts of abusive leadership behavior, motivational climate, and psychological safety.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the National Natural Science Foundation of China (Grant Nos. 71801097 and 72172048).

Ethics Statement

All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Early detection of COVID-19 outbreaks using textual analysis of electronic medical records

Purpose
Our objective was to develop a tool promoting early detection of COVID-19 cases by focusing epidemiological investigations and PCR examinations during a period of limited testing capabilities.

Methods
We developed an algorithm for analyzing medical records written by healthcare providers in the Israeli Defense Forces. The algorithm used textual analysis to detect patients presenting with suspicious symptoms and was tested among 92 randomly selected units. Detection of a potential cluster of patients in a unit prompted a focused epidemiological investigation aided by data provided by the algorithm.

Results
During a month of follow-up, the algorithm flagged 17 of the units for investigation. The subsequent epidemiological investigations led to the testing of 78 persons and the detection of eight cases in four clusters that had previously gone unnoticed. The resulting positive test rate of 10.25% was five times higher than the IDF average at the time of the study. No cases of COVID-19 in the examined units were missed by the algorithm.

Conclusions
This study depicts the successful development and large-scale deployment of a textual-analysis-based algorithm for early detection of COVID-19 cases, demonstrating the potential of natural language processing of medical text as a tool for promoting public health.

Introduction
The COVID-19 pandemic poses a significant challenge to healthcare systems worldwide, potentially overwhelming local resources [1]. As a result, many countries implemented measures to prevent its spread through lockdowns and contact tracing [2,3]. The first cases of COVID-19 infection were detected in Israel in late February 2020, resulting in the first wave of the pandemic during March and April [4]. Early and aggressive measures taken by the Israeli authorities were effective in stopping the first wave of infections by early May [5]. These efforts were hindered, however, by the lack of sufficient testing kits, leaving many potentially infected persons untested. Thus, early detection of new outbreaks, an immensely important part of controlling the disease's spread, was delayed [6]. The Israeli Defense Forces (IDF) provide health care services for enlisted personnel, practically serving as a health maintenance organization (HMO). During the first wave of the COVID-19 outbreak in Israel, a team of physicians and data scientists was assembled to tackle the challenges posed by the new epidemic among the enlisted population. This article depicts the development and testing of a computational tool for analyzing electronic medical records (EMRs) that employs natural language processing (NLP) to examine free text written by physicians in order to detect potential COVID-19 cases and guide focused epidemiological investigations and testing.

Study population
The study population included all military personnel on active duty throughout March and April 2020 in 92 units of approximately company size (80-150 persons).

Healthcare providers
IDF personnel are treated exclusively by the medical corps' staff, consisting of physicians, nurses, and medics. Additionally, during the COVID-19 pandemic, a designated call center was established to address issues regarding potential contact with infected individuals or to record new onset of symptoms.
All healthcare staff, including call center personnel, are required to record every visit or call in the EMR. The system we designed had access to records from all healthcare providers.

Data collection
Data were extracted from all visits conducted by physicians, nurses, medics, and emergency call center personnel that were recorded in the IDF electronic medical records (EMRs) from February 1st onward. The records contain three text fields: patient-recounted medical history, physical examination, and treatment plan. Because of significant variation in the way healthcare workers filled in these fields, they were combined into a single body of text for analysis. Additionally, fever measurements recorded during visits were extracted from pre-specified structured fields of the EMR.

Symptom detection
The algorithm used textual analysis to scan the EMR text for suspected COVID-19 cases, focusing on the most common symptoms presented by COVID-19 patients: fever and cough [7]. The text analysis was rule based. First, the text was examined for verbs and nouns denoting each symptom, for example "cough" and "coughed". The three words nearest each symptom mention were then examined for commonly used negation terms that would rule it out as a positive occurrence; this is analogous to looking for "no fever" or "without fever" in English. This approach, while simple, proved highly effective because IDF medical personnel tend to repeat the same wording and sentence structure when reporting symptoms. Additional corrections were made for cases in which several symptoms were negated together (e.g., "the patient presents without fever or cough"). Fever was not always recorded in the intended structured field, and additional fever measurements were commonly described in the text. To extract these measurements, numerical values between 34 and 43 were searched for within a window of five words before and after the term "fever". This numerical range is highly specific for body temperature: other vital signs take higher values, while most common blood test results take lower values. Additional corrections were made to avoid common instructions that did not represent actual fever measurements, for example, "return to the clinic if your fever rises above 38°C". Our algorithm used the sub-febrile cutoff of 37.5°C to flag a visit as containing the symptom of fever.

Definition of suspected infection cluster
A person suspected of having COVID-19, a "potential case", was defined as a person suffering from cough or fever whose symptoms were recorded by a healthcare professional in the EMR. Each person was counted once, no matter how many symptoms or visits they had during the one-week measurement window. The results were compared with the previous week, with a suspected cluster defined as a twofold increase in unique suspected cases among the unit's registered personnel. The system's workflow is presented in Fig. 1; a minimal sketch of the rule-based logic is given below.
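To illustrate, here is a minimal sketch of the rule-based logic described above. It is illustrative only: the production system operated on records in their original language, and the keyword lists, negation terms, and edge-case handling shown here are simplified assumptions.

```python
import re

SYMPTOM_TERMS = {"cough": {"cough", "coughed", "coughing"},
                 "fever": {"fever", "febrile"}}
NEGATION_TERMS = {"no", "without", "denies", "not"}

def detect_symptoms(text):
    """Flag a symptom unless a negation term appears within the three
    words preceding the symptom mention (cf. "no fever", "without fever").
    Corrections for jointly negated symptoms are omitted for brevity."""
    tokens = re.findall(r"[\w.]+", text.lower())
    found = set()
    for i, tok in enumerate(tokens):
        for symptom, variants in SYMPTOM_TERMS.items():
            if tok in variants and not NEGATION_TERMS & set(tokens[max(0, i - 3):i]):
                found.add(symptom)
    return found

def extract_fever(text, cutoff=37.5):
    """Search for a plausible body temperature (34-43) within five words
    before or after the term "fever"; return it if it meets the cutoff.
    Filtering of instruction phrases ("if your fever rises above 38")
    is omitted here."""
    tokens = re.findall(r"[\w.]+", text.lower())
    for i, tok in enumerate(tokens):
        if tok == "fever":
            for w in tokens[max(0, i - 5):i + 6]:
                try:
                    value = float(w)
                except ValueError:
                    continue
                if 34 <= value <= 43 and value >= cutoff:
                    return value
    return None

def is_suspected_cluster(cases_this_week, cases_last_week):
    """Cluster rule: at least a twofold increase in unique suspected cases
    relative to the previous week (floor of 1 is an assumption to avoid
    flagging a 0 -> 0 week)."""
    return cases_this_week >= 2 * max(cases_last_week, 1)

print(detect_symptoms("patient presents with cough, no fever"))  # {'cough'}
print(extract_fever("fever measured 38.2 at triage"))            # 38.2
```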
Algorithm evaluation
After internal validation, the algorithm was evaluated in two stages. First, its ability to detect cough and fever symptoms in EMR records was examined, and its performance was compared with a manual survey conducted by physicians of the IDF medical corps independently of our study. Second, the algorithm was examined for its ability to assist in discovering new COVID-19 outbreaks. This was accomplished prospectively by monitoring 92 randomly selected units over a period of one month and alerting the epidemiological branch of the IDF medical corps when the algorithm detected a possible outbreak. It is important to note that the epidemiological effort conducted in response to an alert, while guided by the results of the algorithm, was independent of our study, and no author of this study participated in the investigations.

Ethics statement
The study was approved by the IDF medical corps IRB.

Symptom detection
Manual monitoring by physicians was conducted in nine units during a period of one month and detected 32 persons who reported cough and 27 who reported fever. Our algorithm detected all cases reported manually, as well as three more cases of cough and four more cases of fever. To confirm the additional detected symptomatic cases, we contacted the physicians who conducted the original manual survey and asked them to re-examine the cases in question. The repeated evaluation found that all the additional symptoms detected by our algorithm were correctly labeled; the human error was attributed to the hastiness of the original survey.

Detection of new COVID-19 outbreaks
The algorithm was next tested for its ability to assist in discovering new COVID-19 cases. To this end, 92 randomly selected units were followed for one month. During the follow-up period, a twofold or greater increase in suspected cases was detected in 17 of the units examined, and the results were reported to the epidemiology branch of the IDF medical corps. The subsequent epidemiological investigations used the data provided by the algorithm but were not limited to it. The investigations identified and quarantined 78 suspected patients, who underwent polymerase chain reaction (PCR) testing for COVID-19. The tests diagnosed eight COVID-19 patients (10.25% of tests) in four clusters. The specificity for cluster detection was 81.5% and the positive predictive value (PPV) was 23.5%, corresponding to a false positive rate of 76.5% for disease clusters. We compared these results with testing conducted using the IDF medical corps' standard survey for COVID-19 cases during the same period. The standard survey was based on symptoms and on exposure history to known sick patients, and each PCR test was approved by a public health officer; the symptoms covered were cough, shortness of breath, sore throat, fever > 38°C, and loss of taste and smell [8]. Our method achieved a positive test rate of 10.25%, compared with 2.25% using the standard IDF survey (P < 0.001); the comparison is presented in Table 1. It should be noted that asymptomatic and mildly symptomatic persons were not tested for COVID-19 during this period because of the shortage of PCR tests; thus, the sensitivity of the two methods cannot be fully assessed and compared. During the study period, the persons included in the study continued to receive the same medical treatment as any other soldiers in the IDF and could undergo COVID-19 testing as part of the standard survey. However, all persons who tested positive in the units included in the study were detected by our algorithm.
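The reported performance figures can be reproduced from the study counts. The short sketch below is a back-of-the-envelope check, assuming (as the paper reports) that no clusters were missed among the 92 units; reading the reported 81.5% specificity as the share of units not flagged is our interpretation.

```python
# Counts reported in the study
units = 92            # randomly selected units followed for one month
flagged = 17          # units flagged by the algorithm
true_clusters = 4     # flagged units with confirmed COVID-19 clusters
tested = 78           # persons tested after the investigations
positives = 8         # confirmed COVID-19 cases

ppv = true_clusters / flagged              # 4/17  = 23.5%
positive_test_rate = positives / tested    # 8/78  = 10.26% (reported as 10.25%)
false_positive_rate = 1 - ppv              # 13/17 = 76.5%

# Matches the reported 81.5% if specificity is taken as the share of
# units not flagged, under the assumption of zero missed clusters.
specificity = (units - flagged) / units    # 75/92 = 81.5%

print(f"PPV={ppv:.1%}, positive test rate={positive_test_rate:.2%}, "
      f"specificity={specificity:.1%}")
```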
Discussion
In this study, we described the construction and evaluation of a free-text-analyzing algorithm that, through large-scale examination of electronic medical records, was instrumental in detecting new COVID-19 outbreaks. The algorithm-assisted testing resulted in a fivefold higher positive test rate during a period of scarce testing capabilities. COVID-19 posed a significant challenge for health care systems around the globe. With no effective treatment or vaccine available during the initial phases of the pandemic, the emphasis was put on containment. The tools available to governments were crude, including large-scale lockdowns and cumbersome contact tracing systems [9]. Another obstacle during the first wave of the pandemic was the shortage of testing capacity [10]. To help mitigate these challenges, our group was tasked with devising a monitoring system for potential COVID-19 outbreaks based on electronic medical records. The IDF has been a pioneer in adopting electronic health records in Israel, and the medical corps' highly digitized healthcare system is what enabled this study. EMRs provide an indispensable resource for the treating physician, have been shown to improve clinical outcomes [11], and offer an efficient platform for research [12]. This study adds to the growing use of EMRs for surveillance and monitoring aimed at improving public health [13,14]. Manual data collection and monitoring of disease spread tend to be slow and mistake-ridden [15], giving rise to various automation attempts. Notable previous examples include EMR surveillance of bronchiolitis in the emergency department as an early warning of increased winter cases [16] and detection of local clusters of Legionnaires' disease [17]. Automated monitoring proved crucial in the face of the fast-spreading COVID-19 infection [18,19]. Free medical text, while harder to analyze than structured data, holds the promise of vast information, such as patients' symptoms and signs on physical examination. Previous attempts to predict and monitor infectious disease outbreaks using free-text analysis relied mainly on social media or search data, the notable example being the Google Flu system [20,21]. These approaches were later deemed unreliable because of various artifacts and distortions that are inherent in free text generated by the general public [22]. Medical text written by trained health care professionals is likely to be better structured and less biased, mitigating the setbacks that plagued the previously mentioned systems [22]. During the COVID-19 pandemic, natural language processing and medical text mining have been used to improve disease registration of patients with undocumented positive PCR results [23] and have provided a comprehensive research tool for disease symptoms and progression through the COVID-19 SignSym tool [24]. Additionally, previous work has shown that COVID-19-like symptoms detected from EMR free text correlate well with the trend in positive PCR results [25]. Our system advanced previous attempts by deploying large-scale monitoring of a large, geographically dispersed population. It is based on extracting COVID-19-like symptoms from free medical text, with an emphasis on simplicity to allow early deployment. It also took advantage of the military social dynamic, in which intra-unit contact is much more common than inter-unit contact, so that clusters of infection were well defined within units. Another advantage of our system was its design as a decision support tool, both focusing and supplying information for the subsequent epidemiological investigation.
While human judgment remained part of the process, the augmentation and support provided by our system culminated in a significantly higher positive test rate than the earlier testing based on human judgment alone. Similar circumstances could arise in civilian settings, such as tightly connected local communities and education systems, where clusters of COVID-19 infection could be detected and prompt action taken to prevent further spread [26]. Similarly, tools for symptom monitoring could be used to detect local environmental exposures (e.g., water or air pollution) that might otherwise go unnoticed until a critical mass of cases accumulates [27]. This study has several limitations. First, it was conducted during the first wave of the pandemic in Israel, when the rate of COVID-19 cases was low. Despite this, the study showed an acceptable PPV even in such a low-prevalence situation. Second, the algorithm was designed to detect clusters of patients presenting symptoms that can be caused by COVID-19 but also by other respiratory infections. To mitigate the personal cost of false positive results, an additional stage of manual investigation determined which patients should be tested or quarantined. Third, the sensitivity of the algorithm cannot be fully assessed, as general screening was not performed and mildly symptomatic or asymptomatic cases could have been missed. In summary, this study demonstrated the use of a natural language processing tool for early detection of possible COVID-19 patient clusters by analyzing medical free text stored in EMRs. It thus provided an automatic, online monitoring system that successfully guided testing and epidemiological investigation for the early detection of infection cases.
Teneurins: Domain Architecture, Evolutionary Origins, and Patterns of Expression

Disruption of teneurin expression results in abnormal neural networks, but just how teneurins support the development of the central nervous system remains an area of active research. This review summarizes some of what we know about the functions of the various domains of teneurins, the possible evolution of teneurins from a bacterial toxin, and the intriguing patterns of teneurin expression. Teneurins are a family of type-2 transmembrane proteins. The N-terminal intracellular domain can be processed and localized to the nucleus, but the significance of this nuclear localization is unknown. The extracellular domain of teneurins is largely composed of tyrosine-aspartic acid repeats that fold into a hollow barrel, and the C-terminal domains of teneurins are stuffed, at least partly, into the barrel. A 6-bladed beta-propeller is found at the other end of the barrel. The same arrangement (6-bladed beta-propeller, tyrosine-aspartic acid repeat barrel, and the C-terminal domain inside the barrel) is seen in toxic proteins from bacteria, and there is evidence that teneurins may have evolved from a gene encoding a prokaryotic toxin via horizontal gene transfer into an ancestral choanoflagellate. Patterns of teneurin expression are often, but not always, complementary. In the central nervous system, where teneurins are best studied, interconnected populations of neurons often express the same teneurin. For example, in the chicken embryo neurons forming the tectofugal pathway express teneurin-1, whereas neurons forming the thalamofugal pathway express teneurin-2. In Drosophila melanogaster, Caenorhabditis elegans, zebrafish, and mice, misexpressing or knocking out teneurins leads to abnormal connections in the neural networks that normally express the relevant teneurin. Teneurins are also expressed in non-neuronal tissue during development, and in at least some regions the patterns of non-neuronal expression are also complementary. The function of teneurins outside the nervous system remains unclear.

INTRODUCTION

Teneurins are type-2 transmembrane proteins with a variable N-terminal intracellular domain and a large, phylogenetically conserved extracellular domain. The extracellular domain features epidermal growth factor (EGF)-like domains, a 6-bladed beta-propeller composed of NHL repeats, tyrosine-aspartic acid (YD) repeats, a rearrangement hot spot (RHS) core protein domain, and a C-terminal domain related to both GHH toxins and corticotropin-releasing factor (Figure 1A). The genomes of most vertebrates include four related teneurin genes encoding teneurins numbered 1 through 4 (Tucker et al., 2012). In Drosophila melanogaster there are two teneurins, ten-a and ten-m (Baumgartner et al., 1994; Levine et al., 1994), and in Caenorhabditis elegans there is a single teneurin, ten-1 (Drabikowski et al., 2005). This review concentrates on what is known about the domain organization of the best-studied teneurins, what can be inferred about their evolution from studies of extant genomes, and patterns of teneurin expression.

TENEURIN DOMAIN ORGANIZATION

The Teneurin Intracellular Domain

The teneurin intracellular domain typically includes one or more proline-rich SH3-binding domains and one or more predicted nuclear localization sequences (Figure 1A).
Yeast two-hybrid screens and co-immunoprecipitation experiments demonstrated that one of the SH3-binding domains of teneurin-1 binds CAP/ponsin (Nunes et al., 2005). CAP/ponsin, also known as sorbin, is a widely expressed adaptor protein involved in the organization of the cytoskeleton and in growth factor-mediated signaling (Kioka et al., 2002). The intracellular domain of teneurin-1 also binds MBD1, a methylated-DNA binding protein (Nunes et al., 2005), but the biological significance of this interaction is unknown. When the intracellular domain is overexpressed in tissue culture cells it is found in the nucleus, where it co-localizes with PML protein in nuclear bodies (Bagutti et al., 2003; Nunes et al., 2005). In chicken embryos, antibodies to the intracellular domain of teneurin-1 often stain the cell nucleus in regions where antibodies to the extracellular domain stain the cell surface (Kenzelmann et al., 2008; Kenzelmann Broz et al., 2010), suggesting that teneurins may be processed so that the intracellular domain can be released for a yet-to-be-determined function in the nucleus. A likely site for proteolytic cleavage within the intracellular domain is the conserved basic sequence motif RKRK. When fibroblasts are transfected with native teneurin-1, antibodies to the intracellular domain stain the nucleus, but they do not do so if the cells are transfected with a teneurin-1 in which the basic motif is mutated to AAAA (Kenzelmann et al., 2008). The potential for this type of processing has recently been confirmed by others (Vysokov et al., 2016). Finally, there are many alternatively spliced variants of the intracellular domains of chicken and human teneurins (Tucker et al., 2012), but the biological significance of these variants is unknown.

EGF-Like Domains

Most teneurins have eight EGF-like domains starting approximately 200 amino acids C-terminal to their transmembrane domain (Tucker et al., 2012). The Basic Local Alignment Search Tool (BLAST) reveals that these domains, which have the conserved consensus sequence Ex2Cx(D/N)x2Dx(D/E)xDx3DCx3(D/E)CCx4Cx5C (where "x" is any amino acid and the numbers give the count of x residues), are most similar to those found in the tenascin family of extracellular matrix glycoproteins. This explains why teneurins were first identified in a low-stringency screen of Drosophila DNA with a probe based on the EGF-like domains of chicken tenascin-C (Baumgartner and Chiquet-Ehrismann, 1993). The names given to the Drosophila teneurins, ten-a and ten-m, reflect this historical connection to tenascins. In turn, the name "teneurin" is a conflation of "ten-a/ten-m" and "neurons", a major site of teneurin expression (Minet et al., 1999). Note that teneurins were discovered independently in D. melanogaster and named odd Oz (Levine et al., 1994), which accounts for the alternative name Odz for teneurins in the literature and in some genome search engines. One well-established function of the teneurin EGF-like domains is to permit dimerization in cis. Most teneurin EGF-like domains have six cysteines that form three pairs of disulfide bonds. However, the second and fifth teneurin EGF-like domains have only five cysteine residues. The odd number allows cysteines in one teneurin to form disulfide bonds with cysteines in a neighboring teneurin, resulting in covalently linked side-by-side dimers (Oohashi et al., 1999; Feng et al., 2002).
This explains the distinctive "pair of cherries" appearance of the extracellular domain of teneurins when viewed in the electron microscope after rotary shadowing: the stems are the attached EGF-like domains, and the cherries are the remaining C-terminal part of the extracellular domain (Feng et al., 2002).

Beta-Propeller Domain

The central region of the teneurin extracellular domain was first predicted to fold like a beta-propeller (i.e., it contains a series of NHL repeats) in an early study of teneurin domain architecture (Tucker et al., 2012) and was later demonstrated conclusively to be a 6-bladed beta-propeller by X-ray crystallography and cryoelectron microscopy (Jackson et al., 2018; Li et al., 2018). Beta-propellers are typically protein-protein interaction domains, and that appears to be the case with teneurins. HT1080 cells expressing the transmembrane and extracellular domains of teneurin-2 clump together in culture, but HT1080 cells expressing the transmembrane domain and a truncated extracellular domain that includes only the EGF-like domains do not (Rubin et al., 2002). The domain responsible for these teneurin-teneurin interactions was narrowed to the 6-bladed beta-propeller using atomic force microscopy while swapping and deleting the various teneurin extracellular domains expressed on the cell surface (Beckmann et al., 2013). This study also showed that the homotypic interactions between the beta-propellers of teneurin-1 were stronger than the heterotypic interactions between the beta-propellers of teneurin-1 and teneurin-2 (Beckmann et al., 2013). The beta-propeller domain of teneurin-1 appears to be critical for its function, as a mutation in this region leads to congenital anosmia in humans (Alkelai et al., 2016).

YD Repeats and the RHS Core Protein Domain

Almost a third of the huge extracellular domain of teneurins is composed of over two dozen YD repeats. These repeats have the consensus sequence Gx(3-9)YxYDx2GR(L, I or V)x(3-10)G, where "x" is any amino acid (Minet et al., 1999; Minet and Chiquet-Ehrismann, 2000).

FIGURE 1 | The domain organization of teneurins and teneurin-related YD proteins from prokaryotes. (A) The domain organization of a typical teneurin, human teneurin-1. A colored key and a scale bar indicating the amino acid positions from N-terminus to C-terminus are shown. The Ig-like domain, which includes both a conserved cysteine-rich domain and a carboxypeptidase-like domain, is indicated with the blue bracket. (B) A schematic illustrating the tertiary organization of a typical teneurin. Key features of the extracellular domain include the tyrosine-aspartic acid (YD)-repeat barrel and the 6-bladed beta-propeller, which is exposed for protein-protein interactions. Current evidence indicates that the C-terminal Tox-GHH/TCAP domain is found outside the barrel, as is a conserved RxRR motif, which may represent a proteolytic cleavage site. (C) Most vertebrates have four teneurins numbered 1 through 4. The domain organization of the four human teneurins is illustrated. (D) A predicted teneurin is found in the genome of the choanoflagellate Monosiga brevicollis. The extracellular domain of this teneurin is most similar to the extracellular domains of bacterial toxins. Examples of the bacterial toxins are illustrated, as are UniProt ID and GenBank accession numbers.
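As a side note, consensus notation of this kind maps naturally onto regular expressions. The toy sketch below is purely illustrative (not from the review): the example sequence is made up, and real YD repeats are more degenerate than this literal pattern.

```python
import re

# YD-repeat consensus: G, 3-9 of any residue, YxYD, 2 of any residue,
# GR, then L/I/V, 3-10 of any residue, G ("x" = any amino acid).
YD_REPEAT = re.compile(r"G.{3,9}Y.YD.{2}GR[LIV].{3,10}G")

def find_yd_repeats(protein_seq):
    """Return (start, matched_substring) for each YD-repeat-like match."""
    return [(m.start(), m.group()) for m in YD_REPEAT.finditer(protein_seq)]

# Made-up toy sequence containing one conforming stretch.
toy = "MKTG" + "AAAA" + "YAYD" + "AA" + "GRL" + "AAAA" + "G" + "KLMN"
print(find_yd_repeats(toy))
```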
The presence of YD repeats in teneurins was unexpected: prior to the sequencing of human and chicken teneurins, YD repeats had been identified only in prokaryotic proteins. The potential function of the YD repeats became clear following the detailed description of a similar series of repeats in a toxin from the bacterium Yersinia entomophaga by X-ray crystallography (Busby et al., 2013). The YD repeats in this bacterial toxin form a hollow barrel that is approximately 130 Å long and 50 Å wide [i.e., the approximate size of the "cherry" of the teneurin extracellular domain seen in the electron microscope (Feng et al., 2002)]. The RHS core protein domain forms a plug in the hollow end of the barrel. This bacterial YD repeat-containing protein also has a 6-bladed beta-propeller, and the beta-propeller is exposed for ligand binding at the N-terminal end of the barrel. The high degree of sequence similarity and the identical domain architecture between the C-terminal half of the extracellular domains of teneurins and these YD repeat-containing proteins from bacteria strongly suggested that teneurins would fold in a similar way. This was recently confirmed by X-ray crystallography and cryoelectron microscopy of teneurin extracellular domains (Jackson et al., 2018; Li et al., 2018).

C-Terminal Domain: A Toxin and a TCAP

Just C-terminal to the RHS core protein domain of all sequenced teneurins, and of most predicted teneurins, lies a region with striking amino acid similarity to the C-terminal GHH toxin domain of certain prokaryotic YD repeat-containing proteins (Zhang et al., 2012; Ferralli et al., 2018). GHH toxins are prokaryotic nucleases that are predicted to be encapsulated in a YD-repeat barrel (like the toxic C-terminal domain of the YD repeat-containing protein from Y. entomophaga). Though similar, the GHH toxin domain of teneurins lacks the key glycine-histidine-histidine motif that is necessary for the bacterial enzyme's nuclease activity (Zhang et al., 2012). However, when the GHH toxin domains of teneurin-1 or teneurin-2 are expressed in HEK 293 cells in culture, or when nanomolar concentrations of the purified GHH toxin domains of chicken teneurin-1 or teneurin-2 are added to the culture medium, the cells rapidly undergo apoptosis (Ferralli et al., 2018). The toxicity may be related to nuclease activity, as purified GHH toxin domains from teneurin-1 and teneurin-2 cleave plasmid DNA and completely hydrolyze mitochondrial DNA in vitro (Ferralli et al., 2018). The C-terminal 40 or 41 amino acids of teneurins are known as TCAP (from "teneurin C-terminal associated peptide"). The TCAP sequence was first identified by researchers who noted its similarity to corticotropin-releasing factor (Qian et al., 2004), and purified TCAP has profound effects on animal behavior when injected into brain ventricles (Tan et al., 2008). For example, TCAP-treated rats behave in acoustic startle, open field, and elevated plus maze tests in a manner consistent with elevated anxiety. These and other remarkable studies with TCAP have recently been reviewed by others (Woelfle et al., 2016). The TCAP sequence partially overlaps with the GHH toxin domain and extends to the very C-terminus of the protein (see above). Interestingly, teneurins are known to bind the G-protein-coupled receptor latrophilin (Silva et al., 2011), and the teneurin domain responsible for this interaction is the TCAP (Woelfle et al., 2015). This may contribute to the localization of some teneurins, and of the C-terminal toxin/TCAP domain, to developing synapses (Li et al., 2018).
Teneurin Tertiary Organization

The stick diagrams used to describe the domain organization of teneurins can now be refined thanks to the pioneering X-ray crystallography done with a related bacterial protein (Busby et al., 2013) and the elegant X-ray crystallography and cryoelectron microscopy done with the extracellular domains of teneurins themselves (Jackson et al., 2018; Li et al., 2018). We now know that the region between the EGF-like domains and the 6-bladed beta-propeller folds into a beta-sandwich domain reminiscent of either a fibronectin type III (FN3) repeat (Jackson et al., 2018) or an immunoglobulin (Ig)-like domain (Li et al., 2018), and that this domain "plugs" the N-terminal end of the hollow YD barrel (Figure 1B). This FN3/Ig-like domain has two subregions, one of which is highly conserved across phyla and rich in cysteines (Tucker et al., 2012), and another which is predicted by standard domain-architecture software (e.g., Superfamily) to be a carboxypeptidase domain. The latter is particularly interesting because some bacterial YD proteins with a 6-bladed beta-propeller also have a carboxypeptidase domain in this region, and these amino acid sequences align with nearly 50% similarity to the same region in human teneurins (Figure 2A). This striking phylogenetic conservation suggests that this unstudied domain may be more than a plug: perhaps it is involved in proteolytic processing of teneurins or teneurin-associated proteins. The RHS core protein fits into the C-terminal end of the teneurin YD barrel, but interestingly, both recent studies of teneurin structure (Jackson et al., 2018; Li et al., 2018) showed the GHH toxin/TCAP domain poking out through the side of the barrel rather than being contained within the barrel like the toxins of bacterial YD proteins. Thus, the conformational changes believed to release the toxin from the YD barrel of prokaryotes may not be necessary for the release of the toxin/TCAP domain from teneurins. Moreover, the arrangement of the C-terminal region of teneurins revealed by cryoelectron microscopy means that the TCAP domain is available to bind latrophilin without prior processing or disruption of the YD barrel. Perhaps the toxic nuclease near the C-terminus of teneurins can be released by regulated proteolytic activity after the teneurin reaches the cell membrane. Supporting this hypothesis, almost all teneurins examined to date have the conserved basic motif RxRR between the RHS core protein and the GHH toxin domain (Tucker et al., 2012), and similar motifs are known to be targeted by proteases that act extracellularly [e.g., members of the proprotein convertase subtilisin/kexin family of peptidases (Rawlings, 2009)]. Future studies are needed to address this important aspect of teneurin biology.

Differences Between the Teneurins

The overall domain organization of the four chordate teneurins is identical, but closer examination reveals certain distinguishing features (Figure 1C). For example, the intracellular domains of human teneurin-1 and teneurin-4 have both proline-rich SH3-binding domains and predicted nuclear localization sequences, whereas the intracellular domain of human teneurin-3 lacks SH3-binding prolines and the intracellular domain of teneurin-2 lacks a nuclear localization sequence. These differences, however, are not conserved across species.
In the chicken, where teneurins have been widely studied, the intracellular domains of all four teneurins have predicted nuclear localization sequences, whereas in the mouse only teneurins-1 and -3 do (Tucker et al., 2012). Though the intracellular domains of chordate teneurins and the teneurins of ecdysozoa share little sequence homology, the intracellular domains of both ten-a and ten-m from D. melanogaster and of ten-1 from C. elegans have predicted nuclear localization sequences and SH3-binding domains (Tucker et al., 2012). However, when discussing teneurin nuclear localization sequences it is important to remember that the vast majority are only predicted in silico. The only experimental evidence that the intracellular domains of teneurins can be transported to the nucleus comes from studies with chicken sequences in cell lines (Bagutti et al., 2003; Nunes et al., 2005), in chicken embryos (Kenzelmann et al., 2008; Kenzelmann Broz et al., 2010), and in C. elegans (Drabikowski et al., 2005). In chordates, a cysteine in the second teneurin EGF-like domain has been replaced by a tyrosine, and the cysteine in the fifth EGF-like domain has been replaced by a tyrosine (teneurin-2 and teneurin-3) or by a phenylalanine or tyrosine (teneurin-1 and teneurin-4). This general arrangement is also found in the teneurins of ecdysozoa, but not in teneurins from lophotrochozoa. For example, the predicted teneurin from the blood fluke Schistosoma mansoni has only four EGF-like domains, all of which have a complete complement of cysteines (Tucker et al., 2012). Thus, while dimerization of teneurins via their EGF-like domains is widespread, in some animals teneurins may act as monomers. Teneurin-2 and teneurin-3 from chordates, as well as the teneurins of almost all invertebrates, have a predicted furin cleavage site between the transmembrane domain and the EGF-like domains. This site was shown to be functional in teneurin-2 (Vysokov et al., 2016), and its widespread phylogenetic conservation suggests that it is important for teneurin function. This processing would suggest that the extracellular domain of teneurins is shed from the cell surface; however, the extracellular domain appears to remain anchored to the remaining transmembrane part of the teneurin through noncovalent interactions (Vysokov et al., 2016). Whether such interactions occur in other teneurins with the predicted furin cleavage site remains to be determined.

FIGURE 2 | Analysis of the carboxypeptidase-like domain. (A) The carboxypeptidase-like domains from various teneurins (HsTen1-4, Homo sapiens teneurin-1 to 4; DmTenm, Drosophila melanogaster ten-m; MbTem, Monosiga brevicollis teneurin) and the Desulfurivibrio alkaliphilus YD protein (DaYD), aligned to show identity (blue) and strongly similar properties (yellow; >0.5 in the Gonnet PAM250 matrix). (B) An unrooted phylogenetic tree constructed using phylogeny.fr default parameters, TreeDyn, and 100 rounds of bootstrapping. Branch support higher than 0.50 is indicated. Bacterial and choanoflagellate sequences segregate to the same clade, supporting the hypothesis that teneurins evolved through horizontal gene transfer. Teneurin-1 and teneurin-4, and teneurin-2 and teneurin-3, appear to have evolved through recent gene duplication events. Relevant UniProt ID and GenBank accession numbers are indicated. Scale bar = substitutions/site.

Functional differences in the extracellular domains of different teneurins have also been identified.
As mentioned earlier, homotypic interactions between the beta-propellers of teneurins are stronger than heterotypic interactions (Beckmann et al., 2013), suggesting that the beta-propellers have properties unique to the different teneurin forms. Moreover, the C-terminal regions of different teneurins have different affinities for latrophilins (Boucard et al., 2014). These observations will likely be key to understanding why teneurins duplicated to become a multigene family independently in arthropods and in chordates (Tucker et al., 2012).

THE EVOLUTION OF TENEURINS

An examination of sequenced metazoan genomes revealed that teneurins are found in all animals with a central nervous system, but not in sponges, Trichoplax, or cnidarians (Tucker et al., 2012). Given the prominent expression of teneurins in the developing central nervous system of flies, worms, and chordates, this led, at least temporarily, to the assumption that teneurins evolved together with a complex nervous system. However, a search of non-metazoan sequences for predicted proteins with the teneurin domain organization uncovered a teneurin in the genome of the single-celled choanoflagellate Monosiga brevicollis. The teneurin from this choanoflagellate is remarkable in many ways. First, its domain organization matches that of chordate teneurins almost perfectly: it has an intracellular domain with a predicted nuclear localization sequence and an extracellular domain with eight EGF-like domains (all with six cysteine residues), a cysteine-rich domain and a carboxypeptidase domain, a beta-propeller, YD repeats, and an RHS core domain (Figure 1D). It lacks only the predicted furin cleavage sites found in most metazoan teneurins and a C-terminal GHH toxin/TCAP domain. Second, it is encoded on only four exons, the third of which encodes 6829 residues comprising almost all of the extracellular domain. Finally, BLAST searches of the sequences encoded on the third exon revealed that the extracellular domain of the choanoflagellate teneurin is more similar to the YD proteins of bacteria than to the extracellular domains of metazoan teneurins. This pointed to the possibility that teneurins evolved via horizontal gene transfer from a bacterial prey (with a YD protein gene encoded on a single exon) to a single-celled predator before the evolution of metazoa from a choanoflagellate-like ancestor (Tucker et al., 2012). Horizontal gene transfer into choanoflagellates from their prey (bacteria, algae, and diatoms) is well documented, and many of these events have contributed genes that are still used in modern choanoflagellates, sometimes replacing similar host genes and sometimes contributing novel enzymes that allow the host to exploit nutrient-deficient niches (Tucker, 2013). However, relatively few metazoan genes appear to have originated from a choanoflagellate gene that was in turn acquired from bacteria or algae; one survey revealed only two, dihydroxy-acid dehydratase and the teneurins (Tucker, 2013). As mentioned earlier, the extracellular domains of teneurins are remarkably similar to many prokaryotic YD proteins, and a protein similar to the modern-day YD proteins of bacteria is the most likely candidate for the ancestral teneurin. Some of these are illustrated schematically in Figure 1D.
The carboxypeptidase-like domain found near the 6-bladed beta-propeller of all teneurins is also found in the YD protein of Desulfurivibrio alkaliphilus, an anaerobic, gram-negative, nonmotile bacterium that lives in the extreme high-saline and high-pH soda lakes of North Africa (Melton et al., 2016). This YD protein has an uncharacterized toxin domain, but otherwise resembles, from the carboxypeptidase-like domain to the RHS core protein domain, the extracellular domain of teneurins. The remarkable similarity of the carboxypeptidase-like domain from the D. alkaliphilus YD protein and the similar domain of various teneurins is shown in Figure 2A. This stretch of approximately 70 amino acids is also particularly well-suited for establishing the phylogenetic relationships between the YD proteins and teneurins, as well as among the teneurins themselves (Figure 2B). Consistent with a proposed origin from bacteria, the choanoflagellate teneurin and the bacterial sequence are found in the same clade. In chordates, teneurin-1 and teneurin-4 are likely to have evolved through gene duplication, as have teneurin-2 and teneurin-3. These paired relationships are consistent with the organization of the intracellular domains, the predicted extracellular domain furin cleavage sites, and the residues that replace the cysteine residues in the EGF-like domains. Analysis of other domains results in similar phylogenetic trees (Tucker et al., 2012).

From studies of teneurin evolution we can gain insight into teneurin function. For example, what is known about the function of the YD repeat-containing proteins of bacteria? First and foremost, they are toxins (Zhang et al., 2012). The C-terminal toxin domain is encased in a YD repeat barrel, apparently to protect the cell that is expressing the YD repeat protein from the toxin (Busby et al., 2013). One class of bacterial toxins with YD barrels is the ABC toxins (ffrench-Constant and Waterfield, 2005). The B and C parts of the toxin are expressed either from a single gene or from two or more adjacent genes, and they can form a complex containing a beta-propeller, YD repeats, an RHS core protein domain and a C-terminal toxin domain. Multiple versions of the C gene allow different types of toxins to be deployed (Figure 1D). The A protein provides a way for the toxin to get into the cell, either by making a pore or by inserting into the membrane and acting as a receptor for the BC component (Busby et al., 2013).

Other YD proteins are, like teneurins, type-2 transmembrane proteins. They appear to be members of the "toxin on a stick" type of bacterial polymorphic toxins (Jamet and Nassif, 2015). Polymorphic toxins are part of self-recognition between bacteria and are used in interbacterial warfare; when identical proteins interact they do not release the C-terminal toxin domain, but when dissimilar proteins interact the toxin domains are released. Given the complementary patterns of expression of many teneurins during the development of the nervous system (see below), one can hypothesize that heterotypic interactions between teneurins may somehow result in the release of the GHH toxin domain. This in turn could lead to programmed cell death or the pruning of dendrites. However, to date this inviting hypothesis is only supported by circumstantial evidence.
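Returning to the phylogenetic analysis in Figure 2B, a tree of that kind can be sketched with standard libraries. The example below assumes the aligned carboxypeptidase-like domains are available in a hypothetical file, cp_domains.fasta, and builds a simple neighbor-joining tree with Biopython; the published analysis used phylogeny.fr with 100 bootstrap rounds, which this minimal version omits.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical alignment of the ~70-residue carboxypeptidase-like domains
# (e.g., HsTen1-4, DmTenm, MbTen and DaYD).
alignment = AlignIO.read("cp_domains.fasta", "fasta")

# Pairwise distances under a protein substitution model.
distance_matrix = DistanceCalculator("blosum62").get_distance(alignment)

# Unrooted neighbor-joining tree (no bootstrap support computed here).
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)

On such a tree, the clade structure itself is the evidence: bacterial and choanoflagellate sequences grouping together, and the chordate paralog pairs grouping with each other, as described above.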
Drosophila melanogaster and Caenorhabditis elegans

The first description of teneurin expression came from Baumgartner and Chiquet-Ehrismann (1993), who used in situ hybridization to determine the sites of expression of ten-a in Drosophila. They found widespread ten-a expression in the early embryo and high levels of expression in the developing ventral nerve cord following germ band retraction. Ten-a transcripts are also observed in muscle apodemes, the clypeolabrum and the antenna-maxillary complex. This work was followed by two independent reports (Baumgartner et al., 1994; Levine et al., 1994) describing the expression of ten-m. Both papers reported ten-m expression in seven stripes during the blastoderm and germ band extension stages of development, and the eventual expression of ten-m in the ventral nerve cord and in cardioblasts. Both papers go on to describe the failure of ventral denticle belts to fuse in P-element insertion mutants (i.e., an "oddless" pair rule phenotype), and one illustrates the disruption of the central nervous system in these mutants (Levine et al., 1994). Higher resolution studies using LacZ expression under the ten-m promoter revealed expression in imaginal disks (Levine et al., 1997; Minet et al., 1999). These studies reveal ten-m expression in sensory mother cells and the R7 photoreceptor in developing ommatidia. The expression of teneurins in cardioblasts was revisited and expanded on by Volk et al. (2014). Ten-a is expressed at the border of cardioblasts and pericardial cells, and ten-m is expressed by both cardioblasts and pericardial cells. However, ten-m and ten-a mutants do not have heart defects (Volk et al., 2014).

The developing olfactory system of Drosophila has proven to be a particularly useful model for studying both the expression of teneurins and their roles in development. In Drosophila, olfactory receptor neurons (ORNs) are the primary neurons that receive olfactory information. ORN axons synapse with the dendrites of projection neurons (PNs) in glomeruli found in the antennal lobe, and the PNs in turn send their axons elsewhere in the central nervous system. ORN/PN pairs have been mapped precisely. For example, Or47b ORNs normally project to the VA1lm glomerulus, and Mz19 PN dendrites are found in an adjacent glomerulus. Hong et al. (2012) used this model to screen for genes that might regulate the development of precise neural networks. They observed that overexpression of ten-m in the Mz19 PNs leads to abnormal connections between Mz19 neurons and Or47b ORNs. A similar system was used for a second screen: Or88a ORNs normally project to the VA1d glomerulus, where they intermingle with Mz19 PN dendrites. Overexpression of ten-a in Mz19 PNs disrupts the normal intermingling of Mz19 and Or88a dendrites. In a screen of 410 candidate genes, only the ectopic misexpression of ten-a and ten-m caused these disruptions. High levels of ten-a and ten-m are found in antennal lobe glomeruli in mostly non-overlapping patterns, but both are found at low levels in all glomeruli. In five glomeruli examined in detail, ORNs expressing high levels of ten-m send axons to glomeruli with PNs expressing high levels of ten-m, and the axons of ORNs expressing high levels of ten-a are found in glomeruli with PN dendrites that also express high levels of ten-a (Figure 3A). Genetic and RNAi knockdowns result in shifting patterns of ORN/PN interactions, indicating that homophilic interactions between the teneurins are necessary for proper synaptic patterning.
Drosophila is also a useful model for studying the roles of teneurins in the development of neuromuscular junctions. In this system ten-a is expressed by neurons and is presynaptic, while most of the ten-m is expressed by muscle and is post-synaptic. In vivo, ten-a and ten-m appear to form a complex, and disruption of the expression of either or both leads to severe defects in the neuromuscular junction. These defects include disorganization of microtubules presynaptically, and disruption of alpha-spectrin post-synaptically. Heterophilic interactions between the ten-a and ten-m expressed at basal levels in antennal lobe glomeruli also appear to occur in neuronal synapses (Mosca and Luo, 2014). Presynaptic ten-a controls the number of ORN synapses that are found in a glomerulus, and ten-a/ten-m interactions regulate presynaptic active zone number. The possible roles of teneurins in synaptogenesis were reviewed by Mosca (2015).

There is a single teneurin gene in C. elegans, but two transcripts are generated via alternate promoters (Drabikowski et al., 2005). The differences between the variants lie in the size of the intracellular domain: Ten-1L has a longer intracellular domain that includes two predicted nuclear localization sequences, while Ten-1S has a severely truncated intracellular domain. The extracellular domain of the variants is identical. Ten-1L expression was studied using a GFP translational fusion protein (Drabikowski et al., 2005). It is found in neurons in the ventral nerve cord and in the pharyngeal nerve ring. There is also significant non-neuronal expression in vulvar and diagonal muscles, gonadal distal tip cells and in the vas deferens, among other sites (Table 1). After injection with Ten-1 RNAi there are neuronal pathfinding defects, abnormal gonad development, and severe morphological defects resulting from abnormal migration of hypodermal cells. Similar defects are seen in Ten-1 null mutants (Drabikowski et al., 2005). In the mutant Ten-1(et5), which has a point mutation leading to a premature stop codon near the end of the EGF-like domains, defects resulting from stalled growth cone migration and abnormal pathways of neurite outgrowth are seen in the pharyngeal nerve ring (Mörck et al., 2010). Unlike Ten-1 null mutants, Ten-1(et5) worms typically live to become reproductive adults, suggesting that the intracellular domain and EGF-like domains can impart some survival benefit. Ten-1 may also play a role in the organization of the extracellular matrix, as basement membrane integrity is compromised in Ten-1 null larvae (Trzebiatowska et al., 2008).

Zebrafish

One of the first detailed descriptions of teneurin expression in a vertebrate was reported by Mieda et al. (1999), who cloned and sequenced zebrafish teneurin-3 and teneurin-4 while searching for genes that were regulated by Islet-3. Using in situ hybridization they showed that teneurin-3 is transiently expressed in the notochord, somites, branchial arches, and central nervous system (Table 1). Teneurin-4 is expressed faintly during gastrulation, and after that is primarily expressed in the developing brain. Within the central nervous system, teneurin-3 and teneurin-4 are found in largely complementary patterns. For example, at 23 hpf teneurin-4 is found in two lines that wrap around the rostral diencephalon and teneurin-3 is expressed in the region between the lines. The sharp borders between the domains expressing these two teneurins become less clear later in development.
The expression of teneurin-4 in narrow bands of cells in the zebrafish central nervous system is remarkably similar to the first report of teneurin-4 expression in the mouse: in E10.5 and E11.5 mouse embryos, teneurin-4 transcripts are found in a sharp line at the boundary between the midbrain and hindbrain (Wang et al., 1998).

Teneurin-3 is also found in the developing zebrafish retina (Antinucci et al., 2013). It is expressed by retinal ganglion cells and amacrine cells, which synapse with each other in the inner plexiform layer. Teneurin-3 is also expressed by the targets of retinal ganglion cell projections in the tectum. Knockdown of teneurin-3 expression with antisense morpholino oligonucleotides leads both to abnormal arborization of retinal ganglion cells in the inner plexiform layer and to abnormal pathfinding in the tectum (Antinucci et al., 2013). Zebrafish larvae normally adapt their level of pigmentation to background lighting and appear lighter in bright light and darker at lower levels of illumination. The teneurin-3 morphants are darkly pigmented in bright light, indicating that they probably have severe visual deficits. A teneurin-3 knockout zebrafish was also generated by TALEN genome editing (Antinucci et al., 2016). In the teneurin-3 knockouts, amacrine cells that normally express teneurin-3 fail to arborize in the appropriate strata of the inner plexiform layer. The authors of this study go on to show that teneurin-3-expressing neurons form a distinctive circuit in the zebrafish retina that is responsible for orientation selectivity.

Patterns of Expression in the Visual Systems of Birds and Mice

The first description of teneurin-1 features low-resolution in situ hybridization images pointing to developing neurons as a primary site of expression (Minet et al., 1999). In the developing chicken diencephalon teneurin-1 transcripts are found in the rotund nucleus, and in the optic tectum teneurin-1 is expressed by the large neurons of the stratum griseum centrale. This study was followed by a paper that compared the expression of teneurin-1 and teneurin-2 in the developing avian central nervous system. Teneurin-1 and teneurin-2 mRNAs are both found in the developing thalamus, but in different nuclei, and in the optic tectum teneurin-2 expression was observed in layers that straddled the stratum griseum centrale but was missing from the stratum griseum centrale itself. This led to the conclusion that teneurin-1 and teneurin-2 are expressed in different populations of developing neurons. Similar results were reported in a study of the developing mouse that included other teneurin forms (Zhou et al., 2003), but the complementary patterns are not as obvious in the mouse as in the chicken. More detailed mapping of expression using antibodies specific for teneurin-2 led to the remarkable observation that teneurin-1 and teneurin-2 are each expressed by interconnected populations of neurons (Rubin et al., 2002). This is particularly clear in the developing visual system of the chicken, where teneurin-1 is expressed in the developing tectofugal visual pathway, and teneurin-2 is expressed in the developing thalamofugal visual pathway (Figure 3B). The timing of expression typically follows the period of growth cone pioneering and neurite outgrowth and coincides with periods of synaptogenesis, pruning, and apoptosis. In the mouse, most retinal ganglion cells project to the lateral geniculate nucleus or the superior colliculus.
The former projections form a critical map of visual field information prior to further processing in the cortex. Retinal ganglion cells also make a map of the visual field in the superior colliculus, and this map is important for integrating responses to auditory, somatosensory, and visual information (Kandel et al., 2012). Appropriate binocular vision requires that retinal ganglion cells from each retina project to either the ipsilateral or contralateral superior colliculus and lateral geniculate nucleus. Teneurins appear to be critical for the successful development of these visual circuits.

Using in situ hybridization, Leamey et al. (2007) found that teneurin-3 is expressed in a gradient in the developing mouse retinal ganglion cell layer, with the highest levels of expression in the ventral retina. This was confirmed with quantitative PCR. Teneurin-3 is also expressed in a gradient within the lateral geniculate nucleus, with the highest levels of expression found in the dorsal part of the nucleus. As retinal ganglion cells from the ventral retina project to the dorsal part of the lateral geniculate nucleus, the authors next chose to study these projections in teneurin-3 knockout mice. The brains and retinas of the knockout mice appeared normal in standard histological preparations. However, a tracing study with the knockouts reveals abnormal ipsilateral projections that are no longer limited to the dorsal part of the lateral geniculate nucleus, as well as abnormal connections between the lateral geniculate nucleus and the visual cortex (Leamey et al., 2007; Merlin et al., 2013; for review see Leamey and Sawatari, 2014). Behavioral studies are consistent with the hypothesis that teneurin-3 knockout mice lack binocular vision (Leamey et al., 2007). Teneurin-3 mRNA is also expressed in a gradient in the superior colliculus, with the highest expression medially and the lowest laterally (Dharmaratne et al., 2012). This also corresponds to the high ventral, low dorsal expression of teneurin-3 in the retina. When the teneurin-3 knockout mice were further analyzed, ipsilateral projections to the superior colliculus were found to be highly abnormal, just as they are in the lateral geniculate nucleus. EphA7 is significantly reduced, and EphB1 is significantly upregulated, in the visual system of teneurin-3 knockout mice. This suggests that teneurins may work together with ephrin/Eph signaling in this system (Glendining et al., 2017).

Other teneurins may also be critical for the development of visual pathways. Like teneurin-3, teneurin-2 is expressed by interconnected populations of neurons in the murine retina, lateral geniculate nucleus and superior colliculus (Young et al., 2013), and in teneurin-2 knockout mice there is a reduction in the number of retinal ganglion cells that project to ipsilateral targets. Interestingly, antibodies to teneurin-4 label retinal ganglion cell axons in the nasal, but not temporal, retina of the chicken embryo (Kenzelmann Broz et al., 2010). This may indicate that other teneurins regulate the development of other circuits in the visual system.

Expression in the Thalamus, Cortex, and Hippocampus

The thalamus contains dozens of nuclei that generally act as relay stations between sensory inputs and the cerebral cortex. The first studies of teneurin-1 and teneurin-2 noted that these teneurins are prominently expressed in distinct populations of thalamic nuclei (Minet et al., 1999; Rubin et al., 1999), some of which are interconnected parts of the visual system (Rubin et al., 2002).
The importance of normal teneurin-2 and teneurin-3 expression in the lateral geniculate nucleus, which is found in the thalamus, was described in the preceding section. In order to learn more about the repertoire of guidance molecules that are responsible for establishing the complicated set of thalamic circuits, Bibollet-Bahena et al. (2017) performed in situ hybridization with teneurin probes on sections through the rostral and caudal thalamic nuclei of embryonic and newborn mouse brains. Each teneurin has a distinctive pattern of expression. There is significant overlap between the expression patterns of teneurin-2, teneurin-3 and teneurin-4, with teneurin-1 forming a pattern that is largely complementary to the other teneurins. For example, in the rostral thalamus teneurin-1 is expressed in dorsal thalamic nuclei and the reticular nucleus, and these regions show little or no expression of the other teneurins. The other teneurins, but not teneurin-1, are expressed in the laterodorsal nucleus, and both teneurin-2 and teneurin-3 are expressed in the ventral anterior nucleus.

One of the thalamic nuclei, the parafascicular nucleus, projects to the striatum. Teneurin-3 is expressed in a dorsal-to-ventral gradient in both the parafascicular nucleus and the striatum (Tran et al., 2015). As neurons in the dorsal parafascicular nucleus project to the dorsal striatum, this gradient of teneurin-3 expression matches earlier studies of retinal projections to the lateral geniculate nucleus and superior colliculus (Leamey et al., 2007; Dharmaratne et al., 2012). The size of these regions and the numbers of neurons found in them are similar in both wild type and teneurin-3 knockout mice, but anterograde tracer studies revealed abnormal projections to the striatum as well as the loss of distinctive cluster terminals within the striatum (Tran et al., 2015). Consistent with the known functions of the parafascicular nucleus and striatum in goal-directed learning, teneurin-3 knockout mice exhibit delayed acquisition of motor skills (Tran et al., 2015).

The cerebral cortex is patterned both by intrinsic factors originating in neuronal progenitors and by extrinsic factors that originate in thalamocortical projections. One of the key factors regulating the intrinsic patterning of the neocortex is the homeobox transcriptional regulator EMX2, which is expressed in a high-caudal to low-rostral gradient in the cortical plate. Teneurin-4 was identified in a screen of genes that are differentially regulated in the Emx2(−/−) mouse (Li et al., 2006). In the developing mouse brain, teneurin-4 is normally expressed by cortical neurons and their precursors in a gradient that matches that of EMX2. In the Emx2(−/−) mouse, there is both a reduction in the overall level of expression of teneurin-4 and a loss of the expression gradient (Li et al., 2006). There is also strong evidence that teneurin-1 expression in the cortex is regulated by EMX2 (Beckmann et al., 2011). Finally, teneurin-3 was also identified as a gene that is differentially regulated in the developing mouse cortex (Leamey et al., 2008). It is highly expressed in layer V of the caudal-most cortex, which corresponds well with its prominent roles in the patterning of the visual system (see above). Interestingly, overexpression of teneurin-3 in the embryonic cerebral cortex via in utero electroporation leads to clustering of teneurin-3-expressing cells, suggesting stronger homophilic interactions between these neurons when compared with their neighbors.
The first study of teneurin expression in the mouse using immunohistochemistry described teneurin-1 in the molecular layer of the CA3 region of the adult hippocampus as well as the molecular layer of the cerebellum (Oohashi et al., 1999). This pioneering work was followed by a comparative study showing the expression of all four teneurins in the adult hippocampus: teneurin-1 is expressed in CA3 and the dentate gyrus, teneurin-2 is expressed most strongly in the CA1 and CA2 regions, teneurin-3 is limited to the stratum lacunosum moleculare, and teneurin-4 is most prominently expressed in the molecular layer of the dentate gyrus and in the stratum lacunosum moleculare and stratum oriens of the CA3 region (Zhou et al., 2003). A recent study addressed the importance of normal teneurin-3 expression in the developing hippocampus (Berns et al., 2018). Teneurin-3 is expressed by a patch of neurons in the proximal part of the CA1 region of the P10 hippocampus, as well as in the neighboring distal subiculum and the medial entorhinal cortex (MEC). Tracer injected into the MEC labels teneurin-3-positive neurons in the subiculum and the proximal CA1, and tracer injected into the lateral entorhinal cortex labeled both the proximal CA1 and the distal subiculum, demonstrating that the neurons expressing teneurin-3 form a neural network (Figure 3C). The model was then exploited experimentally with a teneurin-3 knockout mouse to show the necessity of normal teneurin-3 expression in the development of CA1/subiculum connections (Berns et al., 2018).

Non-neuronal Patterns of Teneurin Expression in Birds and Mammals

While the name "teneurin" comes from "ten-m" and "neuron" (Minet et al., 1999), it is important to remember that teneurins are also expressed in many non-neuronal tissues. As described above, teneurins are expressed in stripes in Drosophila embryos (Baumgartner et al., 1994; Levine et al., 1994), by motile cells and muscles in C. elegans (Drabikowski et al., 2005) and in somites and branchial arches in zebrafish (Mieda et al., 1999). In early chicken embryos antibodies to teneurin-4 immunostain the mesenchyme in many areas and co-localize with laminin in or near basement membranes (Kenzelmann Broz et al., 2010). Developing limbs also show distinctive and temporally dynamic patterns of teneurin expression. The apical ectodermal ridge (AER) is a prominent site of teneurin-2 expression (Tucker et al., 2001; Kenzelmann Broz et al., 2010). The teneurin-4 expression pattern is more dynamic. It is initially observed in both the AER and the zone of polarizing activity, but later it is seen in the distal mesenchyme underlying the AER in the anterior part of the limb (Tucker et al., 2000; Kenzelmann Broz et al., 2010). Teneurin-1 expression in the developing limb is particularly interesting. Antibodies to the intracellular domain of teneurin-1 stain the cell surface of ectodermal cells in the dorsal limb, but they stain the cell nucleus in mesenchyme in the ventral limb (Kenzelmann Broz et al., 2010). These patterns are summarized in Figure 4.

FIGURE 4 | Teneurin expression in the developing limb. Teneurins show complementary patterns of expression in developing chicken limbs. For example, teneurin-2 is expressed in the apical ectodermal ridge (AER), teneurin-4 is expressed in the anterior part of the underlying progress zone (PZ), and teneurin-1 is expressed in the ectoderm dorsally and in the mesenchyme ventrally (Tucker et al., 2000, 2001; Kenzelmann Broz et al., 2010). The inset shows a schematic cross section through a chicken embryo and the location of the developing limb.

A recent study (Pickering et al., 2017) found that teneurin expression changes when retinoic acid-soaked beads are applied to the anterior part of the limb bud (teneurin-4 expression decreases, while teneurin-2 expression increases), but the roles of teneurins in limb patterning are unknown. These and other patterns of teneurin expression in non-neuronal tissues are summarized in Table 1.

CONCLUSION

Since their serendipitous discovery 25 years ago, considerable progress has been made in our understanding of teneurin organization, evolution and expression. The intracellular domain is more variable than the rest of the protein, and its function remains mostly a mystery. In particular, its processing and localization to the nucleus at some, but not all, sites of expression is an observation in dire need of additional experimental work. More is known about the extracellular domain of teneurins, which apparently evolved from the extracellular domain of a prokaryotic YD protein via horizontal gene transfer. We now know that the YD repeats of both the prokaryotic YD proteins and the teneurins fold into a hollow barrel with a nearby beta-propeller that can be used as a protein-protein interaction domain. Remaining work to be done includes studies of the highly conserved carboxypeptidase-like domain and of whether or not the C-terminal domain can be released to act as a toxin. And if so, what triggers its release? In the central nervous system of vertebrates and flies teneurins appear to be expressed in largely non-overlapping patterns that correspond to interconnected populations of neurons. As genetic manipulation of this pattern leads to disruption of the development of these networks, teneurins appear to be key players in brain development. However, just how teneurins accomplish this is unclear. Do they act primarily through differential adhesion, or is the more important interaction the one between TCAP and latrophilins? And is the GHH toxin domain somehow involved in this process? Finally, studies should not neglect the interesting sites of non-neuronal expression of teneurins, such as developing limbs. During the next quarter century, discovering the answers to these questions will present researchers with special challenges.

AUTHOR CONTRIBUTIONS

The author confirms being the sole contributor of this work and has approved it for publication.

ACKNOWLEDGMENTS

The author is grateful to Matthias Chiquet, Jacqueline Ferralli, and David Lovejoy for their review of the manuscript, and would also like to acknowledge the Associate Editor Dr. Roubos for his contribution in handling the review process for this manuscript.
The Impact of COVID-19 Research on the Development of Scalable Frameworks for Efficient Clinical Trials in Cardiovascular Medicine

The COVID-19 pandemic has spurred the demand for prompt evaluation of possible treatments. Conventional clinical trial timelines for evidence generation are therefore inadequate. One of the most effective solutions has been the deployment of platform trials, which enable the simultaneous investigation of multiple therapeutic strategies. Platform trials operate under a comprehensive master protocol that standardises critical design and operational aspects and incorporate adaptive elements that enable modifications to the trial in response to its own data. Although platform trials and adaptive designs have been implemented in other fields, such as oncology, opportunity remains for their broader adoption in cardiovascular medicine.

The COVID-19 pandemic imposed an enormous burden on the health care sector. At the outset, no therapies were known to be effective against this new disease, and supportive care was the main pillar in the management of patients. Based on clinical anecdotes or biological reasoning, a number of existing drugs were proposed as candidates to be repurposed; new COVID-19-specific therapies were in the early days of preclinical development. Randomised clinical trials were urgently needed to evaluate the efficacy of these therapies. To achieve this, rapid mobilization of clinical trial networks was imperative. Clinical trials often take years to initiate, from hypothesis to design, pilot phases, funding, definitive trial launch, and large-scale ramp-up of recruitment, followed by trial closeout and reporting. The usual timelines would have been prohibitively slow. Newer models were needed to expedite trial execution during a rapidly evolving pandemic.

Epitomizing the difficulty of launching large-scale randomised trials in a short timeframe, initially a number of smaller trials sprouted in parallel. According to an assessment conducted by the Center for Drug Evaluation and Research at the U.S. Food and Drug Administration, at the end of 2020 there were 2024 COVID-19 trials registered worldwide, testing 2895 different treatment regimens. 1 Only about 5% of these studies, however, were considered to be adequately randomised and powered. 1 Apart from power and ethical issues, multiple small trials investigating similar therapies may increase the risk of type I error (false positive result). These trials in many cases also competed with one another for enrollment, potentially slowing evidence generation. In contrast, centralised approaches for coordinating collaboration and prioritising investigational therapies, such as those adopted in the U.K. and in the U.S. National Institutes of Health (NIH) Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) program, brought together investigators, funders, regulators, and site networks with remarkable efficiency for the purpose of conducting clinical trials in these jurisdictions. The trials that followed generated some of the most useful evidence during the pandemic. One effective solution was the implementation of novel platform trials.
Platform trials are designed to study several therapeutic strategies for the same disease simultaneously, with therapy-specific arms being allowed to enter or leave the platform based on centralised decisions by the trial leadership as scientific knowledge evolves. Platform clinical trials improve clinical trial efficiency. 2,3 Alternatively, more traditional trial designs, such as factorial or multi-arm designs, could also be used to study multiple interventions. However, particularly during a pandemic, platform designs are more flexible because they formally incorporate the addition or removal of intervention arms after trial initiation. 4 For example, in the Randomized, Embedded, Multifactorial Adaptive Platform Trial for Community-Acquired Pneumonia (REMAP-CAP; NCT02735707), at the time of writing, a total of 12,034 patients had undergone 21,242 randomisations, overall evaluating 61 treatments at 326 sites in a single platform trial (remapcap.org). The ability of such a trial to quickly generate new evidence cannot be overstated.

Platform trials operate under an overarching master protocol that contains standardised operating procedures and possibly standard definitions for eligibility criteria and outcomes. Instead of creating a new protocol for every new therapy, amendments to the master protocol are implemented to facilitate the evaluation. 4 A modular platform for data collection is harmonised and expanded or contracted as needed for each specific intervention. The control group may often be shared across multiple interventions or be specific to a particular intervention arm, to match evolving changes in standards of care or to comply with specificities unique to that intervention (eg, different eligibility criteria or the need for a different placebo). Besides harmonizing clinical and data coordination centrally, platform trials allow the same study sites to potentially randomise patients to multiple treatments. 4 Operationally, such scalability is of immense advantage in facilitating the rapid initiation and efficient completion of clinical trials.

Platform trials frequently use adaptive trial designs, allowing key design features to be modified in response to the ongoing trial's own data. 2 This approach can improve statistical efficiency and may facilitate earlier identification of effective therapies. For example, response-adaptive randomisation allows randomisation ratios to be modified, allocating more patients to treatments with increasing evidence of efficacy based on the blinded results of the interim analyses. 2 Such adaptation decreases the number of participants randomised to an apparently less effective therapy, allowing the trial to prioritise treatment evaluation based on accumulating trial knowledge and speeding evidence generation for subjects within and outside the trial. Group sequential stopping designs may also permit flexible sample sizes and optimise efficiency in trial design. A Bayesian statistical framework aligns well with adaptive platform trial designs, but is not required. 2
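To make the idea concrete, the following sketch simulates a two-arm trial with Thompson-sampling-style response-adaptive randomisation under a beta-binomial model. It is a toy illustration of the general principle, not the allocation rule of any particular platform trial, and the response rates are invented.

import numpy as np

rng = np.random.default_rng(0)

TRUE_RATES = np.array([0.30, 0.45])  # hypothetical control and experimental arms
successes = np.ones(2)               # Beta(1, 1) priors
failures = np.ones(2)
allocations = np.zeros(2, dtype=int)

for patient in range(500):
    # Thompson sampling: draw from each arm's posterior, assign the best draw.
    draws = rng.beta(successes, failures)
    arm = int(np.argmax(draws))
    response = rng.random() < TRUE_RATES[arm]
    successes[arm] += response
    failures[arm] += 1 - response
    allocations[arm] += 1

print("patients per arm:", allocations)
print("posterior mean response rates:", successes / (successes + failures))

As evidence accumulates, allocation drifts toward the better-performing arm, which is exactly the behaviour described above.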
During the pandemic, adaptive platform clinical trials have been used to generate clinical evidence rapidly. The Randomised Evaluation of COVID-19 Therapy (RECOVERY) trial is an adaptive platform trial from the U.K. evaluating multiple interventions, such as dexamethasone, tocilizumab, hydroxychloroquine, and aspirin. As a pragmatic trial, RECOVERY focuses on the critical components necessary to produce high-quality evidence, facilitating quick trial implementation in a large number of centres: as of November 2022, nearly 50,000 patients had been enrolled from 200 centres (recoverytrial.net). Linkage to national health care databases facilitated end point identification, reducing data collection by local teams. As a result, it took approximately 3 months from the initial protocol draft to the enrollment of more than 10,000 patients. By June 2020, only a few months into the COVID-19 pandemic, RECOVERY reported neutral findings for hydroxychloroquine and promising results for dexamethasone in hospitalised patients.

Conducting similar initiatives in low- and middle-income countries is also possible, as exemplified by Coalition COVID-19 Brazil: in the first months of the pandemic, a large clinical trial network was formed, including more than 70 large centres across the country. Through a series of independent trials, the Brazilian investigators provided evidence concerning the use of hydroxychloroquine (alone or in combination with azithromycin), dexamethasone, and different anticoagulation strategies across a broad spectrum of COVID-19 severity, from outpatient to critical illness settings. International collaboration has also been crucial for knowledge generation during the pandemic. The Solidarity trial, led by the World Health Organization, spanned 600 hospitals from 52 countries and focused on repurposed treatments for COVID-19.

The previously mentioned REMAP-CAP is a platform trial initially aimed at studying multiple interventions in critically ill patients with community-acquired pneumonia. REMAP-CAP was started in 2014 and was planned since its inception to pivot should a respiratory disease pandemic occur. When COVID-19 appeared, a pandemic appendix was added to the core protocol, including some modifications in the eligibility criteria, primary end points, and statistical analysis plan. The previously assembled trial infrastructure could swiftly adapt to study multiple antiviral (hydroxychloroquine, lopinavir/ritonavir, inhibitors of the renin-angiotensin system), immune modulation (hydrocortisone, tocilizumab, sarilumab, anakinra, immunoglobulin), and antithrombotic (heparin and aspirin) strategies. By allowing a multifactorial design, the same patient could be randomised to multiple interventions, with a minority of patients receiving no active therapy.

Specifically for the study of anticoagulants, REMAP-CAP partnered with 2 other trial networks, the Canadian-led Anti-Thrombotic Therapy to Ameliorate Complications of COVID-19 (ATTACC) and the NIH-sponsored ACTIV 4 ACUTE (ACTIV-4a) trial. 5 Investigators from the 3 networks harmonised the respective trial protocols into a single multiplatform trial, recruiting both critically ill and noncritically ill patients hospitalised for COVID-19. Similar eligibility criteria, interventions, data collection procedures, and outcome measures were adopted by the 3 platforms, and data were federated prospectively. Trial execution was expedited by the use of a flexible Bayesian adaptive design and frequent interim analyses. The REMAP-CAP/ATTACC/ACTIV-4a multiplatform trial involved almost 400 sites in 10 countries and randomised the first patient in April 2020. The trial studying anticoagulation strategies in critically ill patients was stopped in December 2020, when it met the a priori defined trigger for futility. In January 2021, the trial studying anticoagulation strategies for noncritically ill patients met the a priori defined criteria for superiority of therapeutic anticoagulation.
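Futility and superiority triggers of this kind are typically framed as posterior probabilities evaluated at interim analyses. The sketch below is schematic, with invented interim counts and illustrative thresholds rather than the pre-specified REMAP-CAP/ATTACC/ACTIV-4a decision rules: it estimates the posterior probability that the treatment response rate exceeds that of control under independent Beta(1, 1) priors.

import numpy as np

rng = np.random.default_rng(1)

def prob_superiority(s_t, n_t, s_c, n_c, draws=200_000):
    """Monte Carlo estimate of P(p_treatment > p_control)."""
    p_t = rng.beta(1 + s_t, 1 + n_t - s_t, draws)
    p_c = rng.beta(1 + s_c, 1 + n_c - s_c, draws)
    return float(np.mean(p_t > p_c))

# Invented interim data: successes and enrolment per arm.
p_sup = prob_superiority(s_t=210, n_t=400, s_c=180, n_c=400)
print(f"P(superiority) = {p_sup:.3f}")

# Illustrative thresholds; real trials pre-specify their own.
if p_sup > 0.99:
    print("stop: superiority criterion met")
elif p_sup < 0.05:
    print("stop: futility criterion met")
else:
    print("continue enrolment")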
As a result of the multiple collaborations, the multiplatform trial recruited approximately 3500 patients in 9 months, resulting in a scientifically robust and timely answer to inform anticoagulation use in patients hospitalised for COVID-19. 5

In the aftermath of the pandemic, REMAP-CAP continues to introduce new therapeutic domains to be tested in patients hospitalised with non-COVID-19 pneumonia, such as modulators of endothelial function or alternative strategies for mechanical ventilation. Other platforms are also emerging, for example, the Platform of Randomised Adaptive Clinical Trials in Critical Illness (PRACTICAL), aiming to study therapies for patients with acute hypoxemic respiratory failure (practicalplatform.org).

In conclusion, the COVID-19 pandemic highlighted that adaptive platform clinical trial designs, coupled with enhanced collaboration via federated networks of networks among investigators and trial sites, can dramatically improve efficiency to generate practice-changing knowledge. Platform trials and adaptive designs have a demonstrated track record in other fields, such as oncology (eg, the Systemic Therapy in Advancing or Metastatic Prostate Cancer: Evaluation of Drug Efficacy [STAMPEDE] trial and the Investigation of Serial Studies to Predict Your Therapeutic Response Through Imaging and Molecular Analysis 2 [I-SPY 2 TRIAL]), but remain underutilised in cardiology. There is no reason why that should be the case. Although not a cardiovascular trial per se, the Strategies to Promote Resiliency (SPRY; NCT03861767) trial is an example of an adaptive platform trial investigating the effect of multiple perioperative interventions on hospitalisation and mortality among elderly patients. The first intervention being tested is metformin, and the core protocol allows for the addition of other therapies to be evaluated in the future. As shown by the previous examples, it may be time for cardiovascular investigators to build on the lessons and experience generated during the COVID-19 pandemic, developing a global framework for scalable collaboration and potentially optimising efficiency in cardiovascular clinical trials.
GeO2/Ge structure submitted to annealing in deuterium: Incorporation pathways and associated oxide modifications

Deuterium (D) incorporation in GeO2/Ge structures following D2 annealing was investigated. Higher D concentrations were obtained for GeO2/Ge samples in comparison to their SiO2/Si counterparts annealed in the same conditions. Oxygen vacancies produced during the annealing step in D2 constitute defect sites for D incorporation, analogous to defects at the SiO2/Si interfacial region. Besides D incorporation, volatilization of the oxide layer is also observed as a consequence of D2 annealing, especially in the high-temperature regime of the present study (>450 °C). In parallel to this volatilization, the stoichiometry and chemical structure of the remnant oxide are modified as well. These results evidence the broader impact of forming gas annealing in dielectric/Ge structures with respect to SiO2/Si counterparts.

However, the passivation of the Ge surface is still an issue. Unlike SiO2 thermally grown on Si, GeO2 is unstable at the temperatures usually employed in device fabrication. 1 As a result, the interfacial quality of GeO2/Ge structures following usual device processing steps is rather poor, exhibiting high densities of electronic states. Even considering a scenario where the native oxide is replaced by another dielectric with superior physico-chemical properties, Ge substrate oxidation can occur during deposition of the dielectric material and/or device processing. 6,7 Thus, in order to overcome these problems, an efficient passivation strategy is required.

The Si dangling bond (DB), or Pb center, is the key defect at the SiO2/Si interface. It has been extensively reported that DBs can be passivated by hydrogen (H2) thermal treatments, 8-10 where formation of Si-H bonds removes electronic states from the Si bandgap. 11 For Ge-based metal-oxide-semiconductor (MOS) structures, it has been shown that H2 treatments are also efficient to improve electrical characteristics. Forming gas annealing (FGA) of Al2O3/Ge 12,13 and HfO2/Ge structures 14,15 was shown to accomplish superior interface quality and to decrease interface state density values. However, the role played by FGA is not clear. Physico-chemical modifications other than DB passivation were proposed as the origin of these electrical improvements: Swaminathan et al., 13 for example, proposed that oxidation of the Ge substrate induced by the FGA may passivate the interface. Diffusion of Ge into the HfO2 during FGA may also stabilize higher-k phases of this dielectric layer, improving capacitance scaling. 16 [17][18][19] All these results point to a different effect of FGA on dielectric stacks prepared on Ge in comparison with SiO2/Si.

Hints of the mechanisms underlying H incorporation in GeO2 were provided by Ogawa and coworkers, 20 who investigated the depth distribution of H within GeO2 films. They observed high amounts of H incorporated at the GeO2/Ge interfacial region as a result of thermal treatments under a humid atmosphere. The H concentration decreases as the distance from the interface is increased. This behavior was ascribed to oxygen vacancy sites (VO), which should be responsible for capturing mobile H-related species. The formation of VO occurs via the GeO2 + Ge → 2GeO reaction at the GeO2/Ge interface. Subsequent diffusion of these vacancies towards the GeO2 surface promotes desorption of the oxide layer. 21,22
This process is dominated by O transport, according to an oxygen vacancy diffusion model. In this picture, the concentration of VO is much higher at the interfacial region. According to this model, it can be concluded that H behavior in GeO2/Ge structures is related to O transport within these systems.

In this context, we investigated the incorporation of H in GeO2/Ge upon thermal treatments. As previously described, GeO2 can be formed, intentionally or not, on top of the Ge substrate, acting as the major passivation agent. Thus, understanding the effects of H incorporation in this layer is crucial to tailor efficient passivation routes for Ge-based devices. The present findings evidence volatilization of GeO2 induced by H2 annealing as well as modification of the stoichiometry of this layer.

p-type epiready Ge (100) wafers doped with Ga (Umicore), with a resistivity of 0.24-0.47 Ω cm, were first cleaned in an ultrasonic acetone bath and then followed a cleaning procedure with H2O2 and HCl aqueous solutions. 23 Si (100) samples were cleaned in a mixture of H2SO4 and H2O2 followed by etching in 40% HF aqueous solution for 1 min. After rinsing the samples in deionized water, they were immediately transferred to load lock chambers. Thermal treatments were performed in a resistively heated quartz tube furnace that was pumped down to 2 × 10^-7 mbar prior to pressurization with the gas of interest. All thermal treatments were performed under atmospheric pressure. O2 enriched to 97% in the 18O rare isotope (termed 18O2) was used for thermal oxidations. Samples were submitted to thermal processing under a static atmosphere of deuterium (termed D2) for 60 min, within the range of 250-550 °C. The use of D2 and 18O2 enabled the employment of nuclear reaction techniques to quantify D (natural abundance of 0.015%) and 18O (natural abundance of 0.205%). Besides, we could identify H incorporation exclusively related to the thermal annealing step rather than that originating from ambient contamination. Samples consisting of deposited GeO2 films were prepared by pulsed DC reactive magnetron sputtering using a Ge target. Argon (Ar) and oxygen (O2) were introduced in the chamber at a constant flux, keeping the pressure at 4 mTorr during sputtering. Deposition parameters were optimized aiming at a stoichiometric GeO2 layer, as checked by Rutherford backscattering spectrometry (RBS). Each sample prepared on a Ge substrate had a Si counterpart deposited or annealed concurrently. Areal densities of D and 18O were determined by nuclear reaction analysis (NRA) using the plateau regions at 400 keV and 730 keV of the D(3He,p)4He and 18O(p,α)15N nuclear reactions, respectively. Ge amounts on samples prepared on Si substrates were determined by RBS using He+ ions of 2 MeV. X-ray photoelectron spectroscopy (XPS) was performed with Al Kα radiation.

Figure 1(a) shows the D areal density as a function of the annealing temperature of thermally oxidized Si and Ge substrates. Following annealing at 250 °C, D was not detected within the sensitivity of the technique. For 350 °C and above, GeO2/Ge samples incorporate higher amounts of D than their SiO2/Si counterparts. This observation is probably related to the production of oxygen vacancies in GeO2/Ge samples, which constitute incorporation sites for D. In both cases, a maximum of D incorporation is observed for samples annealed at 450 °C.
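The areal densities in Fig. 1 follow from plateau-region NRA yields. Quantification of this kind usually proceeds by comparing the yield from the sample with that from a standard of known areal density measured under identical conditions, since on the plateau the yield is proportional to the number of target nuclei. The sketch below illustrates that proportionality with invented numbers; it is a schematic of the general method, not the calibration used in this work.

def areal_density(y_sample, y_standard, n_standard):
    """Areal density (atoms/cm^2) from yields measured on the plateau:
    N_sample = N_standard * (Y_sample / Y_standard)."""
    return n_standard * (y_sample / y_standard)

# Invented example: counts from sample and standard, and a hypothetical
# D standard of 5.0e15 atoms/cm^2.
n_d = areal_density(y_sample=1240, y_standard=3100, n_standard=5.0e15)
print(f"D areal density ~ {n_d:.2e} atoms/cm^2")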
The present results for Si-based samples are in agreement with those of Myers, 24 who stated that the Si-D configuration has a significant activation barrier for formation, being observed only after D2 exposures above ~300 °C. Due to the similar temperature dependence of the D uptake of Si- and Ge-based samples in Fig. 1, one may think that D incorporation and desorption mechanisms are similar in both cases. However, one may remember that in the high-temperature range of these curves (450-550 °C), GeO2 is not stable on Ge. Quantification of the remaining 18O amount of each sample following these treatments was performed to probe the stability of such structures. Fig. 1(b) shows these values as a function of the D2 annealing temperature. Above each point, the percentage of the 18O concentration of as-oxidized samples (remaining after D2 annealing) is indicated. Si samples keep 76% of the original 18O amount following annealing at 250 °C. For higher temperatures, this percentage varies around 60%. The observed 18O losses are probably a result of oxygen isotopic exchange induced by the annealing step. 25 A much stronger 18O loss is observed for Ge-based samples. Only 54% of the original 18O amount remains following annealing at 250 °C. The higher the annealing temperature, the lower the remnant 18O amount. At 550 °C, 18O is no longer detected. Despite possible oxygen isotopic substitution occurring at some point of the sample processing, the lower thermal stability of GeO2/Ge structures with respect to SiO2/Si counterparts is clear.

D incorporation in deposited GeO2 films on both Ge and Si substrates was also investigated. The main objective was to determine the role of the semiconductor substrate, more precisely, how the production of VO at the GeO2/Ge interface influences the D uptake. GeO2 was deposited concomitantly on Si and Ge. The resulting samples were annealed in D2 in the same temperature range as in Fig. 1. D concentrations obtained from these samples are shown in Fig. 2. A similar dependence of D uptake on temperature is observed for both Si and Ge. However, the two sets of data seem to be shifted along the horizontal axis, reflecting the role of the semiconductor substrate. Since GeO2 is more stable on Si than on Ge, 26 due to the absence of VO production at the GeO2/Si interface, lower GeO2 volatilization is expected in the high-temperature range (>450 °C). This effect may explain the higher D uptake observed for Si samples. On the other hand, in the lower temperature regime (up to 450 °C), where GeO2 volatilization is not so pronounced, VO production at the GeO2/Ge interface may furnish a higher concentration of interaction sites for D2. D uptake is the net result of these two processes, as shown in Figs. 1 and 2.

It seems that D incorporation and GeO2 volatilization are directly related to VO production. This assumption seems to be in contradiction with the results of GeO2/Si samples, where the maximum D uptake is almost identical to the value measured for GeO2/Ge samples (Fig. 2) (no VO is formed at the GeO2/Si interface, as evidenced by isotopic tracing studies 26). Besides, GeO2 volatilization should be suppressed when GeO2 is deposited on Si substrates.
However, Fig. 3(a) shows that GeO2 volatilization was indeed observed in GeO2/Si samples. Pronounced Ge loss is observed at 450 °C, reaching approximately 50% of the original Ge concentration following annealing at 550 °C. These results evidence that GeO2/Si and GeO2/Ge structures behave similarly with respect to oxide stability and that an agent other than the VOs formed at the GeO2/Ge interface is responsible for the observed thermal instability. Due to its reduction properties, the D2 atmosphere can play this role. H2 interaction with GeO2 was previously investigated. 27 The following reaction scheme was proposed:

GeO2 + xH2 → GeO(2-x)·xH2O, (1)
GeO(2-x)·xH2O → GeO(2-x) + xH2O. (2)

The reactions evidence that exposure of GeO2 to H2 leads first to the formation of a GeO2·xH2O complex, then to the splitting off of the oxygen and the creation of a nonstoichiometric oxide. In this way, H2 exposure would extract O from GeO2, resulting in H incorporation and additional volatilization of the oxide film. The latter could be a result of the production of VOs followed by disproportionation of the substoichiometric oxide. 28 In order to validate the role played by the D2 atmosphere, we prepared another set of GeO2/Si samples replacing D2 by an inert gas, namely Ar. The remnant Ge concentrations of these samples, as determined by RBS, are shown in Fig. 3. Following the harshest annealing condition at 550 °C, approximately 85% of the original Ge amount stays on the substrate, in clear contrast to the sample annealed in D2 under the same conditions. This is clear evidence of the reducing role of D2, which destabilizes GeO2 and promotes its volatilization.

Besides volatilization of the oxide layer, D2 annealing may also change the composition and chemical structure of the remnant film. Fig. 3(b) shows the O/Ge ratios of the same samples. O and Ge concentrations were determined by RBS measurements. GeO2/Si samples annealed in Ar present ratios around 2, indicating that, even following the harshest annealing, the stoichiometry still corresponds to that of GeO2. In contrast, D2 annealing induces reduction of the GeO2 layer: substoichiometric Ge oxide is already obtained following annealing at 450 °C. At 550 °C, O is no longer detected within the sensitivity of the technique. Fig. 4 shows the Ge 3d regions of XPS spectra of samples following D2 annealing. Samples annealed up to 400 °C present two components, related to GeO2 and to substoichiometric species. Above 400 °C, the relative intensity of the GeOx component with respect to the GeO2 component rises, in agreement with the O depletion observed by RBS. At 500 °C, an intense component with binding energy characteristic of metallic Ge is observed. Its relative intensity rises with the annealing temperature. These results evidence the progressive reduction of the remnant germanium oxide film following annealing in D2.

A clear modification of the GeO2 layer following annealing in D2 was observed for both Si- and Ge-based samples. However, the GeO2 growth method (thermal growth or sputtering) could play a role in this process. Sputter-deposited GeO2 is more defective than thermally grown GeO2. This fact explains the higher D amounts observed for GeO2 layers prepared by thermal oxidation (Fig. 1) compared with those sputter deposited (Fig. 2), even considering the higher thickness of the deposited oxides (at least 3 times thicker than those thermally grown). Thus, the decrease of O amounts within GeO2 due to D2 annealings could be ascribed to the more defective nature of the deposited samples rather than to the reduction reactions proposed in Eqs. (1) and (2).
In order to confirm the role played by D2, GeO2/Si samples were once again submitted to D2 annealings at 550 °C. Nevertheless, aiming at improving the quality of the GeO2 layers, they were pre-annealed for 60 min under the following conditions: (i) 1 atm of O2 at 350 °C and (ii) 1 atm of Ar at 550 °C. The remnant O and Ge concentrations obtained by RBS reveal that, even after a pre-annealing step, the stoichiometry of the oxide layer is changed: the O/Ge ratios decreased to 0.66 (O2 pre-treatment) and to 0.30 (Ar pre-treatment).

The results of D uptake obtained here for GeO2/Ge are in contrast to those obtained for SiO2/Si structures. In the latter case, D passivates interface defects and incorporates in the near-interface oxide. 29 Deuterium (hydrogen) incorporation in these structures is due to interactions of molecular D2 with pre-existing defect sites rather than to chemical reactions involving the breaking of Si-O bonds. Myers 24 compared the D uptake in SiO2 layers submitted or not to irradiation with high-energy He ions. The irradiation step increased the D uptake by two orders of magnitude, confirming the role of defects in D incorporation. In the case of GeO2/Ge, a much higher D uptake is observed with respect to SiO2/Si samples prepared under the same annealing conditions. Comparing Ge-H and Ge-O bond enthalpies (Ge-H 321.8 kJ/mol and Ge-O 658.1 kJ/mol, Ref. 30) with those of Si-H and Si-O (Si-H 299.2 kJ/mol and Si-O 809.6 kJ/mol, Ref. 30), it is reasonable to state that D incorporation should be similar in both cases: no breaking of oxide bonds during D2 annealing and D incorporation in defective sites. One should keep in mind that these values may differ considerably from the real bond energies in the solid state. However, they suggest that a higher concentration of defects in the oxide layer results in a higher D incorporation. Following this reasoning, VO production at both the GeO2 surface (induced by D2 annealing) and at the GeO2/Ge interface may create interaction sites for D2.

In summary, D incorporation in GeO2/Ge structures following D2 annealings was investigated. Higher D concentrations were obtained for GeO2/Ge samples in comparison to their SiO2/Si counterparts annealed in the same conditions. VOs produced during the annealing step in D2 constitute defect sites for D incorporation, analogous to defects at the SiO2/Si interfacial region. These vacancies are created both at the GeO2/Ge interface and at the GeO2 surface. The latter mechanism results from the interaction of D2 with the oxide. Besides D incorporation, volatilization of the oxide layer is also observed following D2 annealing, especially in the high-temperature regime of the present study (>450 °C). In parallel to this volatilization, the oxide stoichiometry is also modified: reduction of GeO2 to metallic Ge takes place. All these results constitute important benchmarks for the choice of FGA parameters of Ge-based devices. They also provide a deeper insight into the physico-chemical modifications and related electrical characteristics of Ge MOS structures submitted to FGA.

FIG. 1. (a) D areal densities of GeO2/Ge and SiO2/Si structures submitted to annealing in D2 at the indicated temperatures. Both GeO2 and SiO2 layers were obtained by thermal oxidation in 18O2. (b) Remnant 18O areal densities of the same samples. Above each point is indicated the percentage of the 18O concentration of as-oxidized samples remaining after D2 annealing. Lines are only to guide the eyes.
FIG. 3. (a) Ge areal densities of GeO2/Si structures submitted to annealing in D2 or Ar at the indicated temperatures. GeO2 layers were sputter deposited on Si substrates. The right vertical axis corresponds to the respective oxide thickness obtained assuming a GeO2 density of 3.6 g/cm3. (b) O/Ge atomic ratios of the same samples. Lines are only to guide the eyes.
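As a side note, the conversion behind the right-hand axis of Fig. 3 and the O/Ge ratios quoted above amount to simple arithmetic on the RBS areal densities. The Python sketch below is not from the original paper; it only illustrates the thickness conversion under the caption's stated assumption of a 3.6 g/cm3 GeO2 density, and the input areal densities are hypothetical placeholders.

```python
# Minimal sketch (not from the paper): converting RBS areal densities into
# the quantities plotted in Figs. 1 and 3, under the caption's stated
# assumption of a GeO2 density of 3.6 g/cm^3.

AVOGADRO = 6.022e23          # atoms/mol
M_GEO2 = 72.63 + 2 * 16.00   # g/mol, molar mass of GeO2
RHO_GEO2 = 3.6               # g/cm^3, assumed film density

def oxide_thickness_nm(n_ge_per_cm2: float) -> float:
    """Thickness of a stoichiometric GeO2 film holding the given Ge areal density."""
    formula_units_per_cm3 = RHO_GEO2 * AVOGADRO / M_GEO2  # one Ge atom per GeO2 unit
    return n_ge_per_cm2 / formula_units_per_cm3 * 1e7     # cm -> nm

def o_to_ge_ratio(n_o_per_cm2: float, n_ge_per_cm2: float) -> float:
    """Remnant stoichiometry from O and Ge areal densities (2.0 for ideal GeO2)."""
    return n_o_per_cm2 / n_ge_per_cm2

if __name__ == "__main__":
    n_ge = 1.0e16  # hypothetical Ge areal density, atoms/cm^2
    print(f"thickness ~ {oxide_thickness_nm(n_ge):.2f} nm")  # ~4.83 nm
    print(f"O/Ge = {o_to_ge_ratio(6.6e15, n_ge):.2f}")       # 0.66, as after the O2 pre-treatment
```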
2018-12-18T07:46:08.960Z
2014-10-10T00:00:00.000
{ "year": 2014, "sha1": "8d6837aed43e1a0cf58e25e49a92f61ceaed2b61", "oa_license": "CCBYNCSA", "oa_url": "https://lume.ufrgs.br/bitstream/10183/142396/1/000944139.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "8d6837aed43e1a0cf58e25e49a92f61ceaed2b61", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
238738507
pes2o/s2orc
v3-fos-license
Taxonomy of Key Terms for Mathematics Education

The International Journal of Education in Mathematics, Science, and Technology (IJEMST) is a peer-reviewed scholarly online journal. This article may be used for research, teaching, and private study purposes. Authors alone are responsible for the contents of their articles. The journal owns the copyright of the articles. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of the research material. All authors are requested to disclose any actual or potential conflict of interest including any financial, personal or other relationships with other people or organizations regarding the submitted work.

Introduction

The specific descriptors of a discipline are tools that allow the knowledge emerging from it to be synthesized with key concepts. In general, the Mathematics Education community continues to assign the key terms of its documents in journals and events of the discipline in an open, uncontrolled way. This highlights the need to generate a controlled vocabulary that facilitates the characterization of documentation. Previous efforts in this direction include document searches in Scopus and Web of Science (Adamuz-Povedano et al., 2013; Jiménez et al., 2011) and the development of a taxonomy of key terms in Mathematics Education (Gómez & Cañadas, 2013).

MathEduc Database

MathEduc proposes a classification of Mathematics Education topics. These topics are organized into 16 main areas that are related to both Mathematics Education and computer science. Within these areas, there are specific themes that are associated with the level of education or type of training. For all the mathematical contents, the first specific theme is the same: "comprehensive works on (...) and the teaching of (...)". The other themes are usually more specific in terms of content. In some cases, the description of the topics includes a reference to other categories. As an example, we present, in Figure 1, a section of the category corresponding to arithmetic, number theory and quantity. We use red boxes to represent the main area and topic, blue to highlight the level of education and green for the topics covered in other areas.

The subject classification used in the MathEduc database does not make it easy to classify and search for documents. In the case of the category of arithmetic, number theory and quantity (Figure 1), a single specific topic includes several issues that, although related, differ in their meaning (e.g., operations on natural numbers and positional value in F30). In addition, other issues have to be identified in other categories (e.g., estimates in N20).

Descriptors for Document Search

Some researchers in Mathematics Education recognize the lack of a closed list of descriptors that is specific to the discipline, that allows the identification of the particular characteristics that distinguish it from other disciplines, and that makes possible the recovery of information produced in it. The works of Jiménez et al. (2011) and Adamuz-Povedano et al. (2013) make an approach to the determination of basic descriptors that characterize the scientific production of the Mathematics Education community indexed in the Scopus and Web of Science databases. The purpose of these works is to present a list of descriptors that can be used in the search for documents of the discipline in the previously mentioned databases.
In the first proposal, the authors establish three groups of categories: descriptors of Mathematics, descriptors of Education and descriptors of Mathematics Education. The second proposal coincides with the first in initially organizing the descriptors by categories (specific descriptors of Mathematics and specific descriptors of Education) and then provides a unified list that characterizes Mathematics Education in the databases. Although both works highlight the need to generate a vocabulary that allows the characterization of documentary production in Mathematics Education, they provide descriptors that are too general to account for the phenomena and problems dealt with in a document. For example, the use of a term such as learning does not make it possible to establish which aspect of this subject is being dealt with (learning theories, learning expectations, difficulties or errors?).

Gómez and Cañadas (2013) provide a classification and hierarchy of descriptors specific to the discipline. In their proposal, they organize the key terms into purpose, educational level and subject. The purpose characterizes the type, intent and usefulness of the document, and the educational level refers to the level of training of the subjects referred to. With regard to the subject, the authors differentiate the terms related to Mathematics Education from those that deal with mathematical contents; these, in turn, are divided into school mathematics and higher mathematics contents. The terms that address specific issues of Mathematics Education arise from a specific curricular approach (Rico, 1997). From this approach, the authors tackle teaching, learning and assessment, and support the categories of the taxonomy according to four dimensions of the curriculum (conceptual, cognitive, formative and social) at five levels: purposes, disciplines, educational system, teacher planning and local planning.

Taxonomy of Key Terms in Mathematics Education

The existing taxonomy arises from a particular theoretical approach and is designed with a practical purpose. The authors produced a hierarchical structure of key terms that allowed for the systematic classification, in the open access digital repository Funes (http://funes.uniandes.edu.co), of documents produced by the Mathematics Education community. The use of the taxonomy in the codification process of the documents hosted in the repository began in October 2009. After 10 years, it is relevant to verify its effectiveness and evaluate whether it meets the current interests of researchers and mathematics educators in a global context.

Conceptual Framework

Controlled vocabularies are used for the representation of content objects in knowledge organization systems (National Information Standards Organization, 2005). The selection of terms to be included in a controlled vocabulary should be based on three elements: the natural language used to describe content objects, the language of the users, and the needs and priorities of the organization. Vocabulary control is carried out through three methods: (a) definition of the scope or meaning of the terms, (b) use of the equivalence relation to link synonymous and quasi-synonymous terms, and (c) distinction between homograph terms. Controlled vocabularies focus on content. However, it is possible to address other aspects of the documents such as authorship, location, format, language and place of publication.
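To make the three vocabulary-control methods just listed concrete, here is a minimal Python sketch. It is our own illustration, not part of the paper, and every term, scope note and mapping in it is an invented example: a scope note fixes a term's meaning, an equivalence table maps entry terms onto a preferred term, and parenthetical qualifiers split homographs.

```python
# Illustrative sketch of the three vocabulary-control methods:
# scope notes, an equivalence relation, and homograph qualifiers.
# All terms and notes are made-up examples, not the published taxonomy.

SCOPE_NOTES = {
    "Problem solving": "Use for heuristic work on non-routine tasks.",
}

# Equivalence relation: entry terms -> preferred term
EQUIVALENCE = {
    "maths learning": "Mathematics learning",
    "learning of mathematics": "Mathematics learning",
}

# Homographs distinguished by parenthetical qualifiers
HOMOGRAPHS = {
    "function": ["Function (mathematical content)", "Function (role of the teacher)"],
}

def normalize(term: str) -> str:
    """Map a raw keyword to its controlled, preferred form."""
    key = term.strip().lower()
    return EQUIVALENCE.get(key, term.strip())

print(SCOPE_NOTES["Problem solving"])   # the term's defined scope
print(normalize("maths learning"))      # -> Mathematics learning
print(HOMOGRAPHS["function"])           # both controlled senses
```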
Controlled vocabularies can be lists of terms, thesauri or taxonomies. In lists of terms, the only relation is each term's membership in the list. Thesauri show the various relationships between terms by means of standardized relationship indicators. Taxonomies present key terms organized hierarchically into categories and subcategories. A taxonomy is defined as a structure that organizes knowledge according to the hierarchy of the concepts that underlie it (Paukkeri, García-Plaza, Fresno, Unanue, & Honkela, 2012). Among the applications that have been recognized for taxonomies, we highlight their use in information management, in the organization and categorization of data, and in the search for content (Sujatha, Bandaru, & Rao, 2011). A taxonomy allows the organization of content based on the standardization of its descriptors, provided that it has a defined content and its related metadata (Engel, Pryde, & Sappington, 2010). Taxonomy has become established as an effective means for the management of and access to digital information.

The method for generating a taxonomy is associated with several factors such as the nature of the data, the semantic implication and the type of application it will have (Irfan, Khan, Abbas, & Shah, 2019). In this sense, it is possible to organize short data, such as tags or keywords, whose nature is concise, and represent them in a way that easily identifies the hierarchical relations between them. From a semantic approach, the extraction of concepts that are considered relevant in the knowledge covered by the taxonomy is used. Irfan et al. (2019) recognize that the incorporation of semantics in the generation of a taxonomy by existing computational techniques is complex due to the differences that can arise in the meaning of the terms or concepts that make up the taxonomy. In addition, the automatic generation of labels by these methods is often less precise and meaningful than the manual assignment of terms. Hence the importance of having a conceptual basis to define and organize terms. In fact, the literature recognizes that very few automatic techniques have addressed the problem of determining hierarchy, since the hierarchical structure must reflect the essence of the relationships between terms.

Aim

Our aim is to produce a taxonomy of key terms that emerges from the knowledge produced by the international Mathematics Education community. To characterize the community's knowledge in a hierarchy of key terms, we combine the use of the existing taxonomy in the coding of documents in a digital repository of open access documents in Mathematics Education with the most used keywords in the Mathematics Education journals that were indexed in Scopus and Web of Science in 2017. The new taxonomy has been endorsed by the community and allows us to link the current state of knowledge, according to publications in research journals specialized in the discipline, with the practical use of the existing taxonomy (Gómez & Cañadas, 2013). This relationship is shown in Figure 2.

Method

We carried out a systematic process to produce a taxonomy of key terms in Mathematics Education. We identified the terms that are relevant to the discipline due to their frequency of use in publications of the discipline at an international level. We generated the hierarchical list of terms and carried out a process of validation of the taxonomy in relation to its structure and the labels used. In the following, we describe the sources of information and the procedures.
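As an aside, the hierarchical organization that distinguishes a taxonomy from a flat term list can be shown in a few lines of code. The Python sketch below is only illustrative (the category and term names are placeholders, not the published taxonomy); it shows how a category/subcategory/term hierarchy supports the content-search use highlighted above.

```python
# Minimal sketch of a taxonomy as a hierarchy: categories contain
# subcategories and key terms. Names are invented placeholders.

TAXONOMY = {
    "Pedagogical notions": {
        "Learning": ["Learning theories", "Errors and difficulties"],
        "Assessment": ["Assessment modalities"],
    },
    "Mathematical content": {
        "Geometry": ["Analytical geometry", "Trigonometry"],
    },
}

def path_to(term: str, tree=TAXONOMY, trail=()):
    """Return the hierarchical path to a key term, or None if absent."""
    for node, children in tree.items():
        if isinstance(children, dict):
            found = path_to(term, children, trail + (node,))
            if found:
                return found
        elif term in children:
            return trail + (node, term)
    return None

print(path_to("Trigonometry"))
# ('Mathematical content', 'Geometry', 'Trigonometry')
```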
Sources of Information

We used the same type of information sources that have been used in the production of taxonomies for other fields (Aadland & Aaboen, 2020; Fellnhofer, 2019; Klassen & Donald, 2020; Pertegal-Vega, Oliva-Delgado, & Rodríguez-Meirinhos, 2019). We used four sources of information for the production of the new taxonomy:
- The first is the terms proposed in the encyclopedias published specifically in Mathematics Education (Grinstein & Lipsey, 2001; Lerman, 2014).
- The second is the key terms assigned to articles in the Mathematics Education journals indexed in Scopus and Web of Science in 2017.
- The third is a discipline-specific taxonomy of key terms (Gómez & Cañadas, 2013).
- The fourth is the key terms used to code documents in the Funes digital document repository.
Below, we describe each source of information.

Figure 2. Taxonomy of key terms: approved by the community, arising from the current state of knowledge, and attending to the use of a theoretical proposal.

Terms Defined in Specialized Encyclopedias

We reviewed the 450 terms in the first specialized encyclopedia (Grinstein & Lipsey, 2001). Although aspects such as curriculum and assessment are included, this encyclopedia emphasizes the mathematical content that is taught at different educational levels. We analyzed the 162 defined terms, and their related keywords, in the second encyclopedia (Lerman, 2014). In this publication, the inclusion of theoretical proposals that are specific to Mathematics Education and that address the problem of teaching and learning mathematics is evident.

Key Terms for Articles in Indexed Journals

We identified the specialized journals in the discipline that were indexed in the Scopus and Web of Science databases in 2017.

Specific Taxonomy in Mathematics Education

We took, as a basis, the existing taxonomy in Mathematics Education (Gómez & Cañadas, 2013) to produce our proposal. The authors propose a discipline-specific taxonomy based on a solid conceptual framework. The first aspect that these authors address is the key terms called purpose and educational level. The purpose characterizes the type, intent and usefulness of the document. A document can be a research paper, an essay, an innovation paper or an activity paper. The educational level refers to the type of training of the subjects referred to in the document: pre-school education, primary education, secondary education, upper secondary education, adult education, postgraduate education, professional education, undergraduate education, all educational levels, no educational level and other educational level.

The taxonomy differentiates key terms referring to mathematical content from those referring to Mathematics Education. To establish the terms associated with mathematical content, the authors based themselves on the classifications used by TIMSS (Mullis et al., 2005) and TEDS-M (Tatto, Schwille, Schmidt, Ingvarson, & Beavis, 2006) and distinguished school mathematics from higher mathematics. Mathematics Education subjects emerge from a curricular approach that addresses four central issues: the knowledge to be taught, learning, teaching methods and assessment (Rico, 1997). The four issues give rise to the conceptual, cognitive, formative and social dimensions, and to five levels (purposes, disciplines, educational system, teacher planning and local planning). This curriculum theory supports nine categories of key terms: (a) education system,

For Mathematics Education topics, the taxonomy is made up of 236 key terms that are organized into 12 main categories. There are 89 school mathematics terms (organized into 8 categories) and 16 higher mathematics terms. In the last section, the authors did not establish any hierarchy.
The construction of the taxonomy was based on MathEduc's classification of topics (FIZ Karlsruhe, 2010), so that every key term in that database would have an equivalent term in the proposal. To ensure relevance to the discipline, Gómez and Cañadas (2013) reviewed the way in which some research journals, conference proceedings and national and international databases assign key terms to their papers, and explored the usefulness of the taxonomy with various experts in the discipline. By doing so, the authors guaranteed the relevance of the taxonomy and its relationship to the key terms that have been traditionally used in the discipline.

Key Terms in a Digital Document Repository

Funes, the digital document repository in Mathematics Education, makes available to the community of mathematics educators the documents that are not restricted by copyright and that can support the work of this community. Its content is available to the entire public: there are no restrictions on access to the portal, and the documents are not differentiated for access. The documents are classified into different types, according to their purpose: research, essays, curricular innovations or tasks. The documents hosted in the repository can be articles, book chapters, theses, reports and presentations of meetings, or working papers. In order for a document to be published in Funes, it must go through a codification process that establishes its focus and educational level, as well as its key terms in relation to the topics of curricular theory and the mathematical content it addresses. The assignment of the key terms of each document is made from the existing taxonomy in Mathematics Education that we described in the previous section (Gómez & Cañadas, 2013). The hierarchy levels of the taxonomy allow relationships between key terms to be identified and provide information about the number of documents associated with each key term (http://funes.uniandes.edu.co/view/subjects/).

We decided to take the frequencies of the key terms of the documents hosted in the Funes repository as a source of information for the following reasons: (c) on March 6th, 2020, it had more than 12000 documents, and (d) it is focused on the Ibero-American community, whose documental production is on the rise. We highlight the last aspect since it has been identified that, in the Scopus database, production is concentrated in non-Spanish-speaking countries (United States, United Kingdom, Australia, Turkey, Canada and Germany) (Cruz-Ramirez & Rodriguez-Devesa, 2019).

Procedures

The production of the new taxonomy, specific to Mathematics Education, involves five phases: (a) the identification of key terms of the discipline, (b) the revision of the synonymy of these key terms in relation to the key terms of the existing taxonomy (Gómez & Cañadas, 2013), (c) the identification of terms to be included in the new taxonomy, (d) the selection and organization of these terms and (e) their validation. We describe these phases below. Figure 3 presents the data collection and organization procedures leading to the production of the new taxonomy.

Identification of Key Terms in the Discipline

We identified the key terms in Mathematics Education in the first two sources of information. To establish which terms are relevant to Mathematics Education, we recognized the theoretical trends of the terms included in the encyclopedias.
Regarding the terms assigned in articles published in the journals indexed in Scopus and Web of Science, we unified a list of terms with their respective frequencies. For example, we found that the term Decision making has a total frequency of 13 in the journals considered (see Table 2).

Revision of the Synonymy of Key Terms

For each key term in each journal, we analyzed its synonymy in relation to the key terms of the existing taxonomy (Gómez & Cañadas, 2013). To do so, we identified four possibilities, so that each term can (a) be synonymous with or identical to a term in the existing taxonomy, (b) be included in some category of the existing taxonomy, (c) not be included in the existing taxonomy but be relevant to the discipline (included in encyclopedias) or (d) not be relevant to the discipline. As an example, in the analysis of the key term Engineering and mathematics programs (STEM) from the Eurasia Journal of Mathematics, Science and Technology Education, we saw that it is related to the term Relationship of Mathematics Education with other areas in the existing taxonomy. Although the term, as we present it, has frequency 1 in this journal, we decided to mark it as relevant due to the use of the generic term STEM. In the same publication, we identified the terms STEAM, STEM and STEM education, whose frequencies are 1, 4 and 5, respectively.

Identification of Terms to Be Included in the New Taxonomy

After reviewing the synonymy of the 8560 key terms in the indexed journals, we established that 245 of them were relevant to the discipline. We took into account the absolute frequency of the key terms in this list. We decided that the 30 terms with the highest frequency would be included. These terms have a relative frequency of more than 0.5% in relation to the sum of the frequencies of the terms we initially identified as relevant. We present in Table 3 the list of the first 10 terms included in the new taxonomy, by their frequency of use in the indexed journals.

Selection of Terms that Make Up the New Taxonomy

To elaborate the new taxonomy, we took as a basis the list of terms that make up the existing taxonomy (Gómez & Cañadas, 2013). Additionally, we included the terms that we identified as relevant due to their frequency of use in the journals indexed in Scopus and Web of Science. The inclusion of these terms implies the elimination of the same number of terms from the existing taxonomy, since we were interested in keeping the number of terms between 350 and 400. We took into account the frequency of use of the key terms of this taxonomy in the Funes repository.

As previously stated, we decided to include the 30 most frequently used terms in the indexed journals. Their relative frequency, in relation to the sum of the frequencies of the terms in the list of relevant terms, is over 0.5%. This led us to discard terms from the existing taxonomy that are little used in practice. We determined the measure of use of the terms of that taxonomy according to the frequencies of the key terms in the repository. We took the list of key terms from the Funes repository on March 6th, 2020. At that date, the repository contained approximately 12000 records. We organized the key terms in ascending order according to their frequency. We selected 30 terms for their low use (their relative frequency in relation to the sum of the frequencies of all key terms in the repository is less than 0.03%) and decided to omit them from the new taxonomy.
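The two frequency rules just described (include journal terms above 0.5% relative frequency; omit repository terms below 0.03%) amount to a simple computation over term-frequency lists. The Python sketch below is our own illustration, not the authors' code, and the term-frequency dictionaries in it are invented examples.

```python
# Hedged sketch of the two selection rules described above: include journal
# terms whose relative frequency exceeds 0.5%, and omit repository terms
# whose relative frequency falls below 0.03%. Inputs are invented examples.

def relative_frequencies(freqs: dict[str, int]) -> dict[str, float]:
    total = sum(freqs.values())
    return {term: n / total for term, n in freqs.items()}

def include_terms(journal_freqs: dict[str, int], cutoff=0.005) -> list[str]:
    rel = relative_frequencies(journal_freqs)
    return sorted((t for t, f in rel.items() if f > cutoff),
                  key=journal_freqs.get, reverse=True)

def omit_terms(repo_freqs: dict[str, int], cutoff=0.0003) -> list[str]:
    rel = relative_frequencies(repo_freqs)
    return [t for t, f in rel.items() if f < cutoff]

journal = {"Problem solving": 130, "Teacher education": 120,
           "Decision making": 13, "Rare term": 1}
repo = {"Learning": 5000, "Teaching": 4000, "Obsolete label": 2}
print(include_terms(journal))  # 'Rare term' falls under the 0.5% bar
print(omit_terms(repo))        # ['Obsolete label']
```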
There is no relationship between these relative frequencies and the relative frequency we identified as a limit in the list of key terms in the indexed journals. We kept the number of terms included in the new taxonomy equal to the number of terms omitted from the existing taxonomy. We present in Table 4 the list of the first 10 terms omitted from the new taxonomy. We also found terms, grouped under the label "other", that are relevant to the community and should be included in the new taxonomy. In Table 5, we present each "other" term we had to examine in depth, its absolute frequency, its relative frequency, and the new terms that emerged from it.

In our proposal, we decided to retain the groups of key terms called focus and educational level from the existing taxonomy. We organized the discipline-specific terms into three groups: foundations of Mathematics Education, research in Mathematics Education and pedagogical notions. We included the key terms of the mathematical content in one single category, without differentiating terms for school mathematics from terms for higher mathematics.

Validation of the New Taxonomy

The relevance of the new taxonomy was validated by researchers and innovators in Mathematics Education who participated in a process of triangulation of information. The revision of the taxonomy in its two versions (English and Spanish) was carried out from the list of key terms, organized hierarchically. Suggestions and comments for the adjustment of the proposal were recorded there. On one side of the list, we established a suggestion column in which, for each term, a list with these options was displayed: (a) It must have another tag, (b) It must be removed, and (c) It must be in another section. Each reviewer was able to select one of these options if required. In another column, called Comments, we invited them to record the reason for the suggestion (see Figure 4). In addition, we provided a space for the registration of suggestions for terms that should be included in the taxonomy. However, the recommendation was that, if so, it should be indicated which term from the list we provided initially should be removed. We were interested in retaining the length of the taxonomy, given our purpose of its practical use for coding, organizing and searching documents.

Figure 4. Outline for the Suggestion Log for Each Term

Initially, we consulted experts from local research teams on the structure of the taxonomy and the labelling of key terms. Some of the comments they provided related to the lack of clarity of several terms. For example, with regard to assessment, one expert stated that "the difference between types and purposes is not found. This term could be clarified with some options". This led us to omit the Purposes label and use Assessment modalities, referring to self-assessment, co-assessment and peer assessment, among other possible modalities. Another recommendation was to modify the International standardized label and restrict it to Standardized "to include national tests". The comments of these researchers also led us to review the organization of the mathematical content. As one of the results, we organized the terms of Statistics into two sections: descriptive and inferential. Based on the comments received for version 1, we prepared the second version of the taxonomy. This version was sent to internationally recognized experts in Mathematics Education.
We received suggestions to include, join or delete terms. However, these recommendations were only adopted after verifying the relevance of the terms by their frequency of use. An example of this is related to the types of research: "perhaps the terms 'design or action research' should be joined together as they are not disjointed". In this respect, we verified the frequency of these terms in the open access repository and decided to keep the two key terms Design research and Action research.

In relation to the labels, we followed suggestions that would allow the terms not to be restricted to a particular theoretical approach. In this sense, we changed the term Representation systems to Representations, as it could be "restrictive and, in the end, if someone searches for this subject, they usually search by 'representations'. Although there is a theoretical justification for this term to appear, it is not practical". We also adjusted the labels that refer to teacher associations, tasks, problems and special educational needs.

Version 3 of the taxonomy was sent to the editors of the 33 Mathematics Education journals that have JCR and/or SJR impact factors (a list of these publications is available at https://bit.ly/3hBZTJZ). Additionally, we contacted experts attached to organizations that lead international events in the discipline. The purpose of this last phase of the taxonomy review was to validate its relevance and effectiveness in assigning key terms in articles or contributions to event proceedings. The comments of this group of experts led us to make adjustments to the structure and to some labels of the terms. Regarding the structure, we placed Affectivity as a first-level term in the category of Pedagogical notions, since it appeared as a second-level term, emerging from Cognition, in the previous versions. We placed the term Analytical geometry at the same level as Trigonometry and Topology, and we omitted the inclusion of a term called Recordings, since it is subsumed, as an instrument, under the Interviews and Classroom observations terms that we propose under the Sources of information label. On the other hand, we included full expressions for terms such as Justification processes (which includes both argumentation and demonstration) and Mathematical analysis (not only analysis), as far as the mathematical content is concerned.

In accordance with the procedures set out above, we present below the structure of the new taxonomy of key terms in Mathematics Education.

Structure of the New Taxonomy

The new taxonomy is organized into six categories of key terms: purpose, educational level, foundations of Mathematics Education, research in Mathematics Education, pedagogical notions, and mathematical content.

Key Terms Associated with Purpose

The key terms associated with purpose are task, essay, innovation and research. A task is a stimulus on a specific topic that seeks to promote learning in the classroom. An essay is the presentation of an opinion or position, which does not require systematic processes of justification. An innovation is a curriculum design based on disciplinary knowledge. A research work is one that makes a contribution to knowledge, emerging from a systematic process of inquiry.

Key Terms Associated with Educational Level

The key terms corresponding to the level of education are as follows.

Key Terms Associated with Pedagogical Notions

The key terms in the category of pedagogical notions are organized into 10 first-level terms: (a) educational system, (b) educational center, (c) teacher, (d) content, (e) learning, (f) cognition, (g) teaching, (h) assessment, (i) inclusion and (j) affectivity.
Terms referring to learning, cognition, assessment and affectivity are not exclusive to the student; they can also be associated with the teacher.

Conclusions

We present in this document the systematic process of production and validation of a new taxonomy of key terms specific to Mathematics Education. This proposal addresses the current state of knowledge in the discipline, as it is consolidated from the frequency of use of key terms in specialized research journals and in an open access digital document repository. We took as a basis an existing taxonomy, which emerged from a concrete theoretical approach and with a specific purpose of use (Gómez & Cañadas, 2013). We eliminated from that taxonomy the less used terms in the document repository and included the terms that are relevant because of their frequency in the journals indexed in Scopus and Web of Science. The revision of encyclopedias specific to Mathematics Education (Grinstein & Lipsey, 2001; Lerman, 2014) supported the classification of terms as relevant and guided us in the hierarchical organization of the taxonomy.

We carried out a three-stage validation process in which we invited experts in the discipline to evaluate the structure, relevance and usefulness of the taxonomy. Their comments allowed us to make the proposal more concrete. It should be clarified that, although the experts' suggestions were considered, the terms that finally made up the taxonomy satisfy the criterion of being relevant because of their use in the databases and the open access document repository.

Compared to the classification of terms used in MathEduc (FIZ Karlsruhe, 2019), our proposal focuses on Mathematics Education and provides a hierarchy of key terms, organized by main categories, which facilitates the coding and search of documents. In relation to the use of Bloom's taxonomy to establish descriptors associated with learning mathematics (Long & Dunne, 2014; Radmehr & Drake, 2019), the new taxonomy starts from the current knowledge of the discipline to organize key terms associated with pedagogical notions such as teacher, learning, cognition, teaching, assessment and affectivity. It is a fact that Mathematics Education, at present, is not limited to the study of the cognitive dimension of the student, but includes in its research and innovation agenda other aspects of mathematics teaching, teacher training and development, educational policy and the affective dimension (Lerman, 2020).

We started from the knowledge included in specialized journals in Mathematics Education to identify the terms that are relevant to the discipline and add them to the taxonomy proposed by Gómez and Cañadas (2013). For example, in the articles in the Web of Science database, aspects such as theoretical frameworks of the discipline, and teaching and cognitive processes such as generalization, are evidenced as trends in Mathematics Education (Gökçe & Guner, 2021). These terms are included in our proposal. In the same way, we identified the terms of this taxonomy that are less used as document descriptors, which led us to refine it. The quantitative analysis carried out to make decisions about the key terms that are relevant in the discipline is supported by the fact that numerical trends highlight the systemic and systematic practices of the international community of mathematics educators (Young & Young, 2022).
The production of the new taxonomy is aligned with reflections on the usefulness of this type of controlled vocabulary, because of its effectiveness in organizing, managing and searching for information from tags that characterize the knowledge in a field (Sujatha et al., 2011). Our proposal starts from conceptual references typical of Mathematics Education to establish the categories that organize the key terms. This approach has been used in other academic disciplines to develop their own taxonomies from existing references and resources (Aadland & Aaboen, 2020; Klassen & Donald, 2020). Bibliographic review in databases (a strategy used in our proposal) has served as a basis in other studies to identify and structure disciplinary knowledge in specific categories (Fellnhofer, 2019; Pertegal-Vega et al., 2019).

We recognize some limitations of our proposal. First, we restricted the review of key terms to journals in the discipline that, as of 2017, were indexed in both databases. This meant that we left aside publications that only satisfied the condition of being indexed in Scopus or that, although not specific to Mathematics Education, published documents from the discipline (for example, Revista Enseñanza de las Ciencias). To address this situation, we considered it important to invite the editors of all the journals currently (2020) indexed in Web of Science, Scopus and the Emerging Sources Citation Index to validate the taxonomy according to their experience and knowledge of the discipline. In addition, we are aware that the taxonomy may be limited in characterizing the knowledge produced in Mathematics Education that is manifested in the documentation disseminated through various dissemination schemes. To go into more detail would imply a substantially greater number of key terms. We wanted to keep the size of the previous taxonomy and to include in it the terms that may have greater frequency in the current documents of our discipline and in those that will be published in the short and medium term.

We believe that the taxonomy presented in this document manages to synthesize the current focuses of work in Mathematics Education at the international level. Its structure and extension facilitate the codification, organization and search of documents in different dissemination schemes, such as journals, event pages or specific databases. However, this proposal is not rigid and is susceptible to revision and adjustment as changes in the thematic trends of the academic community are recognized. In fact, the procedures we developed to produce and validate the taxonomy can be used to generate new proposals. We emphasize that the method we propose is also useful in other disciplines. In general, in the characterization of knowledge it is relevant to identify the advances that arise from research (through, for example, the publications with the greatest impact), but it is also important to study the documentation that is disseminated in an open manner and that is not restricted to research results (Castro & Gómez, 2021). In the case of Mathematics Education, the identification of trends manifested in the documentation produced by the international community also provides opportunities for the development of future research (Gökçe & Guner, 2021). We make the taxonomy available to the Mathematics Education community under the Creative Commons Attribution-NonCommercial-NoDerivs license.
On the https://bit.ly/3f6ffVA website, the taxonomy can be downloaded in English and Spanish, and in different formats.
2021-10-14T00:07:56.640Z
2021-08-23T00:00:00.000
{ "year": 2021, "sha1": "e2f1613e07e99121c904d0924e6e6e8803aa7cfe", "oa_license": "CCBYNC", "oa_url": "https://ijemst.net/index.php/ijemst/article/download/1289/262", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cd03ceafbc76c4e6863a9f4f4b6fb01d1008a611", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
99441399
pes2o/s2orc
v3-fos-license
Study on plant gums and their new development in application: with focus on tragacanth, guar and arabic gum; a short review

For citation: Farnoush Hassanpour. Study on plant gums and their new development in application: with focus on tragacanth, guar and arabic gum; a short review. Vestnik VSUET [Proceedings of VSUET]. 2016, no. 4, pp. 148-150. doi:10.20914/2310-1202-2016-4-148-150

Introduction

There are many challenges in the production of functional beverages. The main targets are delivering on health claims and maintaining a consistent flavor profile and the desired mouthfeel throughout shelf life. Formulating a functional component is obligatory, and fulfilling the functional properties is one of the controversial issues in the production of any functional food. All these challenges increase at the level of the final product. It must be noted that the chosen compounds are the final and critical items in ensuring the quality and safety of the finished product (Pimenov N., 2015). Before formulating, the final functionality must be defined on the basis of all the food ingredients (Pimenov N., 2016).

Texture definitions and beverage stabilization

Texture is a broad term that covers many rheological and sensory qualities. It covers the appearance and mouthfeel of a beverage when it is drunk: the appearance when pouring and in the glass or bottle, the mouthfeel, and the perception of taste. The importance of the rheological behavior, in particular the flow properties of hydrocolloids, relates to the mouthfeel and textural properties of the gum (Glykex Mann, 1982). Stabilization, a term relating to bio-physicochemical mechanisms, is complicated and varied. Each parameter needs to be categorized correctly; overall, a stabilized beverage is homogeneous and flows freely. A homogeneous beverage is defined as one with no gelling, no excessive viscosity, no layer formation, no phase separation, no clarification and no flocculation (Pimenov N., 2013). Several factors affect these properties, such as the type of hydrocolloid used, its concentration and the food system.

From the viewpoint of functionality, hydrocolloids fall into two groups:
1) Thickening agents: they create beverage texture but cannot hold a true suspension; they slow down the settling of oil droplets without being able to prevent separation.
2) Gelling agents: they make connections and bonds among molecules within a three-dimensional matrix, which stabilizes the oil droplets in the matrix; the density of the droplets formed will thus be less than the effective amount in the matrix.

Since gums form a soft solid matrix, they are widely used in food applications. Several related phenomena affect the emulsifying properties of gums, including the retardation of sedimentation, the reduction of oil droplet size and so forth. It must be noted that gums adsorb very slowly onto liquid surfaces. Gums, or hydrocolloids, are the main compounds that stabilize an emulsion by entering the water phase. The importance of these compounds lies in viscosity and electrostatic interactions; to stabilize a non-alcoholic emulsion, they should have the following properties: 1) easily soluble in cold water, 2) the lowest possible viscosity in water, 3) the maximum emulsifying capacity, 4) no gel formation.
Introduction to some gums

1.1 Guar

Guar gum and locust bean gum are galactomannans extracted from the endosperm of Cyamopsis tetragonoloba and Ceratonia siliqua, respectively. The endosperm parts are degraded to fine particles. Both gums are composed of glycosidic (β-1,4) bonds, with a bond on each branch that connects a galactose unit.

1.2 Arabic gum

Acacia gum, or so-called Arabic gum, has been used for thousands of years; it is an exudate gum and is today known as a food additive (Imeson, 2010). The structure of gum Arabic is relatively complex. The main chain of this polysaccharide is built from (1→3)- and (1→6)-linked β-D-galactopyranosyl units.

1.3 Tragacanth gum

This kind of gum exudes from Astragalus gummifer Labillardière and other particular species of Astragalus from western Asia (mostly in Iran, some in Turkey). It includes a water-soluble part that constitutes 30-40% of the gum; its structure is a highly branched neutral polysaccharide composed of 1→6-linked D-galactosyl backbones with L-arabinose side chains joined by 1→2-, 1→3- and/or 1→5-linkages.

Reference: Hu et al., 2016 | Type of gum studied: Gum Arabic | Results: The purpose of the work was to develop eugenol oil nanoemulsions using gum Arabic and lecithin as food-grade natural emulsifiers, and to study their antimicrobial activity. Results showed that nanoemulsions with a particle size of 103.6 ± 7.5 nm were obtained by mixing an aqueous phase (0.5% gum Arabic, 0.5% lecithin, w/v) with eugenol oil (1.25%, w/v), which was premixed with ethanol (as a co-surfactant), followed by a high-speed homogenization process.

Conclusion

The diversity and functionality of gums, together with their continuing novelty in the food industry, have made gums some of the main additives in food formulations. Since the sources of gums are different, we must focus on using them together to improve their synergistic effects; however, the interactions among them and the combined matrices they produce also need to be studied in detail.

Figure 1. Arabic gum

With increasing SSG fraction, the extent of viscosity reduction over the shear-rate range 0.01-316 s-1 increased from 58.68 times for GG to 832.73 times for SSG, and the reduction was not the same at different ranges of shear rate.
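As a practical footnote to the Hu et al. (2016) formulation quoted above, w/v percentages translate directly into weights for a chosen batch volume. The Python sketch below is only an illustration of that arithmetic; the 200 mL batch size is an arbitrary example, not a value from the study.

```python
# Quick arithmetic sketch for the w/v percentages quoted above from
# Hu et al. (2016): grams of each ingredient for a chosen batch volume.

RECIPE_W_V = {          # % w/v, i.e. g per 100 mL
    "gum arabic": 0.5,
    "lecithin": 0.5,
    "eugenol oil": 1.25,
}

def grams_needed(percent_w_v: float, volume_ml: float) -> float:
    """g = (% w/v) * volume / 100, by the definition of w/v percent."""
    return percent_w_v * volume_ml / 100.0

batch_ml = 200.0  # hypothetical batch size
for ingredient, pct in RECIPE_W_V.items():
    print(f"{ingredient}: {grams_needed(pct, batch_ml):.2f} g per {batch_ml:.0f} mL")
# gum arabic: 1.00 g, lecithin: 1.00 g, eugenol oil: 2.50 g
```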
2019-04-08T13:12:36.486Z
2016-12-03T00:00:00.000
{ "year": 2016, "sha1": "f35b07361b4807443e3481113657aeba57d190d0", "oa_license": "CCBY", "oa_url": "https://www.vestnik-vsuet.ru/vguit/article/download/1122/1457", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e6255a26ef9f3cb5c027196e55d48858fdb6402c", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
266086622
pes2o/s2orc
v3-fos-license
Unraveling the role of γδ T cells in the pathogenesis of an oncogenic avian herpesvirus

ABSTRACT Marek's disease virus (MDV) is an oncogenic alphaherpesvirus that causes deadly lymphomas in chickens. In chickens, up to 50% of all peripheral T cells are gamma delta (γδ) T cells. Until now, their role in MDV pathogenesis and tumor formation has remained poorly understood. To investigate the role of γδ T cells in MDV pathogenesis, we infected recently generated γδ T cell knockout chickens with very virulent MDV. Strikingly, disease and tumor incidence were highly increased in the absence of γδ T cells, indicating that γδ T cells play an important role in the immune response against MDV. In the absence of γδ T cells, virus replication was drastically increased in the thymus and spleen, which are potential sites of T cell transformation. Taken together, our data provide the first evidence that γδ T cells play an important role in the pathogenesis and tumor formation of this highly oncogenic herpesvirus.

IMPORTANCE Gamma delta (γδ) T cells are the most abundant T cells in chickens, but their role in fighting pathogens remains poorly understood. Marek's disease virus (MDV) is an important veterinary pathogen that causes one of the most frequent cancers in animals and is used as a model for virus-induced tumor formation. Our study revealed that γδ T cells play a crucial role in combating MDV, as disease and tumor incidence drastically increased in the absence of these cells. γδ T cells restricted virus replication in the key lymphoid organs, thereby decreasing the likelihood of causing tumors and disease. This study provides novel insights into the role of γδ T cells in the pathogenesis of this highly oncogenic virus.

MDV infection triggers both innate and adaptive immune responses. Various cell types are thought to be involved in the immune response against MDV, including macrophages, natural killer cells, CD4+, and CD8+ T cells (11)(12)(13).

T cells are characterized by their T cell receptor (TCR), which can be divided into two main subgroups: alpha beta (αβ) and gamma delta (γδ) T cells (14). γδ T cells are unconventional T cells and represent up to 50% of the peripheral T cells in chickens (5). The diversity of their TCR repertoires is greater than that observed in humans and mice (15). γδ T cells also represent a major subset of cytotoxic lymphocytes that can spontaneously lyse target cells without being restricted to major histocompatibility complex (MHC) molecules (15). Until now, the role of γδ T cells in the immune response against many pathogens has remained poorly understood.

Intriguingly, it has recently been shown that γδ T cell numbers are significantly increased in MDV-infected animals (5, 16). In addition, these cells upregulate the expression of interferon-γ (IFN-γ) during early infection, suggesting that they may play a role in either the immune response against MDV or its pathogenesis (16). Furthermore, it was recently shown that peripheral blood mononuclear cells (PBMCs) activated with an anti-TCRγδ monoclonal antibody increase IFN-γ production and show a cytotoxic effect against MDV-infected cells (17). An adoptive transfer of these PBMCs containing activated γδ T cells reduced virus replication in the lungs and MDV-induced tumorigenesis in chickens. This suggested that activated γδ T cells may play a role in initiating immune responses against MDV during the early stages of infection (17).
Despite recent advances, the role of γδ T cells in MDV pathogenesis remains poorly understood, mostly due to the lack of γδ T cell knockout chickens. Recently, we successfully generated a chicken line that lacks γδ T cells (TCR Cγ−/−) (18). We used these knockout chickens to study the role of γδ T cells in the MDV life cycle. Our data revealed that the absence of γδ T cells increases virus replication in the thymus and spleen during early infection. In addition, we observed a drastic increase in both disease and tumor incidence in infected animals. Our experiments thereby shed light on the role of this abundant T cell population in MDV pathogenesis.

Absence of γδ T cells increases disease and tumor incidence

Until now, the role of γδ T cells in the immune response against MDV and in its pathogenesis has remained poorly understood. Therefore, we infected genetically modified chickens that lack γδ T cells with very virulent MDV. This chicken line was recently generated and characterized (18). Throughout the infection, the disease incidence was significantly increased in TCR Cγ−/− compared to the wild-type (WT) animals (Fig. 1A). By the end of the experiment, 70% of the infected TCR Cγ−/− animals showed MDV-specific clinical symptoms compared to 37.5% of their WT hatch mates. Similarly, tumor incidence was increased by more than twofold in the absence of γδ T cells (45%) compared to WT (20%) (Fig. 1B), suggesting that γδ T cells play a protective role in MDV pathogenesis. To decipher whether the absence of γδ T cells affects tumor dissemination, the number of tumor-containing organs per tumor-bearing animal was determined. Surprisingly, the average number of tumors in the infected TCR Cγ−/− animals was comparable to WT (Fig. 1C), suggesting that γδ T cells do not restrict tumor dissemination once tumors arise. Taken together, our data revealed that disease and tumor incidence is increased in the absence of γδ T cells, indicating that these cells play an important role in MDV pathogenesis and/or the immune response against the virus.

γδ T cells are dispensable for MDV shedding and transmission to naïve birds

As γδ T cells are found at high frequency in the skin (19), we investigated the role of γδ T cells in controlling virus replication in the skin, shedding, and transmission. To achieve this, we quantified the MDV genome copies in feather shafts and dust, and assessed the infection of contact animals. Intriguingly, MDV genome copies in the feather follicle epithelium (FFE) of TCR Cγ−/− animals were comparable to WT animals (Fig. 2A), suggesting that γδ T cells are not involved in controlling MDV replication in the skin.

Next, we evaluated the virus load in the dust. Consistently, MDV genome copies in the dust were comparable between both groups (Fig. 2B), indicating that γδ T cells do not influence virus shedding. In addition, we assessed whether the absence of γδ T cells affects virus transmission. As MDV is efficiently shed into the environment after 14 days post-infection (dpi), we quantified MDV genome copies in the contact animals at 21, 28, and 35 dpi (Fig. 2C). MDV was very efficiently transmitted to the naïve animals, as all tested animals were already positive at 21 dpi. A comparable virus load was detected between the groups (data not shown). Taken together, these data reveal that γδ T cells present in the skin do not restrict MDV replication in the FFE, shedding, or transmission.
Impact of the absence of γδ T cells on MDV replication and immune cell populations in the blood

To determine why disease and tumor incidence were increased in the absence of γδ T cells, we quantified virus replication in the blood at various time points. Surprisingly, virus replication was comparable between the two groups (Fig. 3A), indicating that γδ T cells do not affect MDV replication in the blood. To determine if the absence of γδ T cells affects other lymphocyte populations, we quantified different populations, including B cells and CD4+ and CD8+ T cells, in the blood of the infected and uninfected groups at 7, 10, and 14 dpi. B cell numbers were not significantly different between the groups (Fig. 3B). The recently described decrease in the number of B cells at 10 dpi was observed in both infected WT and TCR Cγ−/− birds (21). In addition, more B cells were detected in infected and uninfected TCR Cγ−/− chickens at 14 dpi. Similarly, CD8+ αβ T cell numbers were also not statistically significantly different (Fig. 3C), but again an increase was observed in infected and uninfected TCR Cγ−/− chickens at 14 dpi. No significant differences were found for the numbers of CD4+ αβ T cells (Fig. 3D); however, at 14 dpi we found an increase only in infected TCR Cγ−/− animals. As MDV commonly transforms CD4+ T cells, this increase likely represents expanding tumor cells, consistent with the increased tumor incidence in these chickens. Overall, these data highlight that γδ T cells do not influence the viral load in the blood and only have a minor effect on other immune cell populations in the blood.

Absence of γδ T cells increases MDV replication in specific lymphoid organs

To determine the role of γδ T cells in MDV replication in the primary lymphoid organs, we infected WT and TCR Cγ−/− animals and quantified MDV genome copies in the bursa, spleen, and thymus by qPCR. In all three organs, comparable MDV genome copies were detected at 7 dpi (Fig. 4A through C), indicating that γδ T cells are dispensable for the delivery of the virus to the lymphoid organs. In the bursa, which contains mostly B cells, a comparable viral load was detected during the phase of lytic MDV replication. The viral load in the spleen and thymus was slightly increased in the absence of γδ T cells at 10 and 14 dpi (Fig. 4B and C). These higher infection levels may increase the likelihood of T cell transformation and contribute to the elevated tumor incidence observed in the absence of γδ T cells.

DISCUSSION

γδ T cells play a crucial role in the immune response against viral infections in mammals (23, 24). They possess the ability to recognize and kill pathogens and tumor cells in an MHC-independent manner (25, 26). In humans, γδ T cells have a frequency of about 5% of circulating T cells. In contrast, γδ T cells represent up to 50% of T cells in the blood of chickens (15, 27). A recent study revealed that γδ T cells can spontaneously trigger cytotoxicity to kill virus-infected cells (15). Due to the highly cell-associated nature of MDV, cellular immune responses in general are thought to be crucial to combat the virus. A recent study suggested that γδ T cells are likely involved in the immune response against MDV (17), a link that we followed up in our manuscript.

To investigate the role of γδ T cells in MDV pathogenesis and tumor formation, we infected chickens that lack γδ T cells with very virulent MDV (RB-1B strain), as suggested by Matsuyama-Kato et al.
(17). This recently generated and characterized chicken line allowed us to address the role of γδ T cells in MDV pathogenesis. In our experiment, we observed that, in the absence of γδ T cells, the disease incidence was significantly increased over the course of the experiment. As tumors play a crucial role in the development of Marek's disease, we determined if and how many infected knockout and WT animals developed tumors. Tumor incidence increased by more than twofold in the absence of γδ T cells (45%) compared to the WT group (20%). This is relatively low for a virulent MDV strain and is due to the high genetic resistance of the chicken line (LSL, white leghorn) against MDV (18).

A recent study reported a delay in MDV tumor formation when PBMCs activated with an anti-TCRγδ monoclonal antibody were transferred into chickens. The study suggested that this delay is due to the upregulation of cytotoxic activity, which could restrict MDV reactivation (17). In humans, γδ T cells have been reported to have anti-tumor function against several types of lymphoma (28)(29)(30) and serve as a promising cancer immunotherapy.

Interestingly, the average number of visceral organs with gross tumors was comparable between TCR Cγ−/− and WT animals. This suggests that γδ T cells do not restrict metastasis but only tumor development at an earlier stage.

It is known that infected T cells can transport the virus to the skin, where MDV efficiently replicates in the FFE and is shed into the environment (7, 31). Since γδ T cells are found at high frequency in the skin (19), we investigated whether the absence of these cells affected virus shedding. We quantified virus genome copies in the FFE, in dust, and in naïve contact chickens. Surprisingly, comparable virus genome copies were detected in the feathers and dust of TCR Cγ−/− and WT chickens by qPCR. This highlighted that γδ T cells do not influence MDV replication in and shedding from the FFE. In addition, MDV spread efficiently independent of the presence or absence of γδ T cells, as all contact chickens were infected by day 21 of the experiment. These contacts were all WT chickens to ensure a comparable susceptibility to infection. The observation that virus genome copies were comparable between the groups indicates that comparable virus levels infected them in the same time frame. This is in agreement with a recent study that showed that MDV replication in the skin is not influenced by the infusion of PBMCs activated with an anti-TCRγδ monoclonal antibody (17).

To assess why TCR Cγ−/− animals showed a higher disease and tumor incidence, we initially quantified virus replication in the blood of the infected animals over time. Intriguingly, the viral copies in the TCR Cγ−/− animals were comparable to WT, suggesting that γδ T cells are dispensable for virus replication in the blood. In addition, we assessed the effect of the absence of γδ T cells on other immune cell populations in infected and uninfected animals at 7, 10, and 14 dpi. B cell populations were not significantly different between the groups (Fig. 3B). Only slightly more B cells were detected in infected and uninfected TCR Cγ−/− chickens at 14 dpi. CD8+ αβ T cell numbers were also not statistically significantly different (Fig. 3C), while an increase was observed in infected and uninfected TCR Cγ−/− chickens at 14 dpi. This is consistent with a previous study by von Heyl et al.
that extensively characterized lymphocyte subsets in the blood of uninfected TCR Cγ−/− animals and did not observe any significant changes compared to their WT hatch mates (18). Similarly, CD4+ T cells were also not significantly different (Fig. 3D), while an increase was observed only in infected TCR Cγ−/− animals at 14 dpi. Since CD4+ T cells are the primary target for MDV transformation (3, 32), this increase may be due to the expansion of tumor cells.

Next, we assessed the role of γδ T cells in MDV lytic replication in the bursa, thymus, and spleen. This is particularly important, as MDV mostly replicates in these lymphoid organs, and transformation is thought to occur in them. In general, the virus was efficiently transported to the lymphoid organs, as comparable levels were observed at 7 dpi, a commonly used time point for lytic replication. This indicated that γδ T cells do not play a role in the delivery of the virus to the primary lymphoid organs. The absence of γδ T cells did not affect virus replication in the bursa, likely because the bursa is mostly composed of B cells and only a few γδ T cells are present there that could affect MDV replication. Albeit not statistically significantly different, MDV replication was increased in the spleen and thymus in the absence of γδ T cells. These higher infection levels may increase the likelihood of T cell transformation and contribute to the elevated tumor incidence observed in the absence of γδ T cells.

The increased virus load in the spleen and thymus, but not in the blood, skin, or bursa, indicated that γδ T cells play a tissue-specific role in the immune response against MDV. This is consistent with a previous study that showed that γδ T cells have cytotoxic activity in the spleen but not in the blood (15).

In conclusion, our study provides crucial evidence that γδ T cells play an important role in MDV pathogenesis. Our data revealed a higher disease and tumor incidence in the absence of γδ T cells in MDV-infected chickens. Much higher viral loads were detected in the spleen and thymus in the absence of γδ T cells, indicating that γδ T cells restrict virus replication and/or tumor development. Overall, our data provide important insights into the role of this highly abundant cell population in the pathogenesis of this deadly pathogen.

Animals and genotyping

The γδ T cell knockout chickens (TCR Cγ−/−) were recently generated and completely lack γδ T cells (18). γδ T cell knockout chickens develop normally and have body weights comparable to their non-transgenic hatch mates. Their immunological profile has recently been characterized intensively (18). Whole peripheral blood was collected from newly hatched chicks, and total DNA was extracted using the NucleoSpin 96 Blood Core Kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions. Genotyping was performed by PCR using TCR-specific primers as published previously (18). Chicks were categorized into two groups: WT (TCR Cγ+/+) or KO (TCR Cγ−/−). The primers used for genotyping are shown in Table 1.
Animal experiment 1

To investigate the role of γδ T cells in MDV-induced pathogenesis, 1-day-old chicks were genotyped. Wild-type (WT; n = 24) and γδ T cell-knockout (TCR Cγ−/−; n = 20) animals from the same parents were injected subcutaneously with 2,000 PFU of the very virulent RB-1B strain. To assess the natural transmission of the virus, 1-day-old VALO SPF (VALO BioMedia) chickens (n = 11 per group) were housed with the infected chickens. The two groups were housed separately and supplied with food and water ad libitum.

To assess virus replication in the infected animals, peripheral blood was collected at 4, 7, 10, 14, 21, and 28 dpi. To quantify the virus genome copies in the skin of the infected animals, feather samples were collected at 4, 7, 10, 14, 21, 28, and 35 dpi. To quantify the shedding of MDV into the environment, dust was collected in the rooms at 10, 14, 21, 28, 35, and 42 dpi. To assess the infection of the contact animals, peripheral blood was collected at 21, 28, and 35 dpi. Chickens were monitored daily throughout the experiment for the development of MDV-specific symptoms, including ataxia; paralysis of the legs, wings, or neck; torticollis; and somnolence. Once chickens exhibited severe symptoms, or at the end of the experiment (91 days), they were humanely euthanized and examined for gross tumors, and the spleens were collected to assess the virus load.

Animal experiment 2

To determine if the absence of γδ T cells affects virus replication in the lymphoid organs, 1-day-old chicks were genotyped, divided into two groups, WT (n = 9) and TCR Cγ−/− (n = 8), and infected as described above. In parallel, uninfected control chickens (WT, n = 9; TCR Cγ−/−, n = 6) were raised in a separate room. Blood samples were collected from infected and control animals at 7, 10, and 14 dpi. To assess the delivery to and replication in the lymphoid organs, MDV genome copies were quantified in the spleen, thymus, and bursa at these time points.

DNA extraction and genomic quantification of the virus

Whole-blood DNA was extracted using the NucleoSpin 96 Blood Core Kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's protocol. DNA was also extracted from feathers and dust using a proteinase K lysis protocol described previously (20). DNA from organs was extracted using the innuPREP DNA Mini Kit (Analytik-Jena, Berlin, Germany) following the manufacturer's instructions. To quantify the virus load by qPCR, specific primers and probes (Table 1) for MDV ICP4 were used. The virus genome copies were normalized against the chicken inducible nitric oxide synthase (iNOS) gene (10, 35, 36).

Statistical analysis

Statistical analyses were performed using GraphPad Prism v9 (San Diego, CA, USA). The MD incidence curves were analyzed using the log-rank (Mantel-Cox) test. Fisher's exact test was used to assess the MD incidence at the final necropsy (91 dpi). The tumor incidence and the average number of tumors per animal were analyzed using Fisher's exact test. MDV genome copies in the feathers or dust were analyzed using the Mann-Whitney U test. MDV genome copies in the blood of experimentally infected and contact animals were analyzed using the Mann-Whitney U test and paired t-test, respectively. The immune cell counts were analyzed using two-way ANOVA (Tukey's multiple comparisons test). MDV genome copies in the bursa, spleen, and thymus were analyzed using the Wilcoxon-Mann-Whitney test.
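To make the comparisons in this section concrete, the short sketch below runs the two main test types named above (Mann-Whitney U and Fisher's exact) with SciPy. All numbers are invented placeholders that only approximate the reported group sizes and incidence; they are not the study's data.

```python
# Minimal sketch of the group comparisons described above, using SciPy.
# The genome-copy values and the 2x2 table are placeholders, not study data.
from scipy.stats import mannwhitneyu, fisher_exact

# Hypothetical MDV genome copies per million cells (e.g., feather samples)
wt_copies = [1.2e4, 3.5e4, 8.9e3, 2.1e4, 5.6e4, 1.8e4, 4.2e4, 2.9e4]
ko_copies = [1.5e4, 2.8e4, 1.1e4, 3.3e4, 4.9e4, 2.2e4, 3.8e4, 2.5e4]

# Two-sided Mann-Whitney U test, as used for the feather and dust samples
u_stat, p_value = mannwhitneyu(wt_copies, ko_copies, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Fisher's exact test on a 2x2 incidence table: [with disease, without disease].
# Counts roughly mirror the reported incidence (higher in the knockout group).
table = [[9, 11],   # TCR Cgamma-/- group (illustrative)
         [5, 19]]   # WT group (illustrative)
odds_ratio, p_incidence = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_incidence:.3f}")
```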
FIG 1 Absence of γδ T cells increases disease and tumor incidence. (A) Disease incidence in MDV-infected WT (n = 24) and TCR Cγ−/− chickens (n = 20). The percentage of chickens with clear clinical symptoms of Marek's disease, such as ataxia, paralysis, torticollis, somnolence, and tumors (post-mortem), is shown throughout the experiment (*P = 0.0396, Fisher's exact test). (B) Tumor incidence, shown as the percentage of chickens with gross tumors at the post-mortem examination (P > 0.05, Fisher's exact test). (C) The average number of gross tumor-containing organs per tumor-bearing animal, shown with the standard deviation (error bars) (P > 0.05, Fisher's exact test). Asterisks indicate statistical significance.

FIG 2 γδ T cells are dispensable for MDV shedding and transmission. (A) Quantitative polymerase chain reaction (qPCR) analysis of MDV genome copies in the FFE of WT (n = 8) and TCR Cγ−/− chickens (n = 8). Mean genome copies are shown per million cells with standard deviation (error bars) (P > 0.05, Mann-Whitney U test). (B) Average MDV genome copies per 1 µg of dust collected from the dust filter of each group at the indicated time points (20) (P > 0.05, Mann-Whitney U test). (C) Percentage of MDV-positive contact chickens (n = 8) detected by qPCR at the indicated time points.

TABLE 1 PCR and qPCR primers and probes used in this study

a For, forward primer; Rev, reverse primer. b FAM, 6-carboxyfluorescein; TAM, TAMRA.
Flowers visited by hummingbirds in the open habitats of the southeastern Brazilian mountaintops: species composition and

The hummingbird-visited plant community located on the open-habitat mountaintops of the Espinhaço Range was studied for two years (from August 2007 to July 2009) in Serra do Cipó National Park, Southeastern Brazil (19° 15' S and 43° 31' W). The floral characteristics and flowering period of the hummingbird-visited plants were recorded monthly along trails located in three vegetation types: (1) typical campos rupestres (TCR), (2) open fields (OPF), and (3) capões de mata (CAM). Hummingbird visitation was observed in 51 plant species, 22 ornithophilous and 29 non-ornithophilous species. The TCR showed the greatest number of visited species (N = 38), followed by the OPF (N = 18) and the CAM (N = 17). Six species of hummingbirds were recorded visiting flowers: Augastes scutatus, Campylopterus largipennis, Colibri serrirostris, Chlorostilbon lucidus, Eupetomena macroura and Phaethornis pretrei. This study demonstrates that the species richness and the number of ornithophilous species visited by the hummingbirds at the study site are more similar to hummingbird-plant communities of the Atlantic Forest than to those of the Cerrado communities and other Brazilian highland open-habitat communities. The plant families most visited by hummingbirds were Bromeliaceae and Asteraceae. Although the Asteraceae family is rarely used as a food resource by hummingbirds in other highland and lowland communities, at the study site this family is used mainly by the endemic hummingbird Augastes scutatus. We found a large overlap of flowering throughout the year among the species visited by the hummingbirds; thus, the nectar availability supports these resident hummingbirds. The present study also showed that the studied hummingbird-plant community is composed of many species endemic to the campos rupestres of the Espinhaço Range, some of which are considered to be in danger of extinction, thus constituting a unique and threatened community. Understanding hummingbird-plant pollination dynamics therefore becomes fundamental to the conservation of the campos rupestres.

Introduction

Flowers adapted to hummingbird pollination are an important component of Neotropical plant communities, comprising 2% to 15% of angiosperm species in a given community (Feinsinger, 1983; Machado and Lopes, 2004; Ramírez, 2004; Rodrigues and Araujo, 2011). On the one hand, hummingbird-pollinated flowers show morphological and ethological adaptations that include a prevalence of red colour, a narrow tubular shape, an inclined position, lack of landing platforms, diurnal anthesis and large quantities of diluted nectar (Faegri and Pijl, 1980; Wilson et al., 2004; Machado and Rocca, 2010). On the other hand, hummingbirds are a group of specialised and highly aerial birds (Bleiweiss, 2009) endemic to the Americas, with high diversity in the Andes and in the highlands of southeastern Brazil (Stotz et al., 1996). Hummingbirds are the largest group of vertebrate pollinators in the Neotropics (Bawa, 1990).
Many studies have shown that hummingbirds are generalists with respect to their use of floral resources as dietary supplements, including varying percentages of flowers supposedly adapted to pollination by other groups of animals (Araujo, 1996; Araujo and Sazima, 2003; Machado, 2009; Machado and Rocca, 2010; Rodrigues and Araujo, 2011). The communities of plants used as food resources by hummingbirds have been relatively well studied in the forest and savanna habitats of Brazil (see Rodrigues and Araujo, 2011 and Araújo et al., 2011).

The montane open habitats in southeastern Brazil are recognised as important centres of endemism for Neotropical flora and fauna (Giulietti et al., 1997; Safford, 1999; Silva and Bates, 2002; Echternacht et al., 2011; Vasconcelos and Rodrigues, 2010). These open habitats include the campos rupestres scattered along the Espinhaço Range and the campos de altitude in the mountains of the Serra do Mar, Serra da Mantiqueira, and associated ranges (Giulietti et al., 1997; Safford, 1999).

The campos rupestres are located above 900 m altitude and are mainly associated with outcrops of quartzite, sandstone and iron ore (Vasconcelos, 2011). The campos de altitude are located at approximately 1,500 m altitude and are associated with igneous or metamorphic rocks, such as granite and gneiss (Vasconcelos, 2011). The campos rupestres of the Espinhaço Range are located in the areas of contact between the Cerrado, the Caatinga and the Atlantic Forest, whereas the campos de altitude are fully inserted into the Atlantic Forest domain (Giulietti et al., 1997; Safford, 1999). Although both of these open habitats present similar landscapes and share similar genera and species of plants, the two types of vegetation show differences in the biogeographic affinities of their flora (Giulietti et al., 1997; Safford, 1999; Alves and Kolbek, 2010). A previous study conducted on open-habitat mountaintops showed that hummingbird pollination is surprisingly uncommon in the campos de altitude (Freitas and Sazima, 2006), in contrast with studies that have reported a high number of hummingbird-pollinated plant species in the campos rupestres (Vasconcelos and Lombardi, 2001; Machado et al., 2007).

The present study examines a hummingbird-flower community located on an open-habitat mountaintop of the Espinhaço Range of southeastern Brazil. The aims of the study are: (1) to record the species richness and taxonomic composition of hummingbird-visited plants, independently of their supposed syndrome, in three habitats of the Espinhaço Range; (2) to characterise the floral morphology of these plants; (3) to investigate the availability of floral resources for hummingbirds; and (4) to determine the pollinator species among hummingbird-pollinated plants and the behaviour of these birds during pollination. We expect to establish a baseline dataset to contrast our findings with those published previously.

Study site

This study was conducted in a region known as the Alto do Palácio (hereafter AP, 19° 15' S and 43° 31' W, at approximately 1,350 m above sea level), which is located in the northern part of the Serra do Cipó National Park (SCNP) and comprises the southern portion of the Espinhaço Range (Rodrigues et al., 2005).
The AP region is located in the eastern portion of the Serra do Cipó and is characterised by campos rupestres habitat with a strong influence of vegetation typical of the Atlantic Forest biome. In this region, the landscape is a mosaic consisting of the following habitats: (1) typical campos rupestres (hereafter TCR), which are areas of rocky outcrops with herbaceous vegetation and shrubs; (2) open fields (hereafter OPF), composed predominantly of herbaceous species; and (3) capões de mata (hereafter CAM), which are small areas of dense forest-like vegetation associated with wetter areas. For a more detailed description of the habitats, see Rodrigues and Rodrigues (2011).

The region experiences extreme variations in rainfall, with particularly wet summers (from November to January) and extremely dry winters (from June to September) (Figure 1; Rodrigues et al., 2011). Usually, there is a soil water deficit from May to August, which coincides with the coldest months of the year (hereafter referred to as the dry season). In the rainy season, which lasts from November to March, there is an excess of water in the soil, coinciding with the warmest months of the year (Rodrigues et al., 2011).

Data collection

We made 24 trips to the AP, from August 2007 to July 2009, for a total of 120 days of fieldwork. The flowering of hummingbird-visited plants (independent of their pollination syndrome) was recorded monthly along trails measuring 10 m wide and 1,800 m in length, of which 1,200 m were located in the OPF and 600 m in the TCR (comprising a total sample area of 12,000 m² in the OPF and 6,000 m² in the TCR). The sampling in the CAM occurred during systematic walks along the forest edge and within two areas of the CAM (comprising a total sample area of 400 m²). The size of the area sampled in each of the distinct plant physiognomies reflects their representation in the AP.

Data concerning the growth form, the number of open flowers per individual per day, and the floral characteristics (corolla shape, length and colour; concentration and volume of accumulated nectar) were recorded. The corolla effective length (sensu Wolf et al., 1976) and corolla diameter were measured with a manual caliper in fresh flowers or in material collected and fixed in 70% alcohol. Voucher specimens of the plant species were deposited in the Herbarium of the Departamento de Botânica, ICB-UFMG (BHCB).

The pollination syndromes of the hummingbird-visited plants were determined using the predominant colours of the corolla and bracts, the corolla morphology, the presence of odour, and the period of anthesis (see Rodrigues and Araujo, 2011). Typical ornithophilous species have odourless flowers with diurnal anthesis and tubular, red, pink, yellow or orange corollas (cf. Faegri and Pijl, 1980).
The remaining species visited by the hummingbirds were classified as non-ornithophilous. These non-ornithophilous species were further divided into melittophilous (flowers adapted for bee pollination), sphingophilous (flowers adapted for sphingid pollination) and chiropterophilous (flowers adapted for bat pollination) species, according to the floral characteristics described by Faegri and Pijl (1980), and entomophilous species, which can be pollinated by insects of two or more taxonomic groups. Additionally, four other species were classified as chiropterophilous-ornithophilous (flowers with transitional characteristics between the ornithophily and chiropterophily syndromes), according to the floral characteristics described by Sazima et al. (1994) and Sanmartin-Gajardo and Sazima (2005).

All flowering individuals potentially visited by hummingbirds were recorded. The phenological patterns of the species were categorised according to Newstrom et al. (1994). An analysis of variance (ANOVA) with a Tukey test a posteriori was conducted to verify whether the number of flowering species per month differed among the distinct habitats.

The flower density was calculated monthly for each plant habitat using the total number of flowers per total sampling area (flowers/m²) (Araujo, 1996). The flower density (flowers/m²) was transformed to log(n+1), and an ANOVA with a Tukey test a posteriori was conducted to verify whether the flower density differed among the habitats. A Mann-Whitney U test was performed to compare the total density of flowers per month between the ornithophilous and non-ornithophilous species.

The sugar concentration and volume of nectar were quantified in flowers and, in Paliavana sericiflora Benth. (Gesneriaceae), also in flower buds, bagged the day before and measured between 10:00 and 13:00 h the following day. The nectar volume was measured using a Hamilton microsyringe, and the sugar concentration was measured using an Instrutherm pocket refractometer 0-32% (cf. Galetto and Bernardello, 2005). Flowers that showed sugar concentrations higher than 32% had their volume diluted with an equal portion of water. The intended sample size consisted of 20 flowers from three different individuals; however, intrinsic limitations of each species (such as the individual density in the study area and/or plants that produce only one flower every two or three days) and limitations of the study resulted in varied sample sizes. The sugar concentration and nectar volume were converted into mg of sugar/µl of nectar according to Galetto and Bernardello (2005).
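As a worked illustration of the final conversion step above, the sketch below turns a refractometer reading into mg of sugar per µl of nectar using a polynomial fit widely cited in the nectar literature (after Galetto and Bernardello, 2005); the coefficients and the example values are assumptions for illustration, not necessarily the authors' exact procedure.

```python
# Hedged sketch of the nectar unit conversion described above: a refractometer
# reading (% sucrose, w/w) is turned into mg of sugar per microliter using a
# polynomial fit commonly cited in the nectar literature. The coefficients are
# an assumption for illustration, not necessarily this paper's exact values.

def sugar_mg_per_ul(brix_percent: float) -> float:
    """Approximate mg of sugar per uL of nectar from a Brix reading (% w/w)."""
    c = brix_percent
    return 0.00226 + 0.00937 * c + 0.0000585 * c**2

def total_sugar_mg(volume_ul: float, brix_percent: float) -> float:
    """Total milligrams of sugar in a flower's standing crop of nectar."""
    return volume_ul * sugar_mg_per_ul(brix_percent)

# Hypothetical example: 12 uL of nectar at a 22% refractometer reading.
print(f"{total_sugar_mg(12.0, 22.0):.2f} mg of sugar")  # ~2.84 mg
```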
Observations of hummingbird visitors were made monthly using direct focal observations of plants, each lasting 40 to 180 min. The observations were recorded mainly between 06:00-10:00 AM and 03:00-06:00 PM. When possible, more than one focal individual was observed (modified from Rocca-de-Andrade, 2006). Casual observations of visits, the plant species visited and the species of hummingbird visitors were also noted. The visits were classified as (1) legitimate, when the hummingbirds contacted the anthers and/or stigmas in a way that could result in pollination, or (2) illegitimate, when the hummingbirds did not contact the anthers and/or stigmas. The hummingbirds were identified by direct visual observation during their visits to flowers, and photographs were taken during the visits. The time, number of visits and behaviour of the hummingbirds during the visits were recorded. The occurrence and outcome of agonistic interactions between hummingbirds were also recorded. Aggressive manifestations between hummingbirds, such as chasing or pecking, were considered agonistic interactions (Machado and Rocca, 2010). Hummingbirds with evident sexual dimorphism were considered separately in the analyses.

The hummingbird visit frequencies (number of flowers visited per plant min⁻¹ number of observed flowers⁻¹) were transformed to log(n+1). A Mann-Whitney U test was used to compare the hummingbird visit frequencies between the ornithophilous and non-ornithophilous flower species.

An analysis of covariance (ANCOVA) was performed to determine whether the number of flowers observed and the category of plant species visited (ornithophilous or non-ornithophilous) affected the number of foraging bouts. These data were transformed to log(n+1).

The species visited by hummingbirds were distributed among 41 genera and 20 families. The Asteraceae (10 species, 19.6%) and Bromeliaceae (10 species, 19.6%) families contained the most visited species. The other families were represented by four, two or one species (Appendix 1).

More than one third of the species (43%) visited by the hummingbirds in the study area were endemic to the Espinhaço Range. Eight species were classified in the lists of endangered species of Minas Gerais (Minas Gerais, 1997) and/or Brazil (Brasil, 2008). Three of these species (Chronopappus bifrons (DC. ex Pers.) DC. - Asteraceae, Piptolepis leptospermoides (Mart. ex DC.) Sch.Bip. - Asteraceae and Pilosocereus aurisetus (Werderm.) Byles and G.D. Rowley - Cactaceae) are considered to be critically endangered (Appendix 1).

Characteristics of plants visited by hummingbirds

Most of the species visited by the hummingbirds (55%) were shrubs. Only one species was a tree, another was a shrubby vine, and 41% of the species (N = 21) were herbs. Four herbs were exclusively epiphytes, and another four were exclusively lithophytes (Table 1). Many of the visited species had bracts or petals of bright colours, such as lilac (17.6%), yellow (19.6%), red (13.7%) or pink (19.6%), but flowers with colours less attractive to hummingbirds, such as white (17.6%) and green (5.8%), were also visited (Appendix 1). Most of the species visited (84.3%) presented tubular flowers, with a highly variable mean corolla length among these species. The smallest corolla length (4.4 ± 1.1 mm) was observed in the entomophilous species Lessingianthus roseus (Mart. ex DC.) H.
Rob. (Asteraceae), and the largest corolla length (49 ± 5.65 mm) was observed in the ornithophilous species Rhodophiala cipoana Ravenna (Amaryllidaceae). The mean diameter of the corolla also varied widely among species, measuring 0.75 ± 0.35 mm in the entomophilous species Trixis vauthieri DC. (Asteraceae) and 25.1 ± 2.5 mm in Paliavana sericiflora (Gesneriaceae; Appendix 1).

The mean number of open flowers per individual per day ranged from 1 ± 0 in R. cipoana to 1006 ± 2018 in Eremanthus erythropappus and was significantly higher in the non-ornithophilous species (p = 0.046, N = 50) (Appendix 1).

Nectar production was highly variable among the species visited (Table 1). Some species (Eremanthus crotonoides, E. erythropappus and P. leptospermoides - Asteraceae; Gaylussacia brasiliensis (Spreng.) Meisn. and G. hispida DC. - Ericaceae; Erythroxylum vaccinifolium Mart. - Erythroxylaceae; and Vochysia microphylla G. Shimizu and K. Yamamoto - Vochysiaceae) produced nectar in amounts too small to measure with the collection method used. Among the species in which it was possible to obtain these measurements, the mean nectar volume ranged from 0.5 ± 0 μl in the entomophilous T. vauthieri to 260.0 ± 0 μl in P. aurisetus, a species with attributes intermediate between the chiropterophily and ornithophily syndromes.

Flowering seasonality

The flowering phenology at the population level of most of the species visited by the hummingbirds (69%) showed annual or supra-annual patterns with an intermediate duration of one to five months (sensu Newstrom et al., 1994); seven of these species, which flowered for only one month, potentially represented annual or supra-annual patterns with brief flowering (sensu Newstrom et al., 1994). The other species showed continual patterns, with flowering periods longer than five months (Figure 2).

Throughout the study period, ornithophilous and non-ornithophilous resources were available to the hummingbirds. In 2008, the first year of the study, the largest number of flowering species was recorded from March to July, and in 2009, the second year, the largest number was recorded from March to May (Figure 2). The number of flowering species per month differed among the vegetation types (ANOVA: F(2,69) = 30.492, p < 0.001), with the highest number observed in the TCR (9 ± 3.3, N = 24), followed by the OPF (5.6 ± 3, N = 24, p < 0.001) and the CAM (2.7 ± 1.6, N = 24, p < 0.001) (Figure 3A).

The flower density of the ornithophilous and non-ornithophilous species was similar (U = 220, p = 0.161). Higher densities of ornithophilous flowers were recorded from January to April and August 2008 and from February to March 2009, while the highest densities of non-ornithophilous flowers were recorded in August 2007 and August and October 2008 (Figure 4). The total flower densities differed among the vegetation types (ANOVA: F(2,69) = 3.921, p = 0.024), with higher densities in the CAM (0.28 ± 3.41, N = 24) than in the OPF (0.017 ± 0.03, N = 24, p = 0.025) and similar densities between the TCR (0.165 ± 0.259, N = 24) and the CAM (p = 0.815) and between the TCR and the OPF (p = 0.105) (Figure 3B).

Hummingbird visits

Although 13 species of hummingbirds were recorded in the AP during the study period (Rodrigues, 2011), only six species visited flowers. Of these, five species belonged to the subfamily Trochilinae and one (Phaethornis pretrei) to the Phaethornithinae.

The only plant species that received only illegitimate hummingbird visits was Hillia parasitica Jacq. (Rubiaceae), the flowers of which were visited by the males of A. scutatus. The flowers of P.
sericiflora also received illegitimate hummingbird visits from the males of A. scutatus. Illegitimate hummingbird visits were also recorded in the ornithophilous species Agalinis angustifolia (Mart.) D'Arcy (N = 13, 50% of visits) and N. strigillosus (N = 13, 12% of visits), which were visited by the males and females of A. scutatus. All of the illegitimate hummingbird visits involved contact with holes in the base of the corolla of the flowers and/or, in the case of P. sericiflora, of the buds.

The hummingbirds often visited the flowers in traplines (sensu Feinsinger and Colwell, 1978), that is, shifting through the area in search of resources, visiting flowers at intervals of 10 to 30 minutes and subsequently disappearing from the clumps of flowers after visitation. We observed that only C. serrirostris and the males of A. scutatus defended territories, in the CAM and TCR, respectively. Usually, these species perched in the same areas and chased other approaching hummingbirds (frequently another individual of the same species). The area defended by the males of A. scutatus always contained three or more species of flowering plants at the same time.

The number of feeding bouts of hummingbirds increased with the total number of flowers per focal plant observed (ANCOVA: F(1,213) = 47.520, p < 0.001). However, the number of hummingbird feeding bouts did not vary between the ornithophilous and non-ornithophilous species (ANCOVA: F(1,213) = 1.938, p = 0.165) (Figure 5).

Forty-nine agonistic interactions were recorded between the hummingbirds. Most (75.5%) of the interactions were observed between the males of A. scutatus. This hummingbird also displaced female A. scutatus (N = 4) and was displaced once each by C. largipennis and C. serrirostris. Moreover, one agonistic interaction was observed between females of A. scutatus and one between females of C. lucidus, and three interactions occurred between individuals of C. serrirostris. A female C. lucidus displaced P. pretrei once.

Richness and composition of plants visited by hummingbirds

The richness of hummingbird-visited species in the Alto do Palácio (AP) is similar to that reported in studies conducted in six localities of the campos rupestres in the southern area of the Espinhaço Range (53 species; Vasconcelos and Lombardi, 2001). However, the richness is higher than that recorded in a degraded area of campos rupestres in the southern area of the Espinhaço Range (10 species; Vasconcelos and Lombardi, 1999). This observation might be related to the fact that the campos rupestres of the AP are in a protected area that is adequately preserved and that serves as a refuge for numerous plant species, many of which are endemic and/or threatened.

In the campos rupestres studied by Vasconcelos and Lombardi (1999), many of the plant species might have become extinct because of the degradation of the area caused by mining. Moreover, the richness of hummingbird-visited species in this study is also higher than that recorded in the campos rupestres in the northern area of the Espinhaço Range (36 species; Machado et al., 2007). This observation might be related to the higher number of vegetation types sampled in the AP and to the floristic differences between the AP (an area influenced by the Atlantic Forest) and the area studied by Machado et al. (2007) (an area influenced by the Caatinga biome).
Compared with studies conducted in lowland habitats at other sites in Brazil, the richness of hummingbird-visited species in the AP is similar to that reported in an Atlantic Forest area (50 species; Araujo, 1996). However, this richness is higher than that recorded in cerrado sites (14 species − Rodrigues and Araujo, 2011; 10 species − Araújo et al., 2011) and in the southern Pantanal (21 species − Araujo and Sazima, 2003), environments with more seasonal relative humidity than the AP.

Furthermore, the number of ornithophilous species recorded in the AP was lower than that reported in the campos rupestres in the southern region of the Espinhaço Range (32 species), which is probably because of the greater number of areas (six sites) sampled by Vasconcelos and Lombardi (2001) and the methodology used, in which the record of plants visited was obtained by observing the hummingbirds and not by direct observation of the plants. The same pattern recorded for the number of hummingbird-visited species in the AP, when compared with other open-habitat mountaintops (Vasconcelos and Lombardi, 1999; Machado et al., 2007), was recorded for the number of ornithophilous species (more species in the AP). The number of ornithophilous species visited by hummingbirds in this study is also higher than that recorded by Freitas and Sazima (2006) (five species) in the campos de altitude of the Serra da Bocaina. Although this open-habitat mountaintop has a high proportion of plant species (see Freitas and Sazima, 2006), it is poor in ornithophilous species. According to Freitas and Sazima (2006), the ornithophilous species constitute secondary nectar sources for the hummingbirds, which usually find their main nectar sources in the surrounding high-altitude forests.

The number of ornithophilous species reported by Freitas and Sazima (2006) in the campos de altitude (five species) is similar to that recorded in the open fields (OPF) of the present study, which seems to be related to the similar floristic characteristics of these habitats.

Moreover, the number of ornithophilous species recorded in the AP was higher than that reported in other communities of Brazil, such as cerrado sites (seven species − Araújo et al., 2011; six species − Rodrigues and Araujo, 2011; and five species − Silberbauer-Gottsberger and Gottsberger, 1988), the Pantanal (six species − Araujo and Sazima, 2003) and the Caatinga (12 species − Machado, 2009). These numbers were likely found because the AP region is influenced by Atlantic Forest vegetation and therefore represents an area of humid campos rupestres. Many plant species are specialised for hummingbird pollination in cold and rainy environments, often prevailing at high elevations (Cruden, 1972; Stiles, 1978; Bleiweiss, 1998; Dalsgaard et al., 2009), primarily because of the endothermic physiology of hummingbirds, which enables the permanence of these pollinators during cold periods at high altitudes (Cruden, 1972; Bleiweiss, 1998).

The proportion of non-ornithophilous species (63.9%) visited by hummingbirds in this study was similar to that reported in other studies (e.g. Araujo, 1996; Araujo and Sazima, 2003; Machado et al., 2007; Rodrigues and Araujo, 2011). The high proportion of non-ornithophilous species visited by the hummingbirds demonstrates the high degree of generality of these birds concerning the use of floral resources (Araujo and Sazima, 2003; Rodrigues and Araujo, 2011) and their increased memory and exploratory capacity (Pike, 1978; Machado et al., 2007). These characteristics permit this bird group to locate and use resources with features that are frequently inconspicuous and not adapted to hummingbird pollination.
The high degree of richness of the plant species visited by the hummingbirds in the TCR might be related to the complexity of this vegetation type. The TCR has a vegetation complexity that is intermediate between the CAM and the OPF (see Rodrigues and Rodrigues, 2011). It presents rocky outcrops, which allow the occurrence of various plant families, including Velloziaceae, Amaryllidaceae, Cactaceae, Bromeliaceae, Asteraceae, Ericaceae and Orchidaceae (Rapini et al., 2008). Many species of these families (e.g., Amaryllidaceae, Cactaceae, Bromeliaceae, Ericaceae and Orchidaceae) are typically ornithophilous, with all or most of the species being exclusive to this vegetation type.

However, despite the total number of plant species visited by hummingbirds and the higher number of species flowering monthly in the TCR, the flower density was highest in the CAM. The forest vegetation density of the CAM provides greater vertical stratification for the establishment of many epiphytic ornithophilous species. This fact and the smaller extension of the sampled area in the CAM probably account for the high flower density recorded in this vegetation type.

The CAM was the densest and most humid vegetation type sampled and had the greatest proportion of ornithophilous species, mainly epiphytic bromeliads, whereas the OPF contained open vegetation growing in shallow, dry soil and hence had the lowest proportion of ornithophilous species. Thus, the differences in the proportions of ornithophilous and non-ornithophilous species between these vegetation types reflect the differences in their floristic composition (Rapini et al., 2008), which are related to biogeographical and edaphic factors and to humidity differences between them.

Although the Asteraceae family is rarely used as a food resource by hummingbirds in other highland and lowland communities (Snow and Teixeira, 1982; Snow and Snow, 1986; Araujo, 1996; Rocca-de-Andrade, 2006; Araujo and Sazima, 2003; Rodrigues and Araujo, 2011; Freitas and Sazima, 2006), at our study site and in the study by Vasconcelos and Lombardi (2001) this family was used mainly by the endemic hummingbird Augastes scutatus. In many localities of campos rupestres in the Espinhaço Range, Asteraceae is one of the most representative families, and in the Serra do Cipó it is the most abundant in species richness (Giulietti et al., 1987). Thus, the use of Asteraceae species by the hummingbirds in the campos rupestres seems to be related to the association of this abundant resource with A. scutatus, a species endemic to the Espinhaço Range (Vasconcelos and Rodrigues, 2010) and an abundant hummingbird at this site (Rodrigues, 2011).

Characteristics of plants visited by hummingbirds

The hummingbirds visited many flowers with tubular corollas, but in the AP, flowers with a wide range of corolla lengths and diameters were visited, as recorded in other studies (Vasconcelos and Lombardi, 2001; Rocca-de-Andrade, 2006; Rodrigues and Araujo, 2011). This feature appears to be related to the high proportion of non-ornithophilous flowers visited by the hummingbirds, as well as to the different bill lengths of these birds at the present study site (Rodrigues, 2011).
The use of pale-coloured flowers by the hummingbirds in the AP also supports the idea that these birds have an increased memory capacity (Pike, 1978), being able to associate colours other than bright ones, such as red, yellow and purple, with nectar sources. Moreover, as reported in other studies (Araujo and Sazima, 2003; Rodrigues and Araujo, 2011), the sugar concentration, nectar volume and total amount of sugar in the nectar were similar between the non-ornithophilous and ornithophilous species. Therefore, the hummingbirds at the AP site also visited pale-coloured non-ornithophilous flowers with nectar offerings similar to those found in the ornithophilous flowers. However, the amount of nectar in the flowers of some plants that the hummingbirds visited in the AP was extremely low, and therefore it was not possible to collect samples. The low amount of nectar available per flower is probably offset by the high number of flowers per individual in these species (mean < 24 flowers/individual). A similar pattern was recorded in the campos rupestres in the northern portion of the Espinhaço Range (Machado et al., 2007).

Flowering seasonality

We found a large overlap of flowering throughout the year among the species visited by the hummingbirds, which is consistent with reports from other highland areas (e.g., Vasconcelos and Lombardi, 1999; Machado et al., 2007; Machado, 2009) and lowland areas (e.g., Araujo and Sazima, 2003; Rocca-de-Andrade, 2006). Thus, the nectar availability supports the resident hummingbird species at the study site (Rodrigues, 2011). In turn, the resident hummingbirds provide a reproductive advantage to the plants as potential pollinators in the area.

Although the density of flowers was similar between the ornithophilous and non-ornithophilous plants, higher densities of ornithophilous flowers were normally recorded between January and April, which is likely related to the rainy season. Such higher densities seem to be a common feature of ornithophilous species in other areas (Sazima et al., 1996; Araujo, 1996; Buzato et al., 2000; Araujo and Sazima, 2003; Rodrigues and Araujo, 2011). However, higher densities of non-ornithophilous flowers were recorded in August and October, and most of the flowering species visited by the hummingbirds were recorded at the end of the rainy season and the start of the dry season. This period coincided with increasing numbers of young Augastes scutatus, the most common hummingbird in the area (Rodrigues and Rodrigues, 2011).

Hummingbird visits

With the exception of H.
parasitica, which always received illegitimate visits, most species of hummingbirds were able to contact the reproductive parts of the flowers during their visits to the other plants, both ornithophilous and non-ornithophilous, indicating that hummingbirds possibly pollinate these plant species. Moreover, during the focal observations, we noted that most of the non-ornithophilous species received a low frequency of visits or were not visited by other groups of pollinators. Thus, at the Alto do Palácio, the hummingbirds are probably acting as effective pollinators of many non-ornithophilous species, as previously observed at other sites (Arizmendi and Ornelas, 1990; Araujo, 1996; Araujo and Sazima, 2003; Rodrigues and Araujo, 2011); here, Augastes scutatus was responsible for the greatest number of visits to non-ornithophilous species. However, more detailed data concerning the pollen transfer efficiency of the hummingbirds that forage on non-ornithophilous species are necessary and might help demonstrate that hummingbird pollination in Neotropical communities is still underestimated when considered restricted to ornithophilous plant communities (Machado et al., 2007).

The similarity in the frequency of visits by hummingbirds to ornithophilous and non-ornithophilous flowers at the AP site might be explained by the high number of open flowers per individual per day in the entomophilous and melittophilous species (Araujo and Sazima, 2003; Rodrigues and Araujo, 2011), as well as by the nectar offered at the end and beginning of the day by flowers with characteristics intermediate between the chiropterophilous and ornithophilous syndromes. In addition, the use of these chiropterophilous-ornithophilous species by the hummingbirds appears to be common in the campos rupestres of the Espinhaço Range (8% of the plants visited by the hummingbirds in this study; 12% in the campos rupestres of the northern portion of the Espinhaço Range, Machado et al., 2007) compared with other open-habitat mountaintops, such as the campos de altitude (no species; Freitas and Sazima, 2006), the Atlantic Forest (no species; Rocca-de-Andrade, 2006), the Pantanal (no species; Araujo and Sazima, 2003) and cerrado sites (no species; Araújo et al., 2011; Rodrigues and Araujo, 2011).

Many hummingbird species are commonly recorded defending feeding territories in the campos rupestres (Vasconcelos and Lombardi, 2001; Jacobi and Antonini, 2008). At our study site, only C. serrirostris and the males of A. scutatus were observed defending territories, and these species participated in the highest number of agonistic interactions. However, the territoriality of these species might be attributable to territory defense during the reproductive period (Rodrigues, 2011) rather than to the amount of floral resources available in the area.
Conclusions

Although some authors initially considered the campos rupestres a portion of the Cerrado biome (Eiten, 1992; Silva, 1995, 1998), this study demonstrates that the species richness and the number of ornithophilous species visited by the hummingbirds at the AP are more similar to hummingbird-plant communities of the Atlantic Forest (Araujo, 1996; Buzato et al., 2000; Rocca-de-Andrade, 2006) than to those of the Cerrado communities (Silberbauer-Gottsberger and Gottsberger, 1988; Araújo et al., 2011; Rodrigues and Araujo, 2011) and other Brazilian highland open-habitat communities (Freitas and Sazima, 2006; Machado et al., 2007). This observation is consistent with the idea that the campos rupestres constitute a vegetation type apart from the Cerrado, occurring in contact zones between the Cerrado, Atlantic Forest and Caatinga biomes (Vasconcelos and Lombardi, 2001; Vasconcelos, 2008; Rodrigues et al., 2011). Therefore, the similarities between the plant communities visited by the hummingbirds at the AP and the communities of the Atlantic Forest result from the strong influence of the Atlantic Forest vegetation on the Alto do Palácio. Studies such as this one, conducted in other areas of the campos rupestres that are influenced by Atlantic Forest, Cerrado and Caatinga vegetation, are necessary to confirm this hypothesis.

As previously reported at other sites (Feinsinger, 1976; Stiles, 1978; Araujo, 1996; Araujo and Sazima, 2003; Rocca-de-Andrade, 2006; Rodrigues and Araujo, 2011), this study showed that the hummingbirds use flower species with characteristics related to pollination by other pollinator groups. These facts support the idea that the combinations of floral traits of real plant species rarely conform to traditional pollination syndromes (Ollerton et al., 2009).

Figure 1. Monthly rainfall and mean temperature at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil. Data from August 2007 to July 2009.

Figure 2. Flowering seasons of the plant species visited by the hummingbirds in the first year (August 2007 to July 2008, dotted lines) and second year (August 2008 to July 2009, solid lines) of the study at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil.

Figure 3. Log(n+1) of the total density of flowers per month and the total number of flowering species per month over the study period in the capões de mata (CAM), open fields (OPF) and typical campos rupestres (TCR) sampled at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil.

Figure 4. Density of the ornithophilous and non-ornithophilous plant species at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil.

Figure 5. Log(n+1) of the number of feeding bouts relative to log(n+1) of the number of open flowers per individual of the non-ornithophilous and ornithophilous plant species observed and visited by the hummingbirds at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil.
1 Mean number of open flowers per plant per day; 2 Internal corolla length, from base to opening (effective corolla, sensu Wolf et al., 1976); 3 Number of flowers visited per minute per number of observed flowers; # Plants which occurred in the three habitats; * Pollination syndrome intermediate between the chiropterophily and ornithophily syndromes. Appendix 1. Continued...

Table 1. Volume and concentration of nectar accumulated and the amount of sugar in the nectar of the ornithophilous and non-ornithophilous species (in bold) visited by the hummingbirds at the Alto do Palácio, Serra do Cipó National Park, MG, southeastern Brazil. X = mean, SD = standard deviation and N = number of sampled flowers.
Secure Energy-Efficient Resource Allocation Algorithm of Massive MIMO System with SWIPT

In this paper, we consider the resource allocation problem to maximize the minimum (max-min) user's secure energy efficiency (SEE) in downlink massive multiple-input multiple-output (MIMO) systems with simultaneous wireless information and power transfer (SWIPT). First, the transmission power and power splitting ratio are designed to achieve the max-min user's SEE subject to a harvested energy threshold and the constraints on transmission power and power splitting ratio. Second, the optimization problem is non-convex and very difficult to tackle. In order to solve it, we convert it into a series of parametric subproblems by fractional programming. Then, we employ the first-order Taylor expansion and the successive convex approximation (SCA) method to solve the parametric problems. Next, a secure energy-efficient resource allocation (SERA) algorithm with the bisection method is proposed to find the max-min SEE of the system. Finally, simulation results show the effectiveness and superiority of the SERA algorithm.

Introduction

With the rapid development of mobile internet technology, the broadcast characteristics of wireless channels have led to severe information leakage problems. Physical layer security has increasingly attracted research interest in 5th-generation (5G) mobile communication because it can ensure protection in the information-theoretic sense even when eavesdroppers have powerful computing ability [1]. Massive multiple-input multiple-output (MIMO) is a promising technology in 5G which can be used to improve the spectrum efficiency and security of wireless communication. Moreover, massive MIMO can be used to enhance the security and power transfer efficiency of a simultaneous wireless information and power transfer (SWIPT) system because it can beamform the energy and information more accurately with large array gain [2,3].

Security problems in massive MIMO systems have been investigated in References [4-11]. The authors in Reference [4] studied the impact of residual hardware impairment (HWI) on massive MIMO secure communication and showed that HWI has a negative impact on information security. In Reference [5], S. Timilsina et al. studied the effect of active eavesdropping on physical layer security and revealed that active eavesdroppers have a stronger ability to intercept confidential information with higher pilot power. In Reference [6], Z. Chu et al. studied a MIMO secrecy channel with SWIPT, where the minimum harvested energy is maximized under secrecy rate requirements. In Reference [7], H. D. Tuan et al. considered the problem of power allocation to maximize the worst link's secrecy rate.

System Model

As shown in Figure 1, a single-cell downlink massive MIMO system with SWIPT consists of one BS connected with K users and one eavesdropper [17-19]. The number of transmit antennas at the BS is N, which satisfies N >> K. Each user and the eavesdropper are equipped with a single antenna. The BS simultaneously serves the K users in the same time-frequency resource block [20]. In particular, it is assumed that the active eavesdropper perfectly knows the pilot sequence transmitted by the mth user node [21], where m ∈ {1, ..., K}. Without loss of generality, we suppose that each time slot has a unit length. Each time slot has two parts.
In the first part of the slot, of duration τ, the users use the energy harvested in the previous time block to transmit pilot information to the BS. The active eavesdropper also transmits pilot information to the BS. In the remaining 1 − τ part of the slot, the BS transmits information and energy to the users and the eavesdropper. Each user's terminal receiver splits the received signal into two parts: one part is used for information decoding, and the other is used for energy harvesting. The eavesdropper uses its received signal mainly for information decoding [15].

The BS-to-user and BS-to-eavesdropper channels are denoted by G and h. From the stated definitions, they can be written as G = H̃ D^{1/2} and h = √ϑ h̃, where H̃ and h̃ are the small-scale Rayleigh fading components, D is diagonal with D_{m,m} = β_m, m ∈ {1, ..., K}, and β_m and ϑ are the path losses of the BS-to-user and BS-to-eavesdropper channels, respectively. The path loss is modeled as β_m = l_m^{−r} and ϑ = d^{−r}, where r is the path loss exponent, l_m is the distance between the BS and user m, and d is the distance between the BS and the eavesdropper [5].

In TDD massive MIMO, the users-to-BS and eavesdropper-to-BS channels are estimated at the BS via transmitted uplink pilots, and the BS obtains the downlink channel state information (CSI) through channel reciprocity. With the eavesdropper attacking the mth user's pilot, the superimposed pilot signal received at the BS takes the standard form

Y_p = √(p_u τ) G s + √(p_e τ) h s_m + N,

where p_u and p_e denote the average transmit powers of the users and the eavesdropper; s = [s_1, ..., s_m, ..., s_K]^T, with s_m s_m^H = 1 and s_m s_{m'}^H = 0 for m ≠ m' and m, m' ∈ {1, ..., K}; and N is noise with independent and identically distributed (i.i.d.) CN(0, 1) elements. Assuming that the BS uses minimum mean squared error (MMSE) estimation, the BS-to-user channel estimate is obtained as [5,19]

ĝ_m = (√(p_u τ) β_m / (1 + p_u τ β_m + p_e τ ϑ)) Y_p s_m^H.

To prevent the eavesdropper from stealing confidential information, the BS transmits artificial noise (AN) alongside the communication signal [13]. The BS uses a power splitting ratio to divide its power between the information signal and the AN [5,19]; the transmitted signal at the BS can then be written as

x = √(P_t) [Ŵ ϕ^{1/2} q + V (I_K − ϕ)^{1/2} z],

where P_t represents the transmit power; ϕ = diag(ϕ_1, ϕ_2, ..., ϕ_K), with ϕ_m the power splitting ratio coefficient of the BS for user m (m = 1, ..., K); the matrices Ŵ ∈ C^{N×K} and V ∈ C^{N×K} denote the precoding matrices for the data and the AN [13,22]; q is the data symbol vector; and z is an AN sequence.

The users apply a receive power splitting ratio to the signal from the BS: one part is used to harvest energy, and the other part is decoded into useful information, while the eavesdropper mainly wiretaps the useful information from the BS [23]. In the received-signal expressions for the mth user and the eavesdropper [5,19,20], ρ = {ρ_1, ..., ρ_m, ..., ρ_K} denotes the power splitting ratios at the user receivers. The first term denotes the desired signal, the second term is the beamforming gain uncertainty, the third term is the inter-user interference, and the remaining terms are the AN leaked to legitimate users and the additive white Gaussian noise (AWGN) at the mth user node. Under the linear harvesting model, the mth user's harvested energy is

E_m = η_m (1 − ρ_m)(1 − τ) E{|y_m|²},

where η_m (0 < η_m ≤ 1) is the energy conversion efficiency of user m, which converts the harvested energy into electric energy stored at the user.
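To illustrate the receive-side power splitting just described, the sketch below computes the decoding power and harvested energy under the linear energy-harvesting model assumed above; the numerical values (received power, ρ, η, τ) are placeholders, not the paper's parameters.

```python
# Illustrative sketch of receive power splitting with a linear energy
# harvesting model: a fraction rho of the received power goes to the
# information decoder and (1 - rho) to the energy harvester.
# All numbers are placeholders, not the paper's simulation parameters.

def split_receive(p_rx_watts: float, rho: float, eta: float, frac_time: float):
    """Return (decoding power, harvested energy) for one unit-length slot.

    p_rx_watts: average received signal power at the user
    rho:        power splitting ratio routed to information decoding
    eta:        energy conversion efficiency, 0 < eta <= 1
    frac_time:  fraction of the slot used for downlink transmission (1 - tau)
    """
    p_decode = rho * p_rx_watts
    e_harvest = eta * (1.0 - rho) * p_rx_watts * frac_time
    return p_decode, e_harvest

# Hypothetical example: -30 dBm received power, rho = 0.6, eta = 0.8, tau = 0.1
p_rx = 10 ** (-30 / 10) * 1e-3          # dBm -> watts
p_id, e_eh = split_receive(p_rx, rho=0.6, eta=0.8, frac_time=0.9)
print(f"decoding power = {p_id:.2e} W, harvested energy = {e_eh:.2e} J")
```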
Provided that the worst-case uncorrelated additive noise is distributed as independent Gaussian noise with the same variance [5], the achievable rate of the mth user can be lower bounded by

R_m = (1 − τ) log₂(1 + γ_m),

where γ_m is the effective signal-to-interference-plus-noise ratio (SINR) of user m and γ_{m,o} denotes the corresponding effective noise term.

Provided that the eavesdropper can eliminate the interference terms of the other users [5,19], the achievable rate of the mth user's signal leaked to the eavesdropper is upper bounded by

R_{e,m} = (1 − τ) log₂(1 + γ_{e,m}),

where γ_{e,m} is the eavesdropper's SINR for the mth user's signal. Hence, the mth user's achievable secure rate is [5,19]

R_m^{sec} = [R_m − R_{e,m}]^+.

Using Wishart matrix results to simplify Equations (8)-(14) as in References [5,19], closed-form expressions are obtained for the energy harvested by user m (Equation (15)), the achievable secure rate of user m (Equation (16)), and the achievable rate of the mth user's signal leaked to the eavesdropper (Equation (17)).

Problem Formulation and Energy-Efficient Algorithm

With the above analysis of the achievable rates and energy for the users and the eavesdropper, the max-min SEE optimization problem of the system can be expressed as

P_1: max_{P_t, ϕ, ρ} min_m R_m^{sec} / (ε(P_t ϕ_m (1 − τ) + p_u τ) + P_{c,m}), subject to C_1, C_2, C_3.

In P_1, the objective is to maximize the minimum SEE over all users. C_1 and C_3 are the transmit power and power splitting ratio constraints, and C_2 is the energy harvesting constraint for user m. ε(P_t ϕ_m (1 − τ) + p_u τ) and P_{c,m} denote the transmit power consumption attributable to the mth user and the circuit power consumption, where ε denotes the power consumption coefficient.

The optimal solution of Equation (18) can be found by fractional programming [24], i.e., by finding the zero root of the following parametric optimization problem F(λ) [25]:

F(λ) = max_{P_t, ϕ, ρ} min_m [R_m^{sec} − λ(ε(P_t ϕ_m (1 − τ) + p_u τ) + P_{c,m})].

According to the conclusions in References [24,25], the optimal solution of Equation (19) is the λ* such that F(λ*) = 0 holds. We define Γ_m as the parametric objective term of user m in F(λ). Due to the non-smoothness of the objective function, we introduce a new variable t to transform the equivalent non-smooth optimization problem in Equation (20) into a smooth one. Equation (18) can then be equivalently reformulated as

P_2: max_{P_t, ϕ, ρ, t} t, subject to Γ_m ≥ t, ∀m, and C_1, C_2, C_3.

P_2 has four optimization variables. First, the BS power splitting ratio coefficient ϕ_m and the transmit power P_t are fixed to obtain a suboptimal solution for ρ_m. Then, the suboptimal solution of ρ_m is used to iteratively optimize the power splitting ratio coefficient ϕ_m and the BS transmit power P_t. Finally, the variable t is updated.

First, we optimize ρ_m. Because ρ_m only appears in R_m^l(P_t, ϕ_m, ρ_m) from Equations (16) and (20), which is a monotonically decreasing function of ρ_m, the optimal ρ_m is determined by constraint C_2.

Then, optimizing ϕ_m and P_t is a hard problem. In Equation (21), the objective involves the difference of two convex functions and is therefore non-convex. We replace the non-convex part in Equation (21) by a linear approximation and rewrite constraint C_5; let the secure-rate term be written as the difference f_m(P_t, ϕ_m) − h_m(P_t, ϕ_m) of two concave functions.

Finally, we use the difference of convex functions algorithm (DCA) from References [14,26], yielding problem P_3. In Equation (25), C_5 is the difference of two concave functions, which can be effectively handled by sequential convex programming. At step n, we obtain an iterative power allocation by linearizing h_m(P_t, ϕ_m) around the previous iterate with a first-order Taylor expansion. We further transform P_3 into P_4, which is a smooth, standard convex optimization problem and can thus be efficiently solved by the successive convex approximation (SCA) method [15]. The convex program P_4 is therefore capable of successively approximating the solution of the original SEE optimization problem.
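The fractional-programming step above can be illustrated numerically: for the parametric problem, F(λ) is non-increasing in λ, so the root F(λ*) = 0 can be found by bisection, which mirrors the λ update used by the SERA algorithm below. In this toy sketch a one-dimensional grid search stands in for the convex SCA subproblem P_4; it is a sketch of the idea, not the paper's solver.

```python
# Toy illustration of the parametric (Dinkelbach-style) reformulation above:
# maximize N(x)/D(x) by finding the root of F(lam) = max_x [N(x) - lam * D(x)].
# The inner maximization over a 1-D grid stands in for solving the convex
# SCA subproblem P4; it is a sketch, not the paper's actual solver.
import numpy as np

x_grid = np.linspace(0.01, 1.0, 1000)        # feasible set (toy stand-in)
N = lambda x: np.log2(1.0 + 10.0 * x)        # "secure rate" surrogate
D = lambda x: 0.5 * x + 0.1                  # "power consumption" surrogate

def F(lam: float) -> float:
    """Optimal value of the parametric subproblem for a given lambda."""
    return float(np.max(N(x_grid) - lam * D(x_grid)))

lo, hi = 0.0, 100.0                          # bisection interval for lambda
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    if F(mid) > 0.0:                         # F is non-increasing in lambda,
        lo = mid                             # so the root lies to the right
    else:
        hi = mid

lam_star = 0.5 * (lo + hi)
x_star = x_grid[np.argmax(N(x_grid) - lam_star * D(x_grid))]
print(f"optimal ratio ~= {lam_star:.4f} at x ~= {x_star:.3f}")
```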
We employ a one-dimensional bisection method to update λ, where ρ*_m is the optimal value of ρ_m from Equation (22) and P*_t and ϕ*_m are the optimal values of P_t and ϕ_m from Algorithm 1.

Algorithm 1:
repeat
  Solve (27) to obtain P*_t and ϕ*_m
  Set n = n + 1, P^n_t = P*_t and ϕ^n_m = ϕ*_m
  Calculate F^n = min_m [f_m(ϕ^n_m, P^n_t) − h_m(ϕ^n_m, P^n_t)]
until F^n − F^{n−1} < ζ

Based on the above discussion, we propose the SERA algorithm for the max-min SEE in massive MIMO systems (Algorithm 2). We then characterize the computational complexity of the proposed algorithm. The complexity of the SCA method is O(I K^{3/2} log(1/ς)) [27], where ς is the convergence precision and I denotes the number of iterations. Therefore, the total computational complexity of the SERA algorithm is about O(I² K^{3/2} log(L/ς²)), where L is the length of the bisection search interval.

Simulation Results

In this part, simulation results are presented to show the effectiveness of the proposed algorithm. The simulation parameters are given as follows. The number of users is K = 4. The coordinates of the BS are (40, 0) m; the users are randomly distributed in the rectangular area [−20, 20] × [−20, 20] m. The path loss model is adopted with path loss exponent r = 3.3. The background noise is σ²_m = σ²_e = σ²_z = −90 dBm [14], and the fixed circuit power loss of each user is P_{c,m} = 5 dBm.

Figure 2 compares the minimum SEE of each algorithm for different P_max. When the transmit power P_max of the BS increases, the minimum SEE obtained by the minimum user's SEE maximization and by the users' system SEE maximization gradually increases and then remains unchanged; this is because the increase of the BS transmit power used against the eavesdropper reaches a balance with the information rate leaked to the eavesdropper. Moreover, as can be seen for the secure rate maximization algorithm, the resulting SEE decreases as the power increases. This shows the stability of the SERA algorithm.

Figure 3 shows the minimum SEE of the three algorithms for different numbers of BS antennas. The minimum SEE is monotonically increasing in the number of BS antennas for all three algorithms because of the array gain due to multi-antenna diversity. As the number of BS antennas increases, the secure rate of the three algorithms increases faster than the power consumption. In addition, it can be observed that the minimum SEE performance of the SERA algorithm is better than that of the other two algorithms.

In Figure 4, we show the convergence performance for different P_max. It can be seen that the SERA algorithm converges quickly: in the three cases under consideration, the algorithm converges to the optimal value within four iterations. When the number of BS antennas is N = 100, the numerical results reveal that the performance at P_max = 30 dBm is about 20.1% and 41.5% higher than at P_max = 25 dBm and P_max = 15 dBm, respectively.

In Figure 5, we show the minimum SEE versus the eavesdropper transmit power under the different algorithms. When the number of BS antennas is N = 100 and the BS transmit power is P_max = 30 dBm, the minimum SEE gradually decreases as the eavesdropper transmit power increases. From Equation (4), increasing the eavesdropper transmit power degrades the channel estimation. This leads to a decrease in the SEE and an increase in the information rate leaked to the eavesdropper.

Figure 6 shows the minimum SEE of the three algorithms for different numbers of users.
The minimum SEE decreases monotonically with the number of users. When the number of users is K = 4 and the BS maximum transmit power is P_max = 30 dBm, the minimum SEE obtained by the SERA algorithm is about 3% and 62.2% higher than that of the system-wide SEE maximization and the secure rate maximization algorithm, respectively. In addition, it can be observed that the minimum SEE performance of the SERA algorithm is better than that of the other two algorithms.

Conclusions

In this paper, we consider downlink massive MIMO systems employing artificial-noise (AN)-aided communication for physical layer security. The users use the energy harvested in the previous time block for channel estimation. At the BS, the power splitting ratio is used to allocate the information transmission power and the AN transmission power appropriately; the eavesdropper's wiretapping capability is reduced while the integrity of the confidential information transmission is ensured. The non-convex SEE maximization problem is first converted into a series of parametric optimization subproblems, which are then solved by the SCA method. A SERA algorithm with joint power splitting ratio and BS transmit power allocation is proposed to maximize the minimum SEE of the system. Simulation results show that the proposed algorithm provides higher SEE compared with other benchmark algorithms.

Author Contributions: Z.W. conceived and designed the system model; X.W. analyzed the data; Z.F. contributed the formal analysis; X.Y. performed the simulation and wrote the paper. All authors have read and agreed to the published version of the manuscript.
2020-01-02T21:46:53.731Z
2019-12-25T00:00:00.000
{ "year": 2019, "sha1": "7b1c55212715221d755811ea5f9c49e2aa8fad22", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/1/26/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0f3470ee7dbb0695e791cf4d20bd43c33d36afe1", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
59485018
pes2o/s2orc
v3-fos-license
The LQR Controller Design of Two-Wheeled Self-Balancing Robot Based on the Particle Swarm Optimization Algorithm A dynamics model is established for the self-designed two-wheeled self-balancing robot. This paper uses the particle swarm optimization algorithm to optimize the parameter matrices of an LQR controller so that the two-wheeled self-balancing robot achieves stable control while the overshoot and the oscillation frequency of the system are reduced at the same time. Simulation experiments prove that the LQR controller optimized by the particle swarm optimization algorithm improves system stability, obtains a good control effect, and has high application value.

Introduction

The two-wheeled self-balancing robot is a multivariable, nonlinear, high-order, strongly coupled, and inherently unstable motion control system, and it is a typical device for testing various control theories and methods; therefore, its research has great theoretical and practical significance. Because it has the advantages of simple structure, stable running, high energy utilization, and strong environmental adaptability, it has broad application prospects in both the military and civilian fields.

Since the 1980s, scholars in various countries have conducted systematic research on the two-wheeled self-balancing robot. A control system for the two-wheeled self-balancing robot based on fuzzy control can overcome the instability and nonlinearity of the system, but it relies too heavily on expert experience [1,2]. An optimal LQR controller has been designed on the basis of a system structure model, and the correctness and effectiveness of the LQR controller have been verified, but it is difficult to determine the weighting matrices Q and R [3-5]. The genetic algorithm has been successfully applied to the parameter optimization of the LQR controller of the inverted pendulum system and achieves a good control effect; however, its parameters are difficult to adjust, and it easily falls into local optima [6].

This paper takes the self-designed two-wheeled self-balancing robot as the research object. The Newtonian mechanics equation method and linearization near the balance point are used to establish the linearized mathematical model of the system. For this mathematical model, an LQR controller is designed based on particle swarm optimization, making full use of the search capability of the particle swarm algorithm to optimize the matrices Q and R of the LQR controller. The global optimal solution for Q and R is obtained so as to design the optimal state feedback control matrix K, overcoming the disadvantage of relying on experience and trial and error in the selection of Q and R in general LQR control design and making up for the heavy workload this entails. Simulation tests and comparison show that this method achieves a better control effect.
The Dynamics Model of the Two-Wheeled Self-Balancing Robot

The two-wheeled self-balancing robot structure is mainly composed of the body and the two wheels. The robot has two coaxial wheels driven by independent motors; the mass, moment of inertia, and radius of the two wheels are regarded as identical, so the center of gravity of the body is inverted above the axle, and balance is kept through motion. The two-wheeled self-balancing robot can be regarded as a vehicle-mounted inverted pendulum, so the dynamic analysis of the system is relatively complex. In the modeling process, this paper first analyzes the wheels and the pendulum separately, and then derives the dynamic state equation of the two-wheeled self-balancing robot by combining the two parts [7,8].

The two wheels are first taken as the research object; Figure 1 is the force analysis diagram of the left wheel. The left wheel force equations can be obtained from Newton's law [9] and the rotational torque formula [10], and the right wheel force equations follow in the same way. After rearrangement, a combined wheel equation is obtained. Among the quantities involved are the mass of the wheel, the moment of inertia of the wheel, the radius of the wheel, the wheel acceleration Ẍ along the x axis, the right and left wheel torques, the forces between the left and right wheels and the car body along the x axis, the interaction forces between the right and left wheels and the ground, and the rotation angle of the wheel about the axle direction.

The car body of the two-wheeled self-balancing robot is modeled as an inverted pendulum; the force analysis of the car body is shown in Figure 2. Using Newton's second law, the horizontal force equation is written in terms of the displacement of the car body's center of gravity relative to the ground; the force equation in the vertical direction of the car body and the sum of the torques about the car body's mass center follow likewise. For small angles, θ̇² ≈ 0, sin θ ≈ θ, and cos θ ≈ 1, so the linearized equations are obtained after the linearization process. Among the quantities involved are the angle of the car body deviating from the vertical axis, the moment of inertia of the car body, the mass of the car body, and the height of the car body's center of gravity above the axle. Expressing the output torque of the wheels in terms of these quantities, the state equation of the two-wheeled self-balancing robot is obtained, together with the output equation (10).

The Design of the Self-Balancing Robot LQR Controller.
The LQR method is a mature controller design method in modern control theory [11]; optimal control seeks the control input u*(t) that drives the system to the steady state while the quadratic performance index

J = ∫₀^∞ (xᵀQx + uᵀRu) dt

takes its minimum value. Here u* = −R⁻¹BᵀPx = −Kx, and P is the solution of the algebraic Riccati matrix equation AᵀP + PA + Q − PBR⁻¹BᵀP = 0 [12]. In the performance index (13), the matrices Q and R restrict each other, and the size of Q is proportional to the anti-interference ability of the system: increasing Q enhances the anti-interference ability of the system and shortens the adjustment time, but at the same time the oscillation of the system is strengthened and the energy consumption increases. Increasing R makes the system consume less energy, but the adjustment time increases. Therefore, the key of the design is to find the right weighting matrices Q and R. Once Q and R are determined, the state feedback matrix K is uniquely determined. However, in the general LQR controller design process, the selection of Q and R depends entirely on experience and trial and error, so the subjectivity is large, resulting in an imperfect controller design and affecting the control effect.

The Parameter Optimization Principle of the LQR Controller Based on the Particle Swarm Optimization Algorithm.

The particle swarm optimization algorithm, abbreviated as PSO, is a new optimization algorithm developed in recent years. It is a kind of evolutionary algorithm: it starts from random solutions and searches for the optimal solution through iteration [13].

Assume the swarm consists of m particles in a D-dimensional space, where the position and velocity of particle i are X_i = (x_{i1}, x_{i2}, ..., x_{iD}) and V_i = (v_{i1}, v_{i2}, ..., v_{iD}), i = 1, 2, ..., m. The best position that particle i has experienced is denoted by pbest(i), and the best position experienced by all particles in the group is denoted by gbest. The whole particle swarm updates the velocities and positions by tracking the individual extremum and the global optimum [14]. The particle optimization process is expressed by the velocity and position update equations (12), in which w is the inertia weight, c₁ is the weight coefficient with which a particle tracks its own historical best, c₂ is the weight coefficient with which a particle tracks the global best, and r₁ and r₂ are random numbers varying in [0, 1].
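A minimal sketch of the update rule in Equation (12), including the pbest/gbest bookkeeping described next, is given below; the bounds, coefficients, and the toy quadratic fitness are illustrative assumptions rather than the paper's tuned values (a minimization convention is used here).

```python
# Minimal sketch of the PSO velocity/position update of Equation (12):
# v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),  x <- x + v.
# Bounds, coefficients, and the sphere fitness are illustrative.
import numpy as np

def pso(fitness, dim=4, pop=40, gen=30, c1=2.0, c2=2.0,
        w_max=1.0, w_min=0.3, v_max=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(-v_max, v_max, (pop, dim))          # particle positions
    v = rng.uniform(-v_max, v_max, (pop, dim))          # particle velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for g in range(gen):
        w = w_max - (w_max - w_min) * g / gen           # linearly decreasing inertia
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)                   # keep speed in [-v_max, v_max]
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f                          # minimization convention
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

if __name__ == "__main__":
    best, best_f = pso(lambda p: np.sum(p ** 2))        # toy quadratic fitness
    print("best position:", np.round(best, 4), "fitness:", best_f)
```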
After the iterations, pbest(i) is the individual optimal solution of the particle, and gbest is the global optimal solution of the group. To give the algorithm a more accurate search scope, the movement speed of each particle is limited to [−V_max, V_max]; if V_max is too large, the particle will fly over the optimal solution, and if it is too small, the search easily falls into a local optimum. Assuming that the particle position is also defined on the interval [−V_max, V_max], the two-wheeled self-balancing robot state variables [x, ẋ, θ, θ̇] are regarded as the particle dimensions, and the initial positions and speeds of the particles are produced at random within a certain range. The fitness function is an important link in using the particle swarm optimization algorithm, as it is the standard guiding the iterative evolution of the whole particle swarm. Because what we design is a quadratic optimal control regulator, we adopt the linear quadratic performance index formula (10) as the fitness function. Q is a 6 × 6 symmetric positive semidefinite matrix; R is a constant positive definite matrix. In order to simplify the problem and give the weighting matrix a clear physical meaning, we choose the weighting matrix Q as a diagonal matrix, so that the performance index can be represented in terms of q₁, q₂, q₃, and q₄, the weights of the position, speed, angle, and angular velocity of the two-wheeled self-balancing robot, respectively, and R, the weight of the squared control input in the objective function.

The Parameter Optimization Steps of the LQR Controller Based on the Particle Swarm Optimization Algorithm

Step 1. Initialize the particle swarm. Set the speed coefficients c₁ and c₂, the maximum evolution generation gen, the size of the group pop, and the location of the initial search point and its speed; each particle then holds its current position.

Step 2. Calculate the fitness value F[i] of each particle.

Step 3. Compare the fitness value F[i] of each particle with its individual extremum pbest(i); if F[i] > pbest(i), replace pbest(i) with F[i].

Step 4. Compare the fitness value F[i] of each particle with the global extremum gbest; if F[i] > gbest, replace gbest with F[i].

Step 5. Update the particle's speed v and position x according to formula (12).

Step 6. If the end condition is met (the error is good enough or the maximum number of cycles is reached), exit; otherwise return to Step 2.

The Simulation Experiment of the LQR Controller Based on the Particle Swarm Algorithm

The parameter symbols, descriptions, and actual values of the two-wheeled self-balancing robot are shown in Table 1. Substituting the actual parameters of this system into the state equation gives the actual state equation. The initial state of the self-balancing robot system is set for the experiment; at the same time, the inertia weight formula [15] is selected as the linearly decreasing schedule w = w_max − (w_max − w_min)·g/gen, where w_max = 1, w_min = 0.3, gen = 30 is the number of evolution iterations, and g is the current evolution generation.

The initial population is a 40 × 8 matrix: the first four dimensions represent the updated particle positions and the last four represent the updated particle speeds. The particle position update curves are shown in Figure 3. The particle motion curves in Figure 3 do not lose regularity, and only one particle position lies in a critical condition. The PSO speed update curves are shown in Figure 4, which shows that the particle movement speed is kept under control and does not exceed the intended range; the selected inertia weight formula is therefore appropriate. The PSO algorithm in this case runs for a total of 50 iterations; the fitness value versus the number of iterations is shown in Figure 5.
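A hedged sketch of the fitness evaluation used inside Steps 2-4 above is given next: solve the Riccati equation for a candidate (Q, R), obtain K, and score the closed-loop response by the quadratic index J. The A and B matrices are illustrative placeholders, not the robot parameters of Table 1.

```python
# Minimal sketch of the PSO fitness: candidate diagonal weights -> LQR gain
# via the continuous algebraic Riccati equation -> quadratic cost J of a
# closed-loop rollout. A and B are placeholder linearized dynamics.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0, 1, 0, 0],
              [0, 0, -1.0, 0],
              [0, 0, 0, 1],
              [0, 0, 20.0, 0]])               # placeholder inverted-pendulum-like model
B = np.array([[0.0], [1.0], [0.0], [-2.0]])

def lqr_gain(q_diag, r):
    Q, R = np.diag(q_diag), np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)        # K = R^{-1} B^T P

def fitness(particle, x0=np.array([0.1, 0.0, 0.05, 0.0]), dt=0.001, steps=5000):
    q_diag, r = np.abs(particle) + 1e-6, 1.0  # keep Q positive semidefinite
    K = lqr_gain(q_diag, r)
    x, J = x0.copy(), 0.0
    for _ in range(steps):                    # forward-Euler closed-loop rollout
        u = -K @ x
        J += (x @ np.diag(q_diag) @ x + float(u @ u) * r) * dt
        x = x + (A @ x + (B @ u).ravel()) * dt
    return J                                  # smaller J = better particle
```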
As seen from Figure 5, the Q and R parameters reach their optimum after 21 iterations. The global optimal solution is obtained through the particle swarm algorithm program. The dynamic response curves of the two algorithms, the plain LQR controller and the LQR controller based on the particle swarm algorithm, are shown in Figure 6.

The simulation curves in Figure 6 show that, under the given initial conditions of the self-balancing robot, both the plain LQR control and the LQR control after particle swarm optimization can make the system stable. However, the latter has the advantages of a shorter settling time, less overshoot, and fewer oscillations. To verify that the PSO-optimized LQR algorithm outperforms the LQR algorithms in the literature [4] and [7], the comparison of the LQR algorithms and the control indicators of each algorithm are shown in Table 2. According to these results, all three algorithms can achieve the stability of the system, but the proposed algorithm has clear advantages in overshoot and oscillation frequency. That is because the algorithm can find the optimal solution of the LQR matrices Q and R.

Conclusion

Using the particle swarm algorithm to optimize the selection of the weighting matrices Q and R can overcome the blindness of selecting Q and R in traditional LQR optimal control. This paper exploits the intelligent search, gradual optimization, and rapid convergence of the particle swarm optimization algorithm, which does not easily fall into local optima and is easy to implement. On the basis of the linear model of the two-wheeled self-balancing robot, the global optimal solution of Q and R is obtained, achieving the optimal LQR controller design, as verified through MATLAB simulation experiments. The response of the resulting optimal LQR controller is faster, with less overshoot, and the steady-state error is kept at zero, so the control effect is better.

Figure 2: Force analysis of the car body.
Figure 5: Iterative optimization of Q and R based on the PSO algorithm.
Figure 6: Angle response of the self-balancing robot with PSO-optimized LQR control.
Table 1: The parameters of the robot.
Table 2: Comparison between the other methods and the proposed method.
2018-12-22T03:52:36.890Z
2014-06-11T00:00:00.000
{ "year": 2014, "sha1": "4eeb8fb1c40655d19ac1b3da08c546dde65abe65", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2014/729095.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4eeb8fb1c40655d19ac1b3da08c546dde65abe65", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
54835653
pes2o/s2orc
v3-fos-license
Doppler fading communication channel performance simulation Fading is commonly used to describe the properties of the communication channel. Large efforts have been made to describe the characteristics of the channel in wireless communication systems. The performance of wireless signal propagation in a conventional environment requires Doppler fading channel schemes, assuming perfect knowledge of the channel frequency response at both the transmitter and the receiver sides. Two models were used to depict the characteristics of the Doppler fading channel in terms of source velocity. The first is a free-space transmission model consisting of two antennas at a given distance (r), used to calculate the received signal strength for transmission links with relative motion between the transmitter and receiver antennas. The second is a simulation based on Matlab (2013B) to compute and plot the received signal envelope, taking into account the source velocity over the multipath fading. The analysis and use of the Doppler fading information can enhance the characteristics of channel estimation.

INTRODUCTION

The relative motion between the transmitter and receiver introduces a Doppler shift. It affects the received signal frequency and causes frequency broadening. The transmission radio link between the transmitter source and the destination receiver varies from the flat-earth model to multipath propagation mechanisms in which the electromagnetic waves are severely obstructed by mountains, high buildings, and skyscrapers (Bernard, 1997; William, 1974; Proakis and Masaoud, 2008; Rappaport, 1999; Clarke, 1968; Dent and Bottomley, 1993). The different mechanisms of multipath propagation, such as diffraction, scattering, reflection, and refraction (Keller, 1962), are important factors in creating multiple propagation paths.

Fading can be defined as the fluctuation in the received electromagnetic wave as a consequence of multipath signal components. Many different replicas of the received electromagnetic wave can arrive at the receiving end. These replicas come from different paths and interfere constructively or destructively according to their amplitude, phase, and time delay.

Fading can be classified into fast fading or slow fading. In addition, fading can be classified as flat or frequency selective. However, fast fading draws more attention from communication engineers because the fluctuations may cause dramatic problems in communication system reliability (Kumar and Grover, 2012; Wang, 2011).

There are two models to determine the instantaneous value of signal strength: the large-scale model, which is used to calculate the average power of the received signal based on the distance between the transmitter and the receiver, and the small-scale fading model, which is used to calculate the fluctuations of the average power of the received signal (Rappaport, 1999).

The high-speed railway is a good example of a commercial practical application of the global demand in modern communication systems (Fan et al., 2012).
At the receiver, the reduction of the delay time at different mobile speeds is obtained by applying variable coding or updating the power density (Abdi et al., 2008; Zhang and Abdi, 2009); moreover, the Doppler spread is used to decrease surplus handoffs (Stuber, 2011). A special autoregressive fading channel was suggested in (Nissila and Passpathy, 2006). The restrictions that cause path loss and multipath, such as noise, distortion, and intersymbol interference, are results of the relative motion between the transmitter and receiver (Goldsmith, 2005). There are many schemes for Doppler spread estimation, such as the level crossing rate (Chen et al., 2011), covariance (Huang et al., 2013), power spectral density (Muralidhar et al., 2007), and maximum likelihood (Bellilil and Affes, 2013). Fading models depend on the assumption that the total fading arises from a set of uncorrelated partial signals which have identically distributed amplitudes and uniformly distributed random phases (Xiao et al., 2003, 2006).

Finally, we list two important factors that have significant impacts on multipath wave propagation: the relative movement between the transmitter and receiver, and the signal bandwidth (Matthias, 2002).

Experimental details

The experimental setup shown in Figure 1 consists of a microwave function generator (Gunn oscillator), a horn antenna, an E-field probe, an oscilloscope, two coaxial cables with BNC/BNC connectors, a metal plate with moving parts, and stand bases. The horn antenna is connected to the Gunn oscillator by a 2-m coaxial cable. The E-field probe functions as a separate mixer and is positioned at a distance of approximately 3 cm in front of the horn antenna. The metal plate is moved in the range of 20 cm to 23 cm in steps of 2 mm. The received voltage on the oscilloscope for each step is stored in a data file.

The relation between the reflected voltage and the distance between the metal plate and the horn antenna is shown in Table 1. The received voltage varied in a sinusoidal form along the distance from 190 mm to 230 mm.

In the setup, the calculation of the Doppler frequency depends on the metal plate velocity, which was 100 cm/s, and the half wavelength λ/2, which was equal to 1.6 cm, i.e., the distance between two adjacent fades; the Doppler frequency is therefore fd = Vr/λ (Rappaport, 1999). Here T is the time duration over which the channel's response to a sinusoid has a correlation greater than 0.5, Vr is the relative velocity, and fd is the Doppler frequency.

Simulated details

The fading loss is the combination of two factor effects: the first is multipath propagation and the second is the Doppler frequency shift. These factors produce random fluctuations in the received signal at the receiver side. The mathematical model for the envelope of the received signal (R) under fading loss depends on the parameters in the following equations (Haykin and Moher, 2005), in which three random variables a_n, φ_n, and θ_n appear: the a_n are the amplitude coefficients, which are Gaussian distributed; the phase coefficient φ_n and the nth multipath arrival angle θ_n with respect to the direction of motion are assumed to be uniformly distributed, such that 0 ≤ φ_n, θ_n ≤ 2π. N is the number of scatter paths, fd is the Doppler frequency shift, and R is the signal envelope.
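A minimal simulation sketch of this envelope model is given below. Because the equation layout was only described in words above, the code follows the standard sum-of-scatterers form R(t) = |Σ_n a_n exp(j(2π f_d cos θ_n t + φ_n))|, with Gaussian amplitudes and uniform angles and phases as stated in the text; the sample counts are illustrative.

```python
# Minimal sketch of the received-envelope model described above (a
# Clarke-style sum of scattered components). Gaussian amplitudes a_n and
# uniform angles/phases follow the text; each scattered path sees a
# Doppler shift fd*cos(theta_n).
import numpy as np

def fading_envelope(t, n_paths=20, fd=62.5, sigma=0.001, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    a = rng.normal(0.0, sigma, n_paths)            # Gaussian amplitudes a_n
    phi = rng.uniform(0.0, 2 * np.pi, n_paths)     # uniform phases phi_n
    theta = rng.uniform(0.0, 2 * np.pi, n_paths)   # uniform arrival angles theta_n
    phase = 2 * np.pi * fd * np.cos(theta)[:, None] * t[None, :] + phi[:, None]
    return np.abs(np.sum(a[:, None] * np.exp(1j * phase), axis=0))

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 10_000)              # 1 s observation window
    for n in (5, 10, 15, 20):                      # scatter counts used in the paper
        r_db = 20 * np.log10(fading_envelope(t, n_paths=n) + 1e-12)
        print(f"N={n:2d}: mean={r_db.mean():7.2f} dB, min={r_db.min():7.2f} dB")
```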
For the simulation, the standard deviation is taken as 0.001; the Doppler fading shift at the receiving site, with which the received signal is shifted with respect to the transmitted frequency, is therefore 62.5 Hz when the vehicle velocity (V) is 9.375 m/s, the carrier frequency (f) is 2 GHz, and the speed of light (C) is 3 × 10⁸ m/s, as given in Equations 5 and 6. It is necessary to mention that the wavelength in this method, which is equal to 0.15 m, differs from the wavelength in the practical method as a result of our selection of different relative velocities in the two cases.

RESULTS

Figure 2 shows the relationship between the received voltage and the distance as it changes from 195 mm to 235 mm. The distance travelled by the metal plate (mobile) in the time interval corresponding to two adjacent dips (small-scale fading) is on the order of a half wavelength. Moreover, from Figure 2 and Equation 5, the coherence time is that required to traverse a distance of λ/2 when travelling at constant mobile velocity.

Using the Matlab program with the parameters that define the Doppler fading, the received signal results are drawn, as demonstrated in Figures 3, 4, 5, and 6. The figures depict the received voltage in (dB) as a function of time. It is useful to compare the received signal envelopes for transmission of the signal from source to destination taking into account the multiplicity of paths toward the destination, known as multipath propagation, which is the most crucial type of transmission link problem. The scattered signals resulting from collisions with obstacles are received at the destination side from different paths: first the signal from the direct path, followed by the reflected signals.

DISCUSSION

In the simulation, the figures illustrate the effect of increasing the number of multipath fading components on the received signal. It is clear that the amount of signal fading increases as the number of scatter paths increases from N = 5 to N = 20 in steps of 5. We have set the threshold voltage to −55 dB; thus, increasing the number of scatterers causes the envelope of the received signal to go below the threshold level. On the other hand, the outage probability can be introduced as the ratio of the number of samples under the threshold level to the total number of samples, as sketched below. It is possible to increase the outage probability by increasing the threshold power. The degradation of the received signal is a consequence of dynamically changing multipath and Doppler effects.
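The outage-probability estimate just described reduces to a one-line ratio; a hedged sketch with stand-in envelope samples follows (the −55 dB threshold matches the value quoted above, and the fading_envelope() generator from the previous listing could supply real samples).

```python
# Minimal sketch of the outage-probability estimate: the fraction of
# envelope samples that fall below a threshold level. The stand-in
# Gaussian samples are illustrative only.
import numpy as np

def outage_probability(envelope_db, threshold_db=-55.0):
    """Ratio of samples below the threshold to the total number of samples."""
    samples = np.asarray(envelope_db)
    return np.count_nonzero(samples < threshold_db) / samples.size

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake_env_db = rng.normal(-45.0, 8.0, 100_000)   # stand-in envelope samples
    print(f"outage probability ~ {outage_probability(fake_env_db):.4f}")
```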
Conclusion Since Doppler fading is one of the major restrictions in transmission systems, we have implemented an experimental setup of a free-space transmission link and a Matlab simulation model for Doppler fading. We conclude that the number of scatter paths affects the signal: increasing the number of scatter paths distorts the signal due to the interaction of the many copies received at different times. The Doppler frequency affects the received signal through pulse mutilation, an irreducible bit error rate, and dispersion. The difference between the spectra in the figures is based on the fact that the theoretical spectrum assumes that the number of scatterers is sufficiently large for the Central Limit Theorem to apply; moreover, there is then a true Gaussian distribution of the scattering amplitudes and a true uniform distribution of the angles. In future work, we will report the development of a generic model for other types of fading, such as Rayleigh, Rician, Nakagami-n, and Nakagami-q.

Figure 2: The relation of the received voltage with respect to the change of the distance.
Table 1: The reflected voltage corresponding to the change of distance.
2018-12-11T12:36:08.587Z
2017-04-22T00:00:00.000
{ "year": 2017, "sha1": "d829395f57795c77b6d8c981c2e1f0192186d985", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/IJPS/article-full-text-pdf/49F67E464184.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d829395f57795c77b6d8c981c2e1f0192186d985", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225508889
pes2o/s2orc
v3-fos-license
Hybrid Electric Vehicle Characteristics Change Analysis Using Mileage Interval Data : In this work, the relationship between the accumulated mileage of a hybrid electric vehicle (HEV) and the data provided by vehicle parts has been analyzed. Data were collected while traveling over 70,000 km on various paths. The collected data were aggregated over 10-min intervals and characterized in terms of centrality and variability. It has been examined whether the statistical properties of vehicle parts are different for each cumulative mileage interval. When the cumulative mileage is categorized into the intervals 30,000-50,000, 50,000-60,000, and 60,000-70,000 km, the statistical properties contributed to classifying the mileage interval with accuracies of 92.68%, 82.58%, and 80.65%, respectively. This indicates that if the data of the vehicle parts are collected by operating the HEV for 10 min, the cumulative mileage interval of the vehicle can be estimated. This makes it possible to detect abnormality or characteristics change in the vehicle parts relative to the accumulated mileage. It also can be used to detect abnormal aging of vehicle parts and to indicate maintenance necessity. Furthermore, a part or module that has a significant change in characteristics according to the mileage interval has been identified.

Introduction

There has been continuous development and distribution of high-performance electric vehicles, including smart cars featuring artificial intelligence and partial self-driving functions. These are based on innovative technologies in various fields, such as semiconductor and information communication technology. These technologies, in combination with social demands, have led to the progress of a hyperconnected society, resulting in changes in perception of the economy, society, and culture. A sharing economy represents these changes, and new business models must be developed for new vehicle-sharing systems in large cities. These changes will continue to be incorporated in innovations in the automotive industry, related upstream and downstream industries, and automobile culture in the future. The scope of innovation in the automotive industry will be expanded to cars owned by corporations as well as those owned by individuals. It will include an effective and safe maintenance and management system, so that periodic maintenance and post-repair management will be transformed into preventive maintenance and predictive management. Therefore, techniques such as condition-based maintenance (CBM) and prognostic health monitoring (PHM) are gradually being applied to the automotive industry.

Besides CBM and PHM, various other methodologies and approaches have been discussed to predict the condition of cars based on real driving data. Because it is physically limited to save, transmit, and analyze all types of data for a car in real time, it is feasible to transmit only the data analyzed through self-examination. Hence, the development of a basis for self-examination and the derivation of a standardized analysis method using real driving data are crucial. With the commercialization of electric vehicles, including HEVs and smart vehicles capable of partially autonomous driving, the complexity of vehicle functions and structures is increasing. Therefore, various studies have been conducted to monitor and predict the conditions of vehicle parts, modules, and systems, as well as the driver's condition, for driving safety and maintenance.
For example, since the health level of a HEV battery is complexly linked to a driving environment and various parts, a new analysis method based on data is required. It was difficult to collect data from vehicles in real-time, and hence, the data of vehicles based on the simulation were significantly different from the actual values. Nowadays, many studies are using the real driving data of vehicles. However, the studies on HEVs are in the beginning stage. There were few attempts to identify the car mileage by using the in-vehicle controller area network (CAN) data. The US government has been conducting a project called Advanced Vehicle Testing Activity, which studies the mutual interconnection method between plug-in hybrid electric vehicles (PHEVs) and electric grids, evaluates vehicle monitoring, and facilitates the commercialization of PHEVs [1]. The US Department of Energy has been performing real-time monitoring and data analysis of hybrid cars and fuel cell vehicles through the Hydrogen Program and HyFLEET-CUTE Project [2,3]. Rezvanizaniani et al. [4] provided an overall review of PHM technology and practical solutions, including hybrid electric battery life prediction. You et al. [5] and Guo et al. [6] proposed an efficient method to estimate the state of a hybrid electric battery with high accuracy, using in-vehicle data. Mi et al. [7] estimated and verified the SOH of a hybrid electric battery using in-vehicle data and a genetic algorithm. Laayouj et al. [8] conducted a study to predict the remaining useful life of a hybrid electric battery using physical models and in-vehicle data. Wakita et al. [9], Miyajima et al. [10], Nishiwaki [11], and Kwak et al. [12] studied the discrimination of drivers by collecting the telecommunication data of vehicle parts. Given that the data value of vehicle parts varies according to the driver's driving pattern, they constructed machine learning models to discriminate drivers. Meng [13] modeled drivers' behavior changes by using a probabilistic model based on the data obtained from accelerators, brakes, and steering wheels, and predicted the users' driving patterns. Choi [14] proposed a model to detect drivers' carelessness using CAN data. Wahab [15] found that the pattern of stepping on the accelerator and the pressure of stepping on the brakes are prominent factors in determining drivers' profiling. Kedar-Dongarka [16] classified the drivers' characteristics into conservative, neutral, and aggressive by using the data retrieved from accelerators, brakes, and gears. Enev et al. [17] classified the road condition into the driving section and parking section based on the position of brake pedals, angle of steering wheel, long-term acceleration, speed of rotation, speed of driving, gear shifting, position of accelerator pedals, speed of engine, maximum engine torque, fuel consumption rates, and position of fuel control valve. In this study, certain criteria for determining the vehicle condition according to the mileage have been developed by analyzing long-term driving data of a HEV. We also utilized a platform for collecting real-time driving data provided by an on-board diagnostics II (OBD-II) port for the long term. The driving data of major parts, such as motors, inverters, and high-voltage batteries, of the test HEV in terms of seasons and paths have been collected by using a real-time driving monitoring system. The changes in the conditions of the parts and modules according to the cumulative mileage-based collected data have been analyzed. 
A model has been constructed for identifying the mileage interval by using these variables and identified a part or module that has a significant change in characteristics according to the mileage interval. This study can be used for estimation of aging status, detection of malfunctions, and lifetime prediction of parts and modules by classifying and identifying the change in characteristics according to the cumulative driving mileages of hybrid cars. Furthermore, this study can be used for the development of optimized efficient HEV control strategies, and for the preventive maintenance of shared cars by monitoring characteristic changes of the parts and modules according to the mileage with the real driving data. Data Collection System To obtain the driving data of a hybrid car, a real-time driving monitoring system has been utilized, which can monitor and save various data of the driving car on normal roads, rather than test roads, through wireless communication. With this system, the data of driving cars and driving circumstances can be collected in real-time and control signals from the car and data of the detailed parts can be obtained through the OBD-II diagnosis port. Figure 1 shows the configuration and operation concept of the real-time driving monitoring system, which comprises three parts. The first part is a vehicle information-collecting device, which obtains the driving data of cars and parts. The second part is a real-time wireless transmission terminal, which receives the data from the vehicle information-collecting device, transmits the data to the server, and sets monitoring items and devices according to the setting received from the server. The third part is a machine-to-machine (M2M) network platform for collecting the transmitted data and indicating through a domestic mobile communication network and an operation center for saving the driving data and monitoring the driving condition of the test vehicle. The data collected from the vehicle are transmitted to the server of the operating institutes after encryption and compression in real-time, through an M2M mobile communication network. The transmitted data are systematically saved in each database of the project according to the vehicles and provide the driving data of cars and parts and analysis of circumstances for user needs. Test Vehicle and Data Specification The following are the requirements for selecting a test vehicle. First, among commercialized hybrid cars, a vehicle's reliability and safety should already be verified through various evaluations. This means that a certain period has passed after commercialization. Second, the vehicle should have a significant number of sales for further research. Finally, a vehicle that provided diagnosis data by OBD-II was selected. Table 1 shows the specifications of a selected Toyota Prius III hybrid car, which is a five-seater passenger car of full hybrid drive type with a 60-kW electric motor and a 1.8-L gasoline engine. Among the data collected from the test vehicles, only the diagnosis data provided by OBD-II, without separate sensors or DAQ have been utilized. In this study, only the data of physical parameters, such as speed, torque, temperature, voltage, and electric current, as well as some control target values, were significant. Thus, simple status signals, such as control signals and On/Off, were excluded. Additionally, information data, such as error codes for vehicle diagnosis and maintenance, were eliminated. 
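For readers who want to experiment with OBD-II data collection on a smaller scale than the M2M platform described above, a minimal sketch with the third-party python-OBD library is given below; this library is our assumption for illustration and is not the logging device used in the study, and the chosen PIDs are examples only.

```python
# Minimal sketch of polling diagnostic data over an OBD-II port with the
# third-party python-OBD library (illustrative; the study used a custom
# vehicle information-collecting device and M2M transmission instead).
import obd

connection = obd.OBD()                      # auto-connects to a serial OBD-II adapter
pids = [obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP]

for cmd in pids:
    response = connection.query(cmd)        # one snapshot per PID
    if not response.is_null():
        print(f"{cmd.name}: {response.value}")
```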
Driving Condition While distribution by region is significant in a fleet test for over 100 vehicles, it is vital to set up a proper driving scenario for the one target hybrid car in this study. The driving scenarios were set up to reflect the hybrid effect through repetitive driving on highways, city roads, and combination zones. As there are differences per country between statistics such as average driving distance per day and parking time, we set up and realized a real experiment environment by achieving driving conditions that approximately met the Korean national statistics. The parking space was unified as outdoor parking. In each driving test per zone, the hybrid mode was set to the general Normal Mode, and the number of passengers was set to one identically. The test vehicle drove at a legal speed limit, and the traffic conditions, such as road congestion, were not considered apart. To reflect seasonal effects, such as summer and winter, and weather effects, such as snow and rain, data have been collected over five years and for a total distance of 70,000 km. The average driving temperature was 32-36 • C in summer and −5-2 • C in winter. Table 2 listed the diagnosis data provided by OBD-II of a test vehicle. Figure 2 shows the main structure of the chosen hybrid electric test vehicle, where the internal combustion engine and two motors generate optimal driving torque and regenerative braking torque according to the driving situation. MG1 is a generator connected to the engine and supports MG2 through an inverter. MG2 uses the energy of the HEV battery to drive the vehicle. The engine and MG1/MG2 motors perform an optimized hybrid driving strategy by controlling the energy flow in conjunction with the compound gear unit. Methodology Here, it has been studied whether it is possible to determine the mileage interval of a HEV, depending on the fact that the driving data pattern varies according to the cumulative mileages. Through this study, a vehicle's parts whose statistical properties change drastically according to the mileage intervals can be found and a mileage interval where the characteristics of parts change considerably can be discovered. An abnormal status of a HEV can be predicted based on these. Alternatively, these can be used as a basis for comparative analysis. Data Exploration To determine the change in the value for the main parts according to the mileage, we analyzed the characteristics of the auxiliary battery per cumulative mileage. As shown in Figure 3, as the mileage increased, the average voltage level decreased, and the width of the voltage fluctuation increased. In addition, the battery temperature per hour increased as well. Thus, it has been found that the parts or modules of the hybrid car had different characteristic values per mileage. According to the mileage, the average value could vary, and the variability could be increased. Based on these observations, we derived features that can deduct the difference in part characteristics per mileage. Features The characteristics of data distribution are measured using factors such as centrality, variability, and normality. Centrality is evaluated from the mean, median, and mode. Variability of the data is measured from the maximum, minimum, mean, range, and variance. For determining the normality, skewness and kurtosis are used to measure whether the data follow a normal distribution. Because the aims of this study are not a prediction of distribution or statistical verification, the normality is excluded. 
We derive diverse features in terms of centrality and variability. The mean was used to measure centrality, and the minimum, maximum, and standard deviation were used for variability. Because the vehicle part data can be reported randomly within a specific range, the mean is more significant than a particular value with high frequency. As the range is derived from the maximum and minimum, and the variance is derived from the standard deviation, these two variables were excluded to reduce the dependency between the variables.

After segmentation of the data collected in seconds, the variability of the vehicle part characteristics was measured within a specific interval. For obtaining the features of the data, the interval was set to 10 min. As this choice depends on the decision maker, we set it to 10 min considering both the length of the interval and the quantity of data. If a long interval is set, the variability of the data can be measured over a sufficient period; on the other hand, because only particular driving data were used, the amount of data collected within an interval could be reduced. For example, when driving data were collected for one hour, features for six intervals were derived using the 10-min interval. (A sketch of this windowed aggregation is given below.)

Classifier

In this study, an algorithm was built to derive important features to determine the mileage of a hybrid, using the random forest algorithm. This algorithm is a type of ensemble learning method extended from the decision tree model, which builds an ensemble of trees with sampled data and selected feature sets [18]. Each tree, with a different dataset and feature set, increases the prediction accuracy of the detection model. The random forest algorithm was adopted because it can avoid overfitting and can generate feature importances from the built decision trees. While generating a model, the random forest splits the dataset, builds each tree from the selected data, and tests it against the unselected data. This testing process with data unused in training provides the out-of-bag error, which gives the random forest algorithm a chance to avoid overfitting, i.e., the situation in which the model fits the training samples well but does not fit the test samples.

The importance of features is measured in terms of the Gini index. The Gini index is calculated by subtracting the sum of the squared probabilities of each class, in our case the mileage interval, from one:

Gini(x, k) = 1 − Σ_c p_c²,

where x is a feature, k is an attribute of x, and p_c is the probability of class c within attribute k. The Gini index takes the highest value when all classes have the same probability, while it takes a low value when a certain probability dominates the others. The algorithm splits the dataset along the attributes of a certain feature, calculates the probability of the classes within each attribute, and generates the summation of the probabilities. A variable whose attribute levels can split the data into the classes gives a high Gini index.

Table 3 shows the data collected for each mileage interval after transforming the data in seconds into 10-min intervals. We visualized the characteristics of the statistical values for the voltage (VB), current (IB), temperature (Temp. of Batt.), and state of charge (SOC) of the battery, which is a main part of a hybrid car. Figure 4 shows the distributions of the statistics per mileage interval. As the mileage increases, the average and maximum of the statistics converge to particular values, while the deviation increases.
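The windowed aggregation described above can be sketched as follows, assuming the per-second records are held in a pandas DataFrame indexed by timestamp; the column names and synthetic values are illustrative assumptions.

```python
# Minimal sketch of the 10-min feature aggregation: per-second signals
# become mean/std/min/max features per window (34 parts x 4 statistics
# would yield the 136 features used in the paper).
import numpy as np
import pandas as pd

def window_features(df: pd.DataFrame, window: str = "10min") -> pd.DataFrame:
    """Aggregate per-second signals into centrality/variability features."""
    agg = df.resample(window).agg(["mean", "std", "min", "max"])
    agg.columns = [f"{col}_{stat}" for col, stat in agg.columns]  # flatten names
    return agg.dropna()  # drop windows without enough data

if __name__ == "__main__":
    idx = pd.date_range("2020-01-01", periods=3600, freq="s")   # one hour at 1 Hz
    rng = np.random.default_rng(0)
    raw = pd.DataFrame({"battery_voltage": 14.2 + 0.2 * rng.standard_normal(3600),
                        "battery_temp": 30.0 + rng.standard_normal(3600)}, index=idx)
    feats = window_features(raw)       # 6 rows: one per 10-min window
    print(feats.head())
```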
In agreement with the results of the analysis, the statistics in this study had a different distribution according to the mileage interval, and thus, were suitable variables for checking the changes in the part characteristics per mileage interval. Using data from 34 parts, we retrieved 136 features, including mean, standard deviation, maximum, and minimum, as the learning data. Algorithm Performance The random forest provides the out-of-bag errors by testing the built model from selected data against the unselected data, while generating a model. The results of the learning model are shown in Tables 4-6. The evaluation of the learning model used the accuracy evaluation index. We first set up the mileage interval into four levels, such as 30,000, 50,000, 60,000, and 70,000, and then, set up the binary classification problem for each pair of two consecutive levels. The learning results are shown in Tables 4-6. Since, in this study, we experimented with driving until 71,000 mileage, we must collect additional driving data on the mileage, and find an interval where the characteristics of a part show differences. Overall, the algorithm exhibited degraded performance in detecting the 60,000 km class. For three pairs of checkpoints, the algorithm exhibited good performance; however, as the mileage increased, the accuracy rather decreased. These results indicate that the characteristics of car parts change at the checkpoints. Feature Analysis After building an algorithm that classified the mileage by the characteristic values of hybrid car parts, important variables that were used for the construction of the algorithm and the relation between them has been examined. The statistical values of the critical variables change clearly according to the mileage, and thus, should be checked per mileage interval. Feature Importance The following shows the importance of the variables that worked mainly in differentiating the mileage interval. In the random forest algorithm, the importance of variables is derived by incNodePurity, which is measured by an impurity of nodes that is reduced after branching a tree by a specific variable. The impurity is calculated by the summation of the residual products of each node. The importance of the variables is visualized in Figure 5. The first figure represents the feature importance in recognizing the change of mileage interval from 30,000 to 50,000 km. In this case, 'intake air temperature mean' has the highest value in importance. The scree point is found when the slope of importance decrease has changed as features are arranged according to the Gini index (the intercept point between the orange and the blue lines in Figure 5a). Key features from the highest Gini index to the point of this scree point are selected. Table 7 summarizes the important variables with high Gini index before the scree point. The results indicate that the characteristics of vehicle parts that significantly change depend on the mileage interval. When the mileage interval shifts from 30,000 to 50,000 km, the intake air temperature and inverter temperature for MG1 and ambient temperature, are mainly changed. For the shift of mileage interval from 50,000 to 60,000 km, ambient temperature no longer exhibits differences, but the intake air temperature and inverter temperature for MG1 show big differences between two intervals. For the shift from 60,000 to 70,000 km, the changes in the auxiliary battery voltage and the inhaling air temperature are newly observed. 
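A minimal sketch of the classifier and the Gini-based ranking discussed above, using scikit-learn's random forest with out-of-bag scoring; the synthetic features and labels stand in for the real 136-feature dataset and its mileage-interval labels.

```python
# Minimal sketch of the mileage-interval classifier and Gini-importance
# ranking with scikit-learn. Synthetic data stand in for the aggregated
# 10-min window features; labels mimic a pair-wise interval comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 136))                 # 136 window features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # toy interval labels

clf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
clf.fit(X, y)

print(f"out-of-bag accuracy: {clf.oob_score_:.4f}")
ranking = np.argsort(clf.feature_importances_)[::-1]
for i in ranking[:5]:                                # top features by Gini importance
    print(f"feature {i:3d}: importance {clf.feature_importances_[i]:.4f}")
```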
The analysis results show that the major temperature-related variables are closely related to the mileage interval. This is because the change in mileage interval reflects the characteristics of the driving environment, such as the atmospheric temperature according to the season. This is a self-evident result in real driving situations. At the same time, it also clearly shows that the temperature change characteristics of the system and module by the thermal management strategy are linked to mileage. This is because the stresses of automotive parts and modules are related to temperature and the thermal management system controls the temperature in response. Since the voltage of the battery is affected by aging and thermal management of the battery, it is obvious that the battery voltage is also affected. Change in Major Part Characteristics The cumulative distribution function (CDF) of major variables that exhibit a high Gini index so that the major changes are found within the intervals of 30,000-50,000, 50,000-60,000, and 60,000-70,000 km is displayed. The CDF plot illustrates how the values of a variable compose the entire dataset by displaying the cumulative density probability. Through the CDF plots, it can be observed how the features form the distribution. For example, the mean of the auxiliary battery voltages (denoted as Auxiliary.Battery.Vol.mean) has various values for 30,000 km, but a limited value near 14.4 for 50,000 km, which is higher than the value for 30,000 km. When comparing 30,000 and 50,000 km, major features exhibit much different distributions. The intake air temperature, ambient temperature, and inverter temperature show high skewness for 30,000 km, but the range of values increases for 50,000 km, as shown in Figure 6a. Figure 6b,c show the CDF plots of important variables that have been changed significantly in the 50,000-60,000 km and 60,000-70,000 km mileage intervals. In this interval, a change in several parts occurred compared to the other intervals. The most important variables included auxiliary battery voltage and temperature, engine intake temperature, motor inverter temperature, and battery intake temperature. Considering the principal parts whose characteristics vary according to the mileage interval, changes in the battery intake temperature and auxiliary battery temperature, followed by the voltage and temperature of the battery pack can be found. Conclusions To trace major changes in the parts of a hybrid car with increasing mileage, which has not been investigated enough, we collected real driving data over 70,000 km with various paths under various conditions. We collected data provided by OBD-II injected into CAN. Among the collected data, we selected significant data of physical parameters and analyzed them. After aggregating CAN data collected in seconds into those in ten minutes, we measured centrality and variability, and verified whether these statistic features vary according to the mileage interval of a hybrid car. We set up the checkpoints as 30,000, 50,000, 60,000, and 70,000 km, and performed a pair-wise comparison using a machine learning algorithm. The statistical properties are classified by the mileage interval with accuracy of 92.68%, 82.58%, and 80.65%, respectively. The high accuracy means that the correlation with the mileage section can be estimated by analyzing the driving data for a certain time (10 min) of the hybrid vehicle. 
In addition, we found that the statistics of the data per part do not increase or decrease consistently in a mileage interval. Contrary to the fact that the mean and maximum of part data converge to a specific value, the deviation seems to increase. For the battery, which is the main part of a hybrid car, it is found that the voltage (VB), current (IB), temperature (Temp of Batt), and amount of charge (state of charge) of the battery decrease at the mileage interval from 30,000 to 50,000 and increase again at that over 50,000 km. However, as the mileage increases, the deviation in values increases further and gives a low accuracy in classifying the mileage interval. By utilizing these analysis methods and the results, it can be used for maintenance strategies according to the prediction of aging of vehicles or parts. In addition, it is expected to be used for stress factor analysis and PHM to improve reliability through mileage accumulation and correlation analysis with major physical variables. In this study, we performed a fundamental analysis through the analysis of data from a vehicle. To develop various service models based on the exact changes in car parts according to the driving mileage, the data analysis should be extended to multiple vehicles so that a reference model is developed based on that data. In the future, we will expand the analysis against other types of hybrid vehicles such as plug-in hybrid. With continuous research on a reference model, it is expected to be used for a customized maintenance per vehicle according to the driving mileages.
2020-08-13T10:07:40.819Z
2020-08-10T00:00:00.000
{ "year": 2020, "sha1": "ff6398353214c89320f9501445b2f416cdeec6f6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/10/16/5533/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2e3bc94c355a870c9de15db602514b00ab22c6f1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
260526554
pes2o/s2orc
v3-fos-license
Molecular Dynamics Simulation of Nanoscale Channel Flows with Rough Wall Using the Virtual-Wall Model Molecular dynamics simulation is adopted in the present study to investigate the nanoscale gas flow characteristics in rough channels. The virtual-wall model for the rough wall is proposed and validated. The computational efficiency can be improved greatly by using this model, especially for the low-density gas flow in nanoscale channels. The effect of roughness element geometry on flow behaviors is then studied in detail. The fluid velocity decreases with the increase of roughness element height, while it increases with the increases of element width and spacing.

Introduction

Micro/nano-electromechanical systems (MEMS/NEMS) have received considerable attention over the past two decades. Fluid flows are usually encountered in these systems [1-3]. Fluid transport and interaction with these systems serve an important function in system operations [4]. Understanding the behaviors and manipulations of fluids within nanoscale confinements is significant for a large number of applications [5-7].

The effect of the wall serves as a distinct feature of fluid flow in micro/nanoscale-confined devices [8-10]. The wall plays an increasing role in fluid flow as the characteristic length scale of the flow decreases. Barisik and Beskok found that, in a channel with 5 nm in height, 40% of the channel is immersed in the wall force field [11]. Therefore, the fluid transport characteristics, such as momentum and energy, significantly deviate from the predictions of kinetic theory [11]. Therefore, the effect of this near-wall force field on the nanoscale channel flow must be understood and evaluated.

Molecular dynamics (MD) simulation investigates the interactions and movements of atoms and molecules using N-body simulation [12]. This method has been employed by many researchers in the past to study liquid flow in nanochannels [13-16]. Recently, MD simulation has also been adopted to investigate gaseous flow in nanoscale-confined channels [11,17-19]. Barisik and Beskok [11,17] investigated shear-driven gas flows in nanoscale channels to reveal the gas-wall interaction effects for flows in the transition and free molecular regimes. Hui and Chao [18] studied gas flows in nanochannels with the Janus interface and found that the temperature has a significant influence on the particle number near the hydrophilic surface. Recently, Babac and Reese [19] investigated classical thermosize effects by applying a temperature gradient within different-sized domains.

In some MD simulations, idealized-wall models are considered. The interactions of fluid-wall atoms are usually modeled as functions, for example, the diffuse and specular reflections, Maxwell's scattering kernel [20], or the Cercignani-Lampis model [21]. These idealized-wall models are feasible in some specific situations. However, when we study the detailed flow behaviors in the near-wall region, the atomic-wall model must be considered. But the atomic-wall model is expensive both in computational time and in memory. In confined channel flows, many atoms are required to describe the atomic wall. The number of wall atoms is much larger than that of fluid molecules. This drawback is particularly fatal for gas flow. For example, Barisik et al. [22] studied a nanoscale Couette flow at Kn = 10.
The simulation box is 162 nm × 3.24 nm × 162 nm. In their study, the number of gas molecules is 4900, while the number of wall atoms is 903003. As a result, most of the computational resources are consumed on the computation of wall atoms. Recently, Qian et al. [23] proposed a virtual-wall model for the MD simulation to reduce the computing time. The unit cell structures repeat infinitely in the atomic wall. As a result, the force on a fluid molecule from wall molecules is periodic. This force was first calculated and stored in memory. During the simulation, when a fluid molecule moves into the near-wall region, the force on this fluid molecule from wall molecules can be determined directly, according to the position of the molecule relative to the wall. The near-wall region here refers to the region near the wall within a distance smaller than the cutoff radius. Excessive calculations of fluid-wall interactions can be avoided, and the computing time can be reduced drastically. The time reduction is more significant for lower fluid density in nanoscale channels. In the present study, the virtual-wall model is adopted to describe the rough wall. The remainder of this paper is organized as follows. Section 2 introduces the MD simulation and the virtual-wall model. Section 3 describes the application of this model to the rough wall. Finally, Section 4 elaborates the conclusions of the study. MD Simulation and Virtual-Wall Model In the present MD simulation, interactions between fluid-fluid atoms and fluid-wall atoms are both described using the truncated and shifted Lennard-Jones (LJ) 12-6 potential given as follows: U(r_ij) = 4ε[(σ/r_ij)^12 − (σ/r_ij)^6] − 4ε[(σ/r_c)^12 − (σ/r_c)^6] for r_ij ≤ r_c, and U(r_ij) = 0 for r_ij > r_c, where r_ij is the intermolecular distance between atoms i and j, ε is the potential well depth, σ is the atomic diameter, and r_c is the cutoff radius. The Lorentz-Berthelot mixing rule [24] is employed to calculate the LJ parameters between fluid-wall atoms. In the virtual-wall model, the force on a fluid atom from wall atoms can be expressed as F_i = Σ_{j=1}^{N} F_ij, where N is the number of wall atoms which interact with the fluid atom and F_ij is the pair force derived from the potential above. The atomic wall is composed of FCC lattices, with the unit cell structure repeating. When wall atoms are fixed to their lattice points, the force on the fluid atom is periodic in both x and z directions. For example, the force on a fluid molecule located at x, y, and z is exactly the same as the force on the same molecule located at x + iL, y, and z + kL, where i and k are integers and L is the lattice constant. If the force distribution in the unit cuboid domain (L × r_c × L) is known, then the force can be determined anywhere else. This is the core concept of the virtual-wall model.
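As a concreteness check, the two ingredients above — the truncated-and-shifted LJ pair force and the precomputed, periodic wall-force table — can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' code: the bin counts and the nearest-bin lookup are assumptions, while the argon-platinum parameters are the ones quoted later in this paper.

```python
import numpy as np

# Argon-platinum LJ parameters quoted in the paper (sigma in nm, epsilon in units of k_B).
SIGMA, EPS, R_C = 0.3085, 64.8, 0.851
L_LAT = 0.393                  # FCC platinum lattice constant, nm
MX = MY = MZ = 20              # assumed bin counts, kept coarse for illustration

def lj_pair_force(rvec):
    """Force on a fluid atom from one wall atom, truncated LJ 12-6.
    The energy shift at r_c does not change the force inside the cutoff."""
    r = np.linalg.norm(rvec)
    if r >= R_C:
        return np.zeros(3)
    s6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (2.0 * s6 ** 2 - s6) / r ** 2 * rvec

def build_force_table(wall_atoms):
    """Tabulate the total wall force once, over one periodic L x r_c x L unit domain."""
    table = np.zeros((MX, MY, MZ, 3))
    for i in range(MX):
        for j in range(MY):
            for k in range(MZ):
                p = np.array([(i + 0.5) * L_LAT / MX,
                              (j + 0.5) * R_C / MY,
                              (k + 0.5) * L_LAT / MZ])
                table[i, j, k] = sum(lj_pair_force(p - w) for w in wall_atoms)
    return table

def virtual_wall_force(pos, table):
    """Fold x and z back into the unit domain and read the stored force."""
    if pos[1] >= R_C:          # outside the near-wall region: no wall interaction
        return np.zeros(3)
    i = min(int(pos[0] % L_LAT / L_LAT * MX), MX - 1)
    j = min(int(pos[1] / R_C * MY), MY - 1)
    k = min(int(pos[2] % L_LAT / L_LAT * MZ), MZ - 1)
    return table[i, j, k]

# Demo with a single (hypothetical) wall atom just below the surface.
table = build_force_table([np.array([0.5 * L_LAT, -0.1, 0.5 * L_LAT])])
print(virtual_wall_force(np.array([0.2, 0.3, 0.7]), table))
```

During a run the table lookup replaces an O(N_wall) pair sum per fluid atom per step, which is where savings like the 67.5 h to 0.4 h reduction reported in Section 2 come from.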
The virtual-wall model for the smooth wall is first examined. Without losing generality, gas argon flow confined between FCC platinum walls is considered. The walls lie along the xz plane and the simulation box is periodic in both x and z directions. For argon-argon interactions, σ_Ar and ε_Ar are 0.3405 nm and 119.8 k_B, respectively. For argon-platinum interactions, σ_Ar-Pt is 0.3085 nm and ε_Ar-Pt is 64.8 k_B, according to the Lorentz-Berthelot mixing rule [24]. In this study, r_c is set to 0.851 nm, which is approximately equal to 2.5σ_Ar. The masses of argon and platinum atoms are 6.64 × 10^−26 kg and 3.24 × 10^−25 kg, respectively. These parameters have been validated in previous studies [25,26]. The simulation box is set to 40.9 nm × 17.1 nm × 40.9 nm in the x, y, and z directions. A force of 0.008ε_Ar/σ_Ar is applied to each gas molecule as an external force [27] to drive the gas flow in the nanoscale channel. The atomic-wall model is also employed here for comparison. The thickness of the wall is 1.18 nm, which is larger than the cutoff radius. The lattice constant of the FCC platinum lattice is 0.393 nm. In the MD simulation, the neighbor-list method is used to calculate the force between atoms, while the velocity-Verlet algorithm is adopted to integrate the equations of motion [28]. The timestep in the simulation is set to 10.8 fs. The first 1 million steps are used to equilibrate the system, and another 5 million steps are used to accumulate properties in the y direction, with a bin size of 0.0614 nm. The Langevin thermostat method [29] is employed to control the gas temperature before equilibrium. Only thermal velocities are used to compute the temperature and pressure. The above parameters and techniques are adopted in all simulations. The open-source MD code called the large-scale atomic/molecular massively parallel simulator (LAMMPS) [30], developed by Sandia National Laboratories, is adopted to carry out the MD simulations. The density and velocity profiles across the nanoscale channel calculated using the atomic- and virtual-wall models are compared in Figure 1. Perfect agreement between these two models can be found, which indicates that the virtual-wall model works well in the MD simulation. In order to compare the computational time, these two simulations are performed on a single Intel i7-4790K CPU processor. The computational time for the virtual-wall model is 0.4 h, while for the atomic-wall model the time is 67.5 h. The virtual-wall model is much more efficient in the present case. Rough Wall Simulations 3.1. Virtual-Wall Model for the Rough Wall. From a microscopic point of view, all walls are rough. Surface roughness plays an important role in fluid flow and heat transfer [31]. So, in the present study, the virtual-wall model is adopted to describe the rough wall. In the present study, platinum atom cuboids on the smooth atomic wall are used to represent the roughness elements, as illustrated in Figure 2. The roughness element is periodic in both x and z directions. The geometry of the roughness element is shown in Figure 2(b): the height of the roughness element is h, and the widths in the x and z directions are both l. The spaces between two elements in the x and z directions are both L. In order to apply the virtual-wall model, a unit cuboid is first introduced, as shown in Figure 2. The rough wall can be considered as a close-packed array of this unit cuboid.
The size of the unit cuboid is L × H × L, where H = h + r_c. Fluid molecules interact with wall atoms only when they are located within these cuboids. When fluid molecules are outside these cuboids, the distances are larger than r_c and no interactions between fluid and wall atoms need to be computed. The cuboid is periodic in both x and z directions. Therefore, the force on a fluid molecule located at x, y, and z is exactly the same as the force on the same molecule located at x + iL, y, and z + kL, where i and k are integers. If the force distribution in the unit cuboid domain (L × H × L) is known, then the force on a molecule anywhere else can be deduced. The unit cuboid domain is then divided into MX × MY × MZ bins, and the forces in each bin are calculated and stored in memory [23]. During the simulation, the corresponding force on a fluid molecule located in the near-wall region is read directly from memory according to its position. The virtual-wall model for the rough wall is first validated. Argon molecules flow between nanoscale rough platinum walls. The simulation setup is the same as in Section 2. For the roughness element, h = l = 2a and L = 4a, where a is the lattice constant of the FCC platinum lattice, which is 0.393 nm. In the simulation, the gas density is set to 7.17 kg/m^3. The Knudsen number, which is defined as the ratio of the gas mean free path to the channel height, is 0.95, and the flow is in the transition regime. In order to make a comparison, the atomic-wall model is also carried out here. In the simulation, 3087 gas argon atoms and 218406 wall platinum atoms are used. The density and velocity profiles of the virtual-wall model are shown in Figure 3. These profiles are compared with the corresponding atomic-wall simulation. Perfect agreement between these two models can be found, which indicates that the virtual-wall model works well for gas flows in rough-wall channels. The gaseous flows in nanoscale channels with smooth and rough walls are then compared. The schematic diagram of the channel geometry is shown in Figure 4(a). Three channels are investigated. The outer channel and the inner channel are both smooth, with channel heights equal to H′ and H′ − 2h, respectively. Here, h is the height of the roughness element. The third channel is rough, with the channel height equal to H′ and the roughness element height equal to h. In the simulation, H′ is 15.35 nm and h is 0.786 nm, so the height of the inner channel is 13.67 nm. The other parameters are kept the same as in Section 2. The velocity profiles for these three channels are shown in Figure 4(b). It can be found that the velocity in the rough channel is much smaller than those in the smooth channels. It is well known that, in nanoscale channel flows, the wall plays an extremely important role in the fluid flow. Here, in the rough channel, the total surface area is much larger than in the smooth channels because of the roughness elements. As a result, the collision probability between fluid and wall atoms is larger and more fluid molecules are affected by the wall in the rough channel. So, the fluid velocity of the gas in the rough channel is smaller. The effect of roughness is of great importance to nanoscale channel flows.
Roughness Element with Different Heights. The influences of roughness element geometry on flow behaviors are then studied. Roughness elements with different heights are studied first. The width l and the spacing L of the roughness element are kept the same, while the element height h is varied. The velocity profiles of the rough wall with different element heights are shown in Figure 5. It can be found from the figure that the fluid velocity decreases with the increase of element height. This is because the total surface area is larger at greater element height. According to the explanation in Section 3.1, the wall effect is stronger at greater element height, so the fluid velocity is smaller. Fitting curves are obtained for each velocity profile at different roughness element heights, based on the gas velocity in the central part of the channel. From the fitting curves, we can deduce the slip velocity on the wall conveniently. It can be found from Figure 5 that the slip velocity also decreases with the increase of element height. Roughness Element with Different Widths. Roughness elements with different widths are then studied. The height h and spacing L of the roughness element are kept the same, while the width l is varied. Three roughness element widths (l = a, 2a, and 3a) are considered. The velocity profiles at different roughness element widths are shown in Figure 6. It can be found from the figure that the element width has a great influence on the velocity profile. The fluid velocity increases with the increase of element width. The total surface areas are the same in these three cases, and so are the wall effects, according to Section 3.1. However, at large roughness width, for example l = 3a, the gap between two roughness elements is small. As a result, it is hard for the gas molecules to enter the gap, because of the repulsive force between fluid and wall atoms, according to equation (1). That is to say, the effective surface area diminishes. So, the fluid velocity in the rough channel increases with the increase of element width. The fitting curves obtained for each velocity profile at different roughness element widths are also shown in Figure 6. It can be found that the slip velocity increases with the increase of element width. Roughness Element with Different Spacings. Roughness elements with different spacings are studied in the same way, and the fluid velocity is found to increase with the element spacing. This is because the total surface area is smaller at larger element spacing. According to the explanation in Section 3.1, the wall effect is smaller at larger element spacing, so the fluid velocity is larger. The corresponding fitting curves for each velocity profile at different roughness element spacings are also shown in Figure 7. The results show that the greater the spacing, the larger the velocity slip.
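The slip velocities quoted above are read off fitted curves through the central part of each velocity profile. A minimal sketch of such an extraction follows, assuming a quadratic (Poiseuille-like) fit over the channel core and synthetic profile data; the authors do not specify their fitting procedure beyond "fitting curves based on the central part of the channel", so this is only one plausible realization.

```python
import numpy as np

def slip_velocity(y, u, channel_height, core_fraction=0.6):
    """Fit a parabola to the central part of u(y) and extrapolate it to the
    walls; the extrapolated wall value is taken as the slip velocity."""
    core = np.abs(y - channel_height / 2) < core_fraction * channel_height / 2
    a, b, c = np.polyfit(y[core], u[core], 2)        # u ~ a*y^2 + b*y + c
    u_bottom = c                                      # value at y = 0
    u_top = a * channel_height**2 + b * channel_height + c
    return 0.5 * (u_bottom + u_top)

# Synthetic example: parabolic profile with an imposed slip of 20 m/s.
H = 15.35e-9                                          # channel height from the text, m
y = np.linspace(0.0, H, 100)
u = 20.0 + 4.0e18 * y * (H - y)                       # illustrative numbers only
print(slip_velocity(y, u, H))                         # ~20.0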
Conclusions The wall plays an extremely important role in nanoscale channel flows. In the present study, MD simulation is carried out to investigate nanoscale gas flows in rough channels. The virtual-wall model for the rough wall is proposed, and its validity is confirmed. The computational efficiency can be improved greatly by using this model, especially for the low-density gas flow in nanoscale channels. The effects of roughness element geometry on flow behaviors are then studied in detail. From the simulations, we found that the total surface area is of great importance in nanoscale channel flows. The fluid velocity decreases as the effective total surface area increases. The fluid velocity and velocity slip decrease with the increase of roughness element height, while they increase with the increase of element width and spacing.
Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
Figure 1: Comparisons between the atomic- and virtual-wall models for the smooth wall: (a) density profile; (b) velocity profile.
Figure 2: Schematics of the rough wall and the unit cuboid domain: (a) axonometric view; (b) side view.
Figure 3: Comparisons between the atomic- and virtual-wall models for the rough wall: (a) density profile; (b) velocity profile.
Figure 4: Comparison of gas flows in nanoscale smooth and rough channels.
2021-11-04T16:06:07.448Z
2018-06-24T00:00:00.000
{ "year": 2018, "sha1": "ad07ee2ffcccad1075540a1deaa7965276a176a0", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jnt/2018/4631253.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f2c51b5f77799b1962974b9a77b5d6cce509fe81", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [] }
258635862
pes2o/s2orc
v3-fos-license
Mono-(2-ethylhexyl) Phthalate (MEHP)-Induced Telomere Structure and Function Disorder Mediates Cell Cycle Dysregulation and Apoptosis via c-Myc and Its Upstream Transcription Factors in a Mouse Spermatogonia-Derived (GC-1) Cell Line As a typical environmental endocrine disrupting chemical (EDC), di-(2-ethylhexyl) phthalate (DEHP) is thought to be related to reproductive disorders, especially in males. Growing evidence suggests that various EDCs may result in an impaired telomere structure and function, which is associated with male infertility. However, the adverse effect of DEHP on telomeres in male reproductive cells has rarely been studied, and the related mechanisms remain unclear. In this study, we tested the effects of mono-(2-ethylhexyl) phthalate (MEHP), the primary metabolite of DEHP, on telomere dysfunction in mouse spermatogonia-derived cells (GC-1) and the potential role of TERT and c-Myc in MEHP-induced spermatogenic cell damage. Results showed that MEHP induced cell viability inhibition, G0/G1 phase cell cycle arrest, and apoptosis in GC-1 cells in a dose-dependent manner. Shortened telomeres, reduced telomerase activity, and decreased expression of TERT, c-Myc, and upstream transcription factors of c-Myc were also observed in the MEHP-treated cells. In conclusion, TERT-mediated telomere dysfunction may contribute to MEHP-induced G0/G1 phase cell cycle arrest and apoptosis in GC-1 cells through the impairment of c-Myc and its upstream transcription factors. Introduction Di-(2-ethylhexyl) phthalate (DEHP), as one of the most widely studied phthalate derivatives, is frequently used as a plasticizer in polyethylene (PE) plastics and polyvinyl chloride (PVC) plastics [1]. The non-covalent binding of phthalic acid esters (PAEs) to plastic molecules makes them prone to escape from plastic materials under certain conditions, so they are constantly released into the surrounding environment, polluting the air, water, and soil [2]. Therefore, humans are inevitably exposed to DEHP through breathing, dietary intake, and dermal absorption [3]. Accumulating evidence indicates that PAEs have a definite reproductive toxic effect. Specifically, epidemiological studies have shown that exposure to DEHP is associated with a decrease in semen quality and a reduction in serum testosterone levels [4,5]. Previous studies in animal models have clearly demonstrated that direct exposure to DEHP has a toxic effect on the male reproductive system, resulting in testicular atrophy, decreased sperm count, and reduced sperm viability [6,7]. However, less is known about the genotoxic effect of DEHP on the male reproductive system. Generally, DNA damage caused by environmental or endogenous genotoxic agents represents a serious survival challenge for cells [8]. DEHP and its metabolites have been well-documented to directly provoke oxidative stress, disrupt the DNA integrity of sperm, induce DNA damage, and result in reproductive toxicity [9,10]. Telomeric DNA is composed of a repetitive sequence 5′-TTAGGG-3′, enriched in the guanine base G, which makes telomeres more vulnerable to breakage by reactive oxygen species. Keeping the telomere length at a certain level is an essential prerequisite for cells to be able to continuously divide.
During DNA replication in somatic cells, telomere loss occurs in each round of cell division because DNA polymerase cannot fully replicate chromosome ends; this loss prevents unlimited cell proliferation by inducing differentiation, cycle arrest, replicative senescence, or apoptosis [11,12]. To avoid telomere loss, telomerase is indispensable. Telomerase is a ribonucleoprotein that includes the telomerase reverse transcriptase (TERT) and the telomerase RNA (TERC). Suppressing TERT markedly diminishes telomerase activity, shortens the telomere length, and increases apoptosis. Conversely, activating TERT expression obviously improves telomerase activity and promotes telomere elongation and cell proliferation. Together, these findings indicate that TERT is a rate-limiting factor for telomerase activity and performs a critical role in maintaining telomerase function [13,14]. There are multiple transcription factor binding sites in the promoter region of the TERT gene, including c-Myc, Mad1, SP1, and estrogen response elements. c-Myc is one of the essential transcription factors and has been demonstrated to be involved in cell proliferation and growth as well as in the processes of differentiation and apoptosis. Chromatin immunoprecipitation shows that heterodimers formed by c-Myc and Max can directly interact with the hTERT promoter [15]. The estrogen/estrogen receptor, another transcription factor acting on the TERT gene promoter, can also indirectly activate TERT by upregulating the expression of c-Myc [16]. The ability of c-Myc to activate hTERT gene expression and telomerase activity leads to c-Myc-dependent cell immortalization. These studies indicate that TERT is a direct target of the c-Myc protein. The stability of the telomere structure and function contributes to protecting germ cells from growth inhibition, senescence, apoptosis, and even death. Telomerase and TERT are highly expressed in germ cells, especially in spermatogonia [17]. It is well-documented that telomeres are very sensitive to the external environment and that various environmental and occupational exposures related to air pollution [18], persistent organic pollutants [19], endocrine disruptors [20], and heavy metal contaminants [21] can disrupt telomere dynamic homeostasis and induce telomere length shortening. Thus, in conjunction with the literature, an important conclusion can be drawn: telomeres may be a significant target for genetic damage induced by various environmental pollutants. However, research on telomere damage caused by environmental pollutants is largely limited to the respiratory system and peripheral blood lymphocytes rather than the male reproductive system. Telomere homeostasis is essential for the formation of spermatozoa [22]. Telomerase is sensitive to environmental influences. Environmental pollutants such as phthalates [1], polycyclic aromatic hydrocarbons (PAHs) [23], brominated flame retardants [24], PM2.5 [25], and fluoride [26] have been clearly proven to induce telomere damage in the male reproductive system, which may be associated with abnormalities in telomere structure and function such as shortened telomere length and reduced telomerase activity in spermatogenic cells. However, only a few papers have reported that PAEs can cause telomere disruption in spermatogenic cells. These studies remain at the level of observed effects and do not delve into the essential causes of the telomere damage induced by these pollutants, so the underlying molecular mechanisms are unclear and still need to be further explored.
Therefore, based on the fact that telomerase activity and TERT are highly expressed in spermatogonia, we chose GC-1 mouse spermatogonia as the subject of our study. This experimental design focuses on the mechanisms associated with telomere damage in PAE-induced male reproductive damage. By establishing in vitro cellular assays to explore the impacts of MEHP on germ cell toxicity and telomere injury, and by delving into the mechanisms underlying this damaging influence of MEHP, this study will provide more evidence for the prevention of male reproductive toxicity induced by PAE compounds. Cell Culture and Treatment The mouse spermatogonia-derived GC-1 spd (ts) cells (GC-1 cells) were purchased from the American Type Culture Collection (ATCC, Rockville, MD, USA). The cells were cultured in DMEM medium (HyClone, Logan, UT, USA) with 10% FBS and 1% penicillin-streptomycin at 37 °C in a humidified atmosphere containing 5% CO2. Cells were treated with MEHP (50, 100, 200, and 400 µM) dissolved in DMSO. Control cells received DMSO (0.05% final concentration) alone. GC-1 cells presented a semi-adherent and semi-suspended state under natural growth conditions. Each cell culture experiment was repeated at least three times. Cell Viability The cytotoxicity of MEHP to GC-1 cells was assessed using the Cell Counting Kit-8 (CCK-8; DOJINDO, Kumamoto, Japan) assay. According to the manufacturer's recommendations, the cells were seeded into a 96-well plate at a density of 10,000 cells per well for 24 h and then treated with different concentrations of MEHP for 48 h. The cells were then incubated with CCK-8 solution for another 2 h. The cell viability was expressed as the optical density detected at 450 nm using a microplate reader. Cell Cycle Assay Cell cycle was evaluated using the cell cycle and apoptosis kit (Beyotime, China) as described by the manufacturer's instructions. After MEHP exposure, cells were collected, washed, and then fixed overnight in ice-cold 70% ethanol at 4 °C. Fixed cells were washed with cold PBS, incubated with 0.5 mL propidium iodide staining solution at 37 °C for 30 min, and the cell distribution was then detected and analyzed by flow cytometry (AccuriC6, BD Biosciences, San Jose, CA, USA) within 2 h. Apoptosis Assay Apoptosis was evaluated using the Annexin V-FITC Apoptosis Detection Kit from BD Pharmingen (San Jose, CA, USA) following the manufacturer's instructions. After MEHP treatment, cells were harvested and incubated with Annexin V-FITC and propidium iodide (PI) for 15 min at room temperature in the dark. Subsequently, the Annexin V-positive cells were analyzed by flow cytometry within 30 min. Telomere Length Measurement Genomic DNA was isolated from cells using a DNA/RNA Isolation Kit (Omega Bio-Tek Inc., Norcross, GA, USA) according to the manufacturer's instructions. The relative telomere length of cells was then measured by determining the ratio of the telomere repeat copy number to the single copy gene copy number (T/S ratio) using real-time quantitative PCR (RT-qPCR). RT-qPCR was performed with the SYBR Master Mix in a final reaction volume of 20 µL containing 20 ng of genomic DNA. The primer sequences are shown in the Supplementary Materials (Table S1). Real-Time Quantitative PCR (RT-qPCR) RNA extraction was conducted according to the manufacturer's protocol.
The Total DNA/RNA Isolation Kit (Omega Bio-Tek Inc., Norcross, GA, USA) was used to extract RNA and the iScript cDNA Synthesis Kit (BIO-RAD, USA) was used to reverse transcribe the RNA to cDNA, followed by real-time quantitative PCR using the SYBR Master Mix. The final reaction volume was 20 µL, and the relative expression of genes was analyzed by the 2^(−ΔΔCT) method. Beta-actin (β-actin) was used as the housekeeping gene. The primer sequences are shown in the Supplementary Materials (Table S2). Western Blot Analysis After treatment, the cells were harvested and lysed in IP lysis buffer (Beyotime, China) on ice for 30 min. The protein concentration was determined by the BCA assay (Beyotime, China), and 30-40 µg of protein was separated by 12% SDS-PAGE and transferred to a PVDF membrane (Millipore, Bedford, MA, USA). The membranes were then soaked in blocking buffer (5% skim milk) at room temperature for 2 h and incubated with primary antibodies overnight at 4 °C. After three washes, the membranes were incubated with HRP-conjugated secondary antibodies for 1.5 h at room temperature. The membranes were then washed three times again, and the signal was detected using an enhanced chemiluminescence (ECL) detection kit. Telomerase Activity Measurement The telomerase activity of the GC-1 cells was determined using the mouse telomerase (TE) ELISA kit (Meibiao, Jiangsu, China). GC-1 cells were collected and resuspended in PBS. The supernatant was obtained by centrifugation after repeated freeze-thaw cycles. Then, the supernatant as well as a series of dilutions of the standard substances were added to the wells coated with mouse TE-specific antibodies. The HRP conjugate reagent was added to each well and incubated at 37 °C for 60 min, followed by five washes. Chromogenic agents A and B (50 µL) were added to each well and incubated at 37 °C for 15 min away from light. The reaction was terminated by a stop solution, and the OD value was measured at 450 nm. Statistical Analysis All experiments were repeated at least three times, and the resulting data were expressed as the mean ± standard deviation (SD). SPSS 25.0 software was used for the data analysis. One-way analysis of variance (ANOVA) was used to compare the differences between the experimental and control groups. A value of p < 0.05 indicated that the variance was statistically significant. MEHP Reduces Cell Viability and Induces G0/G1 Phase Cell Cycle Arrest and Apoptosis in GC-1 Cells In the present study, the survival of GC-1 cells after treatment with varying concentrations of MEHP (50, 100, 200, and 400 µM) for 48 h was detected by the CCK-8 assay. As shown in Figure 1, MEHP reduced the cell viability in a dose-dependent manner when the concentration was higher than 100 µM compared with the DMSO-treated control. The cell viability of the 100 µM MEHP group decreased to about 94%, while those of the 200 and 400 µM MEHP groups were reduced to about 86% and 67%, respectively (p < 0.05). To clarify whether the MEHP-induced decrease in the cell viability of GC-1 cells was related to cell cycle arrest or apoptosis, we examined the cell cycle and apoptosis using PI fluorescent staining or Annexin V-FITC combined with flow cytometry, respectively. Our study revealed that MEHP treatment for 48 h induced G0/G1 phase cell cycle arrest in GC-1 cells and that the percentage of cell cycle arrest was positively correlated with the MEHP concentration, with significant effects in the 200 and 400 µM groups (Figure 2A).
Moreover, MEHP treatment also caused cellular apoptosis in a dose-dependent manner following 48 h of exposure (Figure 2B). The results of the Western blot analysis further supported the results of the flow cytometry. The levels of two important regulatory proteins (cyclin-dependent kinase 4 [CDK4] and CYCLIN D1) that are necessary for the transformation from the G0/G1 phase to the S phase obviously decreased. Meanwhile, after MEHP exposure, the expression level of Bax, a key protein that promotes apoptosis, markedly increased, while the expression level of the anti-apoptotic protein Bcl-2 was significantly reduced (Figure 2C). Similarly, as shown in Figure S1, the mRNA expression levels of Cdk4, Cdk6, Ccnd1, and Rb1 declined with increasing MEHP. Together, our results demonstrate that MEHP resulted in a decrease in cell viability and an increase in G0/G1 phase cell cycle arrest and apoptosis in the GC-1 cells.
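Both the relative telomere length (T/S ratio) and the gene-expression readouts used throughout these results come down to the same qPCR arithmetic described in the Methods. A minimal sketch of that calculation follows; the Cq values are illustrative placeholders, not data from this study.

```python
def relative_expression(cq_target_treated, cq_ref_treated,
                        cq_target_control, cq_ref_control):
    """Relative mRNA level by the 2^(-ddCt) method, beta-actin as the reference gene."""
    ddct = (cq_target_treated - cq_ref_treated) - (cq_target_control - cq_ref_control)
    return 2.0 ** -ddct

def ts_ratio(cq_tel, cq_scg, cq_tel_ref, cq_scg_ref):
    """Relative telomere length: telomere (T) over single-copy-gene (S) signal,
    each normalised to a reference DNA sample."""
    t = 2.0 ** -(cq_tel - cq_tel_ref)
    s = 2.0 ** -(cq_scg - cq_scg_ref)
    return t / s

# Illustrative Cq values only: the target gene comes up ~1 cycle later after MEHP
# while beta-actin is unchanged, i.e. roughly a 2-fold drop in the target mRNA.
print(relative_expression(26.0, 18.0, 25.0, 18.0))  # -> 0.5
```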
Figure 2 (caption, continued): The expression levels of the target proteins CDK4 and CYCLIN D1 that regulate the cell cycle from the G1 phase to the S phase and the apoptosis-related proteins Bax and Bcl-2 were examined using Western blot and quantified relative to β-actin by densitometric analysis of the bands. Results are expressed as the mean ± SD, n = 3. * p < 0.05, ** p < 0.01, *** p < 0.001 versus the control group treated with DMSO. MEHP Induces Telomere Structure and Function Disorder in GC-1 Cells Because cell growth arrest and apoptosis have been proven to be mediated by the induction of telomere dysfunction (i.e., short telomeres), we analyzed the effect of MEHP on the telomere length in male germ cells after 48 h of exposure at different concentrations. The relative telomere length was assessed using RT-qPCR by determining the T/S ratio. Specifically, telomeres were significantly shortened in cells treated with 200 or 400 µM of MEHP. However, although the telomere length was shortened in the 100 µM dose group, the variation was not statistically significant (Figure 3A). We further detected the changes in the telomere-related multiprotein complex (shelterin). The results showed that the mRNA expression levels of the shelterin components Trf1, Trf2, Pot1, Rap1, and Tin2 decreased, especially in the 400 µM dose group, but there was no appreciable change in Tpp1 mRNA expression in any dose group (Figure 3B). To explore the effect of MEHP on telomerase activity, the GC-1 cells were subjected to telomerase (TE) ELISA analysis after treatment with MEHP. The results demonstrate that telomerase activity was obviously reduced in the 200 and 400 µM exposure groups (Figure 3C). The telomerase reverse transcriptase (TERT) is a pivotal determinant of telomerase activity and telomere length maintenance. RT-qPCR analysis showed that the mRNA expression of Tert was downregulated in a dose-dependent manner after 48 h of MEHP treatment (Figure 3D). Furthermore, MEHP also lowered the protein level of TERT in all dose groups (Figure 3E). Collectively, these data suggest that MEHP can cause telomere dysfunction in GC-1 cells and lead to the inhibition of telomerase activity and a decrease in TERT expression. MEHP Inhibits c-Myc Expression in GC-1 Cells c-Myc is a proto-oncogene that promotes cell proliferation and growth as well as a positive regulator of telomerase activity [27,28]. Specifically, the c-Myc protein can directly activate TERT through a protein dimer formed by binding to the Max protein [15]. Estrogen, another positive regulator of telomerase activity, can also indirectly promote TERT expression by activating the expression of c-Myc [16]. These results suggest that TERT is a direct target of c-Myc. As shown in Figure 4A, the mRNA expression levels of c-Myc and Max were reduced in a dose-dependent manner after the GC-1 cells were exposed to various concentrations of MEHP for 48 h. Simultaneously, the protein expression levels of c-Myc and Max also decreased with the increasing concentrations of MEHP (Figure 4B).
Effect of MEHP Exposure on c-Myc Upstream Transcription Factors in GC-1 Cells The transcription factors of c-Myc were queried in the Cistrome database (http://cistrome.org/db, accessed on 13 October 2022), the TRRUST database (transcriptional regulatory relationships unraveled by sentence-based text mining, http://www.grnpedia.org/trrust, accessed on 13 October 2022), and the Genecards database (https://www.genecards.org/, accessed on 13 October 2022). The Cistrome database is a resource containing human and mouse cis-regulatory information derived from DNase-Seq, ChIP-Seq, and ATAC-Seq chromatin profiling assays, used to obtain information for gene regulatory analysis. The TRRUST database currently contains 8015 interactions between 748 TF genes and 1975 non-TF genes. The p values were calculated with the hypergeometric test. A p value < 0.05 was deemed a statistically significant difference. Afterward, we used the R packages ggplot2 (v3.3.6) and VennDiagram (v1.7.3) to visualize the unique and common parts between each group. As shown in the Venn diagram in Figure 5A, we found that five transcription factors, namely, CTCF, STAT3, ESR1, C-JUN, and FOXA1, could be detected in all three of the above-mentioned databases. First, we assessed the mRNA levels of these five genes by RT-qPCR after the GC-1 cells were exposed to different concentrations of MEHP for 48 h. The outcomes revealed that MEHP treatment reduced the mRNA expression of Ctcf, Stat3, Esr1, C-jun, and Foxa1 in a dose-dependent manner (Figure 5B). In addition, the Western blot results showed that the protein expression levels of CTCF, ESR1, and C-JUN decreased with an increase in the concentrations of MEHP, especially in the 200 µM and 400 µM dose groups (Figure 5C). Collectively, these data indicate that the MEHP-induced reduction of c-Myc may be related to the damage to upstream transcriptional regulators of c-Myc.
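The database screen above reduces to a three-way set intersection. The sketch below shows the shape of that operation; only the five reported hits are real, and the remaining entries are placeholders standing in for the 37, 74, and 209 genes returned by each database.

```python
# Placeholder gene sets: only the five reported hits are genuine.
cistrome_db = {"CTCF", "STAT3", "ESR1", "C-JUN", "FOXA1", "MYB"}
trrust      = {"CTCF", "STAT3", "ESR1", "C-JUN", "FOXA1", "E2F1"}
genecards   = {"CTCF", "STAT3", "ESR1", "C-JUN", "FOXA1", "SP1"}

common = cistrome_db & trrust & genecards          # three-way intersection
print(sorted(common))  # ['C-JUN', 'CTCF', 'ESR1', 'FOXA1', 'STAT3']
```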
Figure 5 (caption, continued): (B) GC-1 cells were exposed to different concentrations of MEHP for 48 h, and the mRNA expression levels of Ctcf, Stat3, Esr1, C-jun, and Foxa1 were detected by RT-qPCR. Beta-actin (β-actin) was used as the housekeeping gene. (C) After 48 h of MEHP treatment, the relative protein levels of CTCF, STAT3, ESR1, C-JUN, and FOXA1 in the GC-1 cells were detected by Western blot, and the expression levels of these target proteins relative to β-actin were quantified by densitometric analysis of the bands. The results are expressed as the mean ± SD, n ≥ 3. * p < 0.05, ** p < 0.01, *** p < 0.001 versus the control group treated with DMSO. Discussion Male reproduction is a complex process influenced by chemical and socio-psycho-behavioral factors that act through different mechanisms. DEHP is a common environmental endocrine disruptor that widely exists in our daily lives. Humans can take in DEHP through the gastrointestinal tract, lungs, and skin, with the gastrointestinal tract as the primary route of absorption. After oral ingestion, a small proportion of DEHP is absorbed directly in its original form and the majority is hydrolyzed to the mono-ester metabolite MEHP in the gastrointestinal tract by pancreatic enzymes and intestinal lipases. A portion of MEHP is absorbed directly into the blood through the intestine and then distributed via the blood to the liver, kidneys, fat, testes, and other tissues. The other part of MEHP is metabolized in the liver by cytochrome P450 or the UDP-glucuronosyltransferase (UGT) enzyme to the secondary metabolites 2CX-MMHP, 5CX-MEPP, 5OH-MEHP, and 5OXO-MEHP [29,30]. Approximately 67% of DEHP in the body is excreted in the urine as MEHP, and thus MEHP can be used as a biomarker of DEHP exposure levels.
Both DEHP and MEHP have toxic effects, and the toxic potency of MEHP is more than 10 times that of DEHP. In testicular tissue, MEHP, the primary metabolite of DEHP, cannot be further metabolized; it accumulates significantly in testicular tissue and exerts reproductive toxicity. In recent years, a large number of studies have shown that DEHP has toxic effects on the male reproductive system. Specifically, DEHP induces significant changes in testicular histomorphology, decreases the testicular organ coefficient and sperm count, increases testicular cell apoptosis, and enhances the oxidative stress level of testicular tissue [9,31,32]. A study by Zhu et al. showed that DEHP can cause structural disruption of testicular tissue, the shedding of germ cells, and a reduction in sperm cell numbers in male mice, which is consistent with the conclusions of previous studies [1]. Although the damaging effects of DEHP on the male reproductive system have been confirmed, the molecular events of spermatogenic cell damage and the underlying molecular mechanisms remain unclear and need to be elucidated. Therefore, in this study, we used MEHP, the active metabolite of DEHP, to explore its direct effects on germ cells. Telomeric DNA is a non-coding sequence rich in guanine and is vulnerable to damage by reactive oxygen species, making it more susceptible to internal and external factors [33]. Available epidemiological evidence suggests that both DEHP and MEHP, two environmental endocrine disruptors, are associated with changes in cell telomere length. An epidemiological study in China showed that prenatal exposure to certain phthalates was associated with shortened telomeres in the cord blood of newborns [34]. However, an epidemiological study in the United States revealed a positive correlation between urinary MEHP concentration and peripheral blood telomere length [35]. The discrepant conclusions may be related to differences in exposure concentration, duration, tissue, species, and other factors. These findings suggest that telomeres may be one of the important targets of PAE compounds. Telomere homeostasis is essential for spermatogenesis. When the unique telomere maintenance mechanism of germ cells is disrupted, cells undergo devastating effects such as growth inhibition, senescence, and death. Therefore, sperm telomere length can be considered a new biomarker of male infertility [36]. A growing number of studies have shown a tight link between male infertility and telomere damage. Zhu et al. [1] found, in experimental animal and cellular models, that DEHP- and MEHP-induced germ cell senescence and the morphological and structural abnormalities of testicular tissue may be associated with shorter telomeres and lower TERT expression. Ling et al. [37,38], based on a cohort study on the reproduction of university students, discovered that higher levels of PAHs in urine were associated with shorter sperm telomere lengths and lower sperm mitochondrial DNA copy numbers. Ling et al. [23] further demonstrated at the cellular and animal levels that benzo[a]pyrene and its metabolite BPDE induced telomeric DNA breakage, shortened telomeres, and reduced telomerase activity and TERT expression in GC-2 cells, leading to the senescence and apoptosis of spermatogonia. There is also a link between the male reproductive damage caused by pollutants such as brominated flame retardants [24] and fluoride [26] and the disruption of telomere structure and function in germ cells.
These findings suggest that telomere disruption is an important target for male reproductive toxicity induced by various environmental pollutants. Damaged telomeres can be identified at the onset of meiosis, and cells with impaired telomeres will be removed from the germ cell precursor pool. During spermatogenesis, telomerase keeps high activity in germ cells to maintain the stability of sperm chromosomes and ensure that complete chromosomes can be passed on to the offspring [39]. In the present study, we found that treating GC-1 cells with different concentrations of MEHP for 48 h significantly induced cell proliferation inhibition, G0/G1 phase cell cycle arrest, and apoptosis. As telomere disorders have been proven to be associated with male infertility, in this experimental design we also investigated the effect of MEHP on telomeres in GC-1 cells. The results showed that after the GC-1 cells were exposed to different concentrations of MEHP for 48 h, the relative telomere length of the cells was significantly shortened, telomerase activity was decreased, and the mRNA expression levels of the telomere-binding protein complex components Trf1, Trf2, Pot1, Rap1, and Tin2 were reduced. Telomerase reverse transcriptase (TERT) is a critical determinant of telomerase activity and telomere length maintenance. MEHP also decreased the expression of TERT mRNA and protein in GC-1 cells in a dose-dependent manner. Our results are similar to those of Zhu et al. These results suggest that the cytotoxic effects of MEHP on GC-1 cells may be associated with a decrease in TERT expression. In a sense, TERT expression may prevent MEHP-induced cell cycle arrest and apoptosis by maintaining telomere function. c-Myc is a transcription factor with a binding site located in the promoter region of the TERT gene; it binds to the Max protein to form a c-Myc/Max protein dimer. This dimer can bind to the specific E-box site on the DNA chain, promote histone acetylation and transcription, and up-regulate the expression of TERT, thus activating telomerase activity and positively regulating telomere length [15]. c-Myc can also indirectly activate the expression of TERT through the E2/ER pathway [16]. These hints suggest that c-Myc is a key factor in regulating TERT transcriptional expression. Our study found that MEHP reduced the mRNA and protein expression of c-Myc and Max in GC-1 cells in a dose-dependent manner, which may be associated with the telomere-impairing effects seen in male reproductive damage. This result indicates that the G0/G1 phase cell cycle arrest and apoptosis induced in GC-1 cells by MEHP exposure may be related to c-Myc-mediated, TERT-associated telomere damage. To clarify the reason for the reduction in MEHP-induced c-Myc expression in GC-1 cells, we used the Cistrome DB, TRRUST, and Genecards databases to screen for c-Myc transcriptional regulators. We found a total of 37 genes in the Cistrome DB database, 74 genes in the TRRUST database, and 209 genes in the Genecards database. Visualizing the unique and common sections between the groups, we found that five genes, CTCF, STAT3, ESR1, C-JUN, and FOXA1, were detected in all three databases. All five transcription factors are directly linked to male reproductive health. STAT3 is a member of the signal transduction and transcription activator protein family. The expression level of STAT3 in spermatogenic tubules is closely related to spermatogenesis [40].
JUN, as a classical transcription factor, is stably expressed in the nucleus of spermatogenic cells, is widely present in A1 spermatogonial cells, and is involved in the biological process of spermatogonial initiation and differentiation. Related research has revealed that the conditional knockout of JUN caused significant reproductive abnormalities in mice, such as prolonged female estrous cycles and reduced male sperm counts [41]. CTCF has a critical role in the development of male germ cells. Studies have discovered that the suppression of CTCF protein expression in spermatocytes can cause spermatogenesis disorders and infertility [42]. In conjunction with the present study, we detected that MEHP treatment inhibited the mRNA expression of Ctcf, Stat3, Esr1, C-jun, and Foxa1 in GC-1 cells, while the protein expression of CTCF, ESR1, and C-JUN was also significantly blocked. These findings indicate that the detrimental effects of MEHP on c-Myc may be associated with the impaired expression of CTCF, ESR1, and C-JUN, the upstream transcription factors of c-Myc. Given what is known about phthalates and their anti-androgenic activity, we comment on the observation of ESR1. In this study, we observed a dose-dependent decrease in the mRNA and protein expression levels of ESR1 in the GC-1 cells after 48 h of MEHP treatment, particularly in the 200 and 400 µM MEHP exposure groups. The ESR1 gene is an indispensable part of the development of the reproductive system, and its normal expression plays an important regulatory role in male reproductive function [43]. Related studies have shown that the ESR1 gene is involved in the processes of spermatogenesis and sperm maturation. After the ESR1 gene is knocked out, sperm concentration and motility in the epididymis decrease significantly. Similarly, the absence of ESR1 can lead to male infertility, which may be associated with abnormalities in the spermatogenic epithelium and spermatogenesis [44]. In part, these discoveries may account for the possibility that impaired ESR1 expression contributes directly to damage of the male reproductive system. In summary, based on an in vitro MEHP exposure model of mouse spermatogonia-derived GC-1 cells, we discovered that MEHP exposure may induce cell cycle dysregulation and apoptosis by mediating abnormal telomere structure and function in spermatogonia. The telomerase reverse transcriptase (TERT) may be involved in this damaging process to some extent. Further studies suggest that the reduced expression of TERT may be associated with a decrease in the expression of c-Myc, a transcriptional regulator of TERT activity, and that there is a possible link between the blocked expression of c-Myc and the diminished expression levels of CTCF, ESR1, and C-JUN, which are transcriptional regulators of c-Myc signaling. Our findings indicate that MEHP may inhibit c-Myc signaling through specific transcriptional regulators, activate TERT-mediated telomere damage, and lead to G0/G1 phase cell cycle arrest and apoptosis in GC-1 cells. However, this conclusion, based only on the present study, has limitations. There may be other causes, independent of telomere damage, that explain the cytotoxic damage induced by MEHP treatment. This discovery has, to some extent, enriched the understanding of the molecular mechanism of MEHP-induced male reproductive damage, providing a new perspective and a distinctive way of thinking for studying reproductive damage in males.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/toxics11050448/s1, Table S1: Primer sequences for telomere genes and reference genes; Table S2: Primer sequences for various genes; Figure S1: Changes in the mRNA expression levels of Cdk4, Cdk6, Ccnd1, and Rb1 in GC-1 cells after 48 h of MEHP treatment were analyzed by RT-qPCR.
2023-05-12T15:20:11.514Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "c25f1a4cb674e1408658b464ff67dc14c4597a47", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/toxics11050448", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0d2f89e7422195c026076e4132cc8f98c9d9a2f4", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
210517488
pes2o/s2orc
v3-fos-license
Performance evaluation of compound parabolic concentrator with evacuated tube heat pipe The keen interest in harvesting solar energy is increasing because of its clean, green and cheap nature. PV modules collect the energy from the sun and produce electricity. Solar thermal collectors collect solar energy in the form of heat. The compound parabolic concentrator is a type of solar thermal collector with a combined parabolic structure that concentrates the solar radiation on a single line focus, where the receiver tube is placed. The performance can be further increased by placing an evacuated tube, which traps the heat. The heat is efficiently conducted by using a heat pipe. A heat pipe is a heat-transfer device that transfers heat between two solid interfaces. It combines the principles of both thermal conductivity and phase transition. The experimental study is carried out to find the performance characteristics. The results for various parameters such as mass flow rate and tilting angle were plotted to find the best performance. The results show that the maximum efficiency of the CPC is achieved between 12.00 hrs and 15.00 hrs. The intensity of radiation during maximum efficiency is 1000 W/m2 to 1050 W/m2. Introduction The sun's energy is available in most places and is used for the advancement of technology. Solar energy is harvested using PV modules and solar thermal collectors for domestic and industrial applications at temperatures of 60°C to 300°C [1]. Solar thermal collectors as well as solar concentrators are devices used for concentrating solar energy for various purposes, such as solar water treatment technologies based on CPC [2], air heaters using solar concentrators [3], solar cookers based on CPC [4], water heaters based on solar collectors [5], and steam generators based on CPC [6]. Solar thermal collectors use heat-absorbing panels to absorb sunlight directly. Solar collectors commonly refer to solar hot water panels, but may refer to installations such as parabolic troughs [7] and solar towers. Here the solar energy is directly used for heating purposes [8], and at large scale it can be used to produce steam that can run a generator to produce electricity [6]. Solar collectors are of imaging and non-imaging types. The compound parabolic concentrator is a non-imaging type concentrator which has the ability to concentrate rays onto a smaller absorber surface [10]. The sunray falling on the absorber is not focused; therefore, concentration is achieved with the CPC design [11]. The compound parabolic collector is a combined parabolic structure, which concentrates the solar radiation on a single line focus [12]. The rays falling on the CPC are reflected to a single line focus based on the "edge ray principle" [10]. They are called "non-imaging" because they do not produce any optical image of the source. Compound parabolic concentrators (CPCs) are designed as stationary solar collectors to achieve relatively high temperature operation with high cost effectiveness [13]. The light rays coming from the edges of the source are redirected to the edges of the receiver. This confirms that all light rays coming from the inner points in the source will make contact with the receiver, as shown in figure 1. Figure 1. Edge ray principle.
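For orientation, the ideal 2-D CPC geometry follows directly from the edge-ray principle: the concentration ratio is C = 1/sin θa for acceptance half-angle θa. The sketch below computes the textbook design quantities for a flat-receiver CPC (Welford-Winston relations). Note this is a simplified illustration, not the tubular-absorber profile actually fabricated in this work, and the input values are examples only.

```python
import math

def cpc_design(receiver_half_width_mm, accept_half_angle_deg):
    """Textbook design quantities for a full (untruncated) flat-receiver CPC."""
    th = math.radians(accept_half_angle_deg)
    a_exit = receiver_half_width_mm            # receiver (exit aperture) half-width
    a_in = a_exit / math.sin(th)               # entrance aperture half-width
    f = a_exit * (1.0 + math.sin(th))          # focal length of each parabolic wall
    height = f * math.cos(th) / math.sin(th) ** 2
    conc = a_in / a_exit                       # ideal 2-D concentration = 1/sin(th)
    return {"aperture_half_width_mm": a_in,
            "height_mm": height,
            "concentration": conc}

# Example: 25 mm receiver half-width, 30 deg acceptance half-angle.
print(cpc_design(25.0, 30.0))   # concentration = 2.0, height ~130 mm
```

The truncation mentioned in the Methodology below shortens this full height considerably at little optical cost, which is why it is used to save material.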
The rays falling on the CPC should be reflected to the receiver, so a reflective material with high reflectivity is used to reflect the rays onto the evacuated tube. Evacuated tube collectors are devices consisting of a cylindrical absorbing surface in which a vacuum is created between concentric glass tubes made of borosilicate glass [16]. The outer layer is transparent, allowing the light rays to pass through with minimal reflection. The inner layer is coated with a special selective coating (Al-N/Al) [17], which provides excellent solar radiation absorbing properties. The evacuated tube absorbs the solar energy reflected from the CPC and converts it into heat [18]. The vacuum thus acts as an insulator, minimizing heat loss. To maintain the vacuum between the two glass layers, a barium getter is used. This barium layer absorbs any CO, CO2, N2, O2, H2O and H2 out-gassed from the evacuated tube during storage and operation, thus maintaining the vacuum. A heat pipe is a heat-transfer device that transfers heat between two solid interfaces. It works on the principles of both thermal conductivity and phase transition [8]. Due to the very high heat transfer coefficients for boiling and condensation, heat pipes are highly effective thermal conductors. Therefore, when the heat pipe is introduced into the evacuated tube there is more heat transfer, resulting in a high-efficiency solar thermal collector. Nanofluids can be used inside the heat pipe to increase the heat transfer [9]. Nanofluids such as CuO and graphene oxide nanofluids can be used inside the heat pipe [14]. Methodology The various materials used and the methods used to make the different parts are detailed below. Components The components of the CPC are listed below. 2.2.1 CPC The CPC is used for concentration and illumination; its side walls are lined with reflective material. The base of the CPC is made of wood. The CPC dimensions were computed with the help of MATLAB software, and the truncation is done based on the experiment conducted by M.M. Isa [15], which results in a reduction of material size for economy. The outline of the CPC is drawn on the plywood and cut with the help of a cutting wheel. The plywood acts as the supporter of the CPC and holds the sheet metal, evacuated tube and heat pipe. The performance of the CPC increases with the light intensity, which was proved by the experiment conducted by Ankur Geete. In addition, Giovanni Casino tested a CPC with a larger acceptance angle (>30°), which shows better results depending on the solar intensity. Evacuated tube The solar absorber tube has two concentric glass tubes closed at one end, with an annular vacuum space and a selective surface absorber on the outer surface (vacuum side) of the inner tube. In heat pipe evacuated tube collectors, a sealed heat pipe, usually made of copper to increase the collector's efficiency at cold temperatures, is attached to a heat-absorbing reflector plate within the vacuum-sealed tube. Heat Pipe. The manufacturing of the heat pipe involves the selection of materials, cleaning, checking for leaks, creating the vacuum, filling the working fluid, instrumentation work and testing of the heat pipe. The material selected for the heat pipe is copper because of its high thermal conductivity, resistance to corrosion and high melting point; copper allows heat to pass through it quickly.
Therefore, it is used in quick heat transfer applications. Cleaning the metal is necessary to remove any grease and impurities associated with it; the metal is placed in a pool of hydrochloric acid and sodium dichromate, which removes the grease and impurities. The copper tube is checked by passing helium through it; the flow of helium reveals any cracks and holes present in the copper pipe, including micro-cracks. Then the vacuum is created inside the copper tube with the help of a vacuum pump, which sucks out the air and maintains the pressure inside the copper pipe at 0.0004 bar. Acetone is then filled into the copper pipe to 40% of its total volume, and the pipe is sealed with the help of a hydraulic press. Thermocouple wires were fixed to the copper heat pipe to check the temperature at various points: they were welded to the surface of the heat pipe, sealed with silicone paste, and connected to the temperature indicator.

Heat pipe specifications: thickness 1 mm; length 1400 mm; working fluid acetone.

The working fluid used here is acetone, a colorless, volatile, flammable liquid with the chemical formula CH3-CO-CH3. The boiling point of acetone is about 56°C. The pH value of acetone is 7, i.e., it is neutral, neither acidic nor basic. Acetone has excellent compatibility with copper, and its low boiling point is suitable for heat pipes.

Aluminium foil. The aluminium foil is the reflective material used to reflect the incident radiation onto the receiver tube. The reflectivity of the material is important, and it should not absorb the incident radiation. The foil is made of Al8011-type aluminium material.

Fabrication process

The fabrication process was carried out with the help of the conventional machines available; the components and the methods used to fabricate them are discussed in the table. Mild steel pipes of 1-inch size were procured and cut using a cutting wheel, and the stand was made by welding the parts. The CPC structure was then drawn on the plywood and cut using a cutting wheel. The sheet metal was placed over the plywood and screwed on. The evacuated tube was placed on the CPC and held with the help of a clamp. The reflective material was fixed above the CPC. Then the heat pipe was placed inside the evacuated tube, and the thermocouples were attached to the temperature indicator. The base of the CPC was placed above the stand, and a screw rod was fixed to adjust the angle of the CPC.

Testing methodology

2.5.1 Mass flow rate. Various mass flow rates of 0.016 kg/s, 0.025 kg/s, 0.033 kg/s, 0.041 kg/s and 0.05 kg/s were tested and the corresponding temperatures recorded. The water gains more heat when the mass flow rate is minimum; when the mass flow rate is maximum, the heat is transferred to a larger volume of water, which results in a lower temperature rise (a sketch of the corresponding efficiency calculation is given at the end of this section).

Tilting angle. The effect of the solar tilt angle on energy output may be up to 20% compared to flat plate collectors. Thus, various tilting angles of 10°, 15°, 20°, 30° and 45° were tested and the corresponding results plotted. The optimum tilting angle depends on the latitude and altitude of the location; the optimum angle for Coimbatore is 11°, facing south, and this is the angle at which the efficiency of the CPC is highest. The CPC is placed facing south because that is generally the direction from which it receives the most sunlight: India lies in the northern hemisphere, and due to the tilt of the earth and its elliptical orbit around the sun, the sun apparently moves from east to west across the southern portion of the sky, so roofs have high exposure to the sun from the south.
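As a minimal, hypothetical sketch (not taken from the paper) of how the thermal performance at each mass flow rate can be evaluated, the snippet below computes the useful heat gain Q = ṁ·cp·ΔT and the instantaneous collector efficiency η = Q/(I·A); the aperture area and the sample temperature rises are assumed values for illustration only.

```python
WATER_CP = 4186.0  # specific heat of water, J/(kg*K)

def collector_efficiency(m_dot, t_in, t_out, irradiance, aperture_area):
    """Instantaneous thermal efficiency of the collector.

    m_dot         -- water mass flow rate (kg/s)
    t_in, t_out   -- water inlet/outlet temperatures (deg C)
    irradiance    -- solar radiation intensity on the aperture (W/m^2)
    aperture_area -- CPC aperture area (m^2), assumed here
    """
    q_useful = m_dot * WATER_CP * (t_out - t_in)   # useful heat gain, W
    return q_useful / (irradiance * aperture_area)

# Assumed sample readings at the tested flow rates (illustrative only)
area = 0.5  # m^2, hypothetical aperture area
for m_dot, dt in [(0.016, 3.6), (0.025, 2.4), (0.033, 1.9), (0.041, 1.55), (0.05, 1.3)]:
    eta = collector_efficiency(m_dot, 30.0, 30.0 + dt, 1000.0, area)
    print(f"m_dot = {m_dot:.3f} kg/s -> efficiency = {eta:.2%}")
```

The assumed numbers reproduce the qualitative trend described above: higher flow rates give a smaller temperature rise for a similar amount of collected heat.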
Results and Discussion

The CPC was designed and fabricated as explained above, and the testing was done according to the testing methodology. The overall specifications, economic analysis and test results are detailed below.

Solar radiation intensity

The radiation intensity has a direct impact on the thermal efficiency of the CPC: if the radiation intensity is high, the thermal efficiency of the CPC is also high. The solar radiation intensity varies with time; it increases steadily up to 14:00 hrs and then decreases gradually. The performance of the CPC was evaluated by exposing it to indoor and outdoor conditions. The CPC was placed facing south and kept at different tilting angles of 0°, 11°, 30° and 45°; the temperature obtained at each tilting angle was noted and the most effective one found. The radiation intensity over the testing time period is shown in figure 5. The efficiency of the CPC increases greatly with high radiation intensity. The intensity of radiation is not constant; it differs each day due to atmospheric conditions. The maximum radiation intensity observed during the experiment was 1050 W/m², and the average intensity was about 933 W/m².

Figure 5. Solar radiation intensity

The heat pipe requires some time to reach its start-up temperature. When the heat pipe attains its start-up temperature, the acetone evaporates and carries the heat to the condenser portion; the acetone then condenses and returns to the evaporator portion. The phase change from liquid to vapor and from vapor to liquid takes place in a cycle, and the heat is transferred to the water flowing through the condenser portion (a rough estimate of the heat carried by this cycle is sketched at the end of this section). The heat absorbed by the heat pipe over time is shown in figure 6. Four thermocouples were used, representing the temperatures at the various portions described in the table.

Temperature of the condenser portion

The evaporator and condenser portions of the heat pipe were fitted with thermocouples, so the temperature indicator shows the temperature at the various portions. The temperature was noted at intervals of two minutes, and then the water flow was started. The reading was taken again after changing the tilting angle, and this was repeated throughout the experiment.

Figure 6. Temperature rise in heat pipe

Performance of CPC at varying tilting angles

The heat absorbed by the CPC at tilting angles of 0°, 11°, 30° and 45° was compared. The results show that increasing the tilting angle increases the efficiency of the CPC [19]. At high tilting angles, the heat absorbed by the CPC increases uniformly up to 13:00 hrs and then the efficiency decreases. At some angles the efficiency is nearly constant throughout the time period; the angle with constant efficiency throughout the day is chosen as the optimum tilting angle. The results show that the optimum tilting angle is nearly 11°, as shown in figure 8. The CPC faces south because this direction receives more heat throughout the day. The optimum tilting angle varies with the latitude and altitude. The efficiency can be increased further by using a tracking mechanism.
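As a rough, hypothetical illustration of the evaporation-condensation cycle described above (not a calculation from the paper), the snippet below estimates the heat delivered to the condenser from an assumed acetone vapor mass flow, using an assumed textbook latent heat of vaporization for acetone of about 5.0 × 10⁵ J/kg near its 56°C boiling point.

```python
ACETONE_H_FG = 5.0e5  # latent heat of vaporization near 56 C, J/kg (assumed textbook value)

def heat_pipe_transport(vapor_mass_flow):
    """Heat carried by the acetone phase-change cycle, in watts.

    vapor_mass_flow -- acetone evaporation rate inside the heat pipe (kg/s)
    """
    return vapor_mass_flow * ACETONE_H_FG

# Assumed example: 0.4 g/s of circulating acetone carries ~200 W to the condenser
print(f"{heat_pipe_transport(0.4e-3):.0f} W")
```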
Figure 7 shows the performance of the CPC at a tilting angle of 0°: the temperature of the CPC increases steadily up to 14:30 hrs and then decreases. Figure 9 shows the performance of the CPC at a tilting angle of 30°: at 30° the temperature increases steadily and then decreases after 14:00 hrs, because as the sun moves towards the west the intensity of radiation falling on the CPC decreases. At 45° the temperature reaches its maximum value by 14:00 hrs and then drops rapidly, because the radiation falling on the CPC is minimal after 14:00 hrs. Figure 10 shows the thermal performance of the CPC at 45°.

Figure 8. Thermal performance of CPC at 11° tilting angle

In the above chart, the CPC receives less heat at first; the temperature of the CPC then increases to its maximum value at 13:30 hrs and starts to drop. During 9:00 to 13:00 hrs the efficiency is similar to that at the other tilting angles, but after 13:00 hrs the efficiency of the CPC is higher at 11°. Thus, the overall efficiency of the CPC is highest at a tilting angle of 11°. The recorded thermocouple readings (T1-T4) over the day were, in °C:

T1: 37, 37, 38, 38, 39, 41, 42, 42, 41, 39
T2: 101, 102, 115, 123, 128, 142, 149, 147, 140, 132
T3: 98, 100, 114, 122, 127, 139, 145, 143, 138, 127
T4: 68, 78, 86, 91, 118, 131, 137, 135, 129

Figure 9. Thermal performance of CPC at 30° tilting angle

The experiment shows that at an angle of 11° the CPC receives heat throughout the experimental period, and the overall efficiency is high for the whole day. At 30° and 45° the CPC receives more heat up to 14:00 hrs, after which the heat absorbed by the CPC drops gradually. Thus, the optimum tilting angle for the compound parabolic concentrator thermal collector is 11°.

Thermal performance of CPC at indoor conditions

The thermal performance of the CPC at indoor conditions was evaluated, as represented in figure 11. A tungsten-halogen light acts as the light source for the CPC; it produces more heat, and its radiation intensity is more than 2000 W/m². The indoor testing does not reproduce the results obtained in outdoor conditions. The halogen light intensity can be varied by varying the supply voltage.
Secretion of a proteolytic anticoagulant by Ancylostoma hookworms

Hookworms of the genus Ancylostoma secrete an anticoagulant that both inhibits the clotting of human plasma and promotes fibrin clot dissolution. This anticoagulant activity is attributable to a 36,000 dalton proteolytic enzyme. The protease can degrade fibrinogen into five smaller polypeptides that intrinsically have anticoagulating properties, convert plasminogen to a mini-plasminogen-like molecule, and hydrolyze a synthetic peptide substrate with specificity for elastolytic enzymes. It is hypothesized that the parasite uses this enzyme to prevent blood clotting while feeding on villous capillaries.

BY PETER J. HOTEZ. From the Laboratory of Medical Biochemistry, The Rockefeller University, New York 10021

Human hookworm disease, a clinical condition caused by Ancylostoma duodenale or Necator americanus infection, affects up to 630 million people in the developing world (1). Using their buccal cavities and hooklike teeth, the adult parasites attach themselves to villi in the small intestine. Each worm can then extract up to 0.20 ml of blood per day, causing intestinal blood loss and ultimately iron-deficiency anemia and hypoalbuminemia (2,3). To date, the biochemical mechanism by which hookworms prevent blood coagulation while feeding remains unexplained. Previous studies have shown that extracts of the dog hookworm Ancylostoma caninum can prolong prothrombin time (PT) (4-6), with variable effects on partial thromboplastin time (PTT) (4-6), and interfere with collagen- or ADP-induced platelet aggregation, as well as inhibit the action of factor Xa (6). The recent finding of a proteolytic enzyme with anticlotting properties from the giant leech Haementeria ghilianii (7) led us to examine whether a similar proteolytic anticoagulant exists in the Ancylostoma hookworms. The data presented here suggest that the Ancylostoma hookworms secrete a 36,000 dalton protease which both interferes with fibrin clot formation and promotes fibrin clot dissolution. This proteolytic enzyme could be critical for continuous exsanguination from villous capillaries and therefore represents a potential target for immunological intervention.

Materials and Methods

Hookworms. Third-stage infective filariform (L3) larvae of A. duodenale were the gift of Dr. Gerhard Schad, University of Pennsylvania, School of Veterinary Medicine, Philadelphia (8). Briefly, 1,000-1,500 L3 larvae were administered to 10-wk-old beagles reared helminth-naive (White Eagle Laboratories, Doylestown, PA) and immunocompromised with a daily oral dose of 5 mg prednisolone. 42 d after infection, the entire length of the small intestine was removed, slit longitudinally, and suspended in 0.85% NaCl at 37°C. Within 2 h the majority of adult worms released their grasp and were collected at the bottom of the cylinder. The living worms were individually rinsed in saline and were either used immediately or stored at -80°C. L3 larvae of A. caninum were initially the gift of Dr. G. Schad, but were later cultured from embryonated eggs in the feces of infected pups (9). The L3 larvae (1,500-2,000) were administered to mongrel pups, aged 2-12 mo.

Electrophoretic Separation of Proteolytic Activities. The protein composition of the homogenates and ES products was analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) (12) after silver staining (13). Proteolytic activity in the gels was visualized by a modified procedure of Granelli-Piperno and Reich (14).
Aliquots of hookworm homogenates or ES products were added to 20 µl of a buffer containing 10% glycerol, 3% SDS, and 0.0625 M Tris-HCl buffer, pH 6.8, and placed in an 80°C bath for 30 s. Samples were loaded onto 10% SDS-polyacrylamide slab gels not more than 0.75 mm in thickness and subjected to electrophoresis. After electrophoresis, the gel was gently rocked in 2.5% aqueous Triton X-100 for 40 min at 22°C, rinsed thoroughly with distilled water, and then rocked in distilled water for an additional 30 min. At this time the gel was removed and overlaid onto an agar plate containing casein (5.

¹²⁵I-Fibrinogen Plates. Multi-well Linbro plates (Linbro Scientific, Inc., Hamden, CT) coated with ¹²⁵I-fibrinogen were prepared by the method of Unkeless et al. (15). Radioactivity in solution was determined with a Packard Auto-Gamma Scintillation Spectrometer (model 3002; Packard Instrument Co., Downers Grove, IL).

Electrophoretic Separation of Fibrinogen Fragments. A solution of bovine fibrinogen (6.7 mg/ml) was incubated at 37°C in the presence of ES products (0.2 mg/ml). As controls, an equal amount of fibrinogen was incubated either alone or with plasmin. At the indicated times aliquots were removed, added to buffer containing 10% glycerol, 3% SDS, and 0.0625 M Tris-HCl buffer, pH 6.8, and boiled for 5 min. The samples were subjected to SDS-PAGE.

Anticoagulant Activity of ES-generated Fibrinogen Degradation Products. Bovine fibrinogen (6.7 mg/ml) was incubated either alone or with 0.01 vol of ES products (0.20 mg/ml protein) for 12-24 h at 37°C. PTs were measured with various concentrations of citrated plasma and either ES-generated fibrinogen degradation products or fibrinogen alone.

Electrophoretic Separation of Plasminogen Fragments. Plasminogen was purified from human plasma by affinity chromatography (16). The plasminogen (2.3 mg/ml) was incubated at 37°C in the presence of ES products (0.05 mg/ml protein). As controls, an equal amount of plasminogen was incubated either alone or with 1.0 mU urokinase. At the indicated times aliquots were removed, added to buffer containing 10% glycerol, 5% 2-mercaptoethanol, 3% SDS, and 0.0625 M Tris-HCl buffer, pH 6.8, and boiled for 5 min. The samples were subjected to SDS-PAGE.

Assessment of Elastolytic Activity. Elastolytic activity of hookworm homogenates and ES products was determined using a synthetic peptide substrate covalently linked to a fluorescent leaving group (17).

Results

Clotting Times. To determine whether Ancylostoma hookworms had an effect on fibrin clot formation, homogenates of A. duodenale were added to samples of normal citrated human plasma that were then assayed for PT and PTT. The addition of aliquots of the homogenates prolonged PT and PTT in a concentration-dependent manner (Fig. 1). A prolongation of PT was also observed with ES products of A. caninum.

Electrophoretic Separation of Proteolytic Activity. The composition of A. caninum homogenates was analyzed by separation on SDS-PAGE (Fig. 2 A). In lane a, ~40 major bands appeared after silver staining. Since it was suspected that Ancylostoma hookworms might secrete their anticoagulant, ES products were also analyzed. Compared with the crude homogenates, ES products contained fewer proteins (Fig. 2 A, lane b). All 12 of the secreted proteins could be identified in the homogenates of the adult worms. The previous finding of an anticlotting protease from the leech (7) led us to investigate the possibility that some of the protein bands in Fig. 2 A might have proteolytic activity.
To analyze these proteolytic components, both A. caninum homogenates and ES products were separated on SDS-PAGE and overlaid onto casein agar (Fig. 2 B). Examination of proteolytic activity in the homogenates revealed seven bands (lane a), including three major components at 31,000, 36,000, and 40,000 daltons. Smaller amounts of proteolytic activity were associated with molecular mass bands at 21,000, 71,000, 91,000, and 102,000 daltons. Homogenates of A. duodenale showed a similar pattern of proteolytic activity (data not shown). In contrast, only a single band of proteolytic activity was found in the ES products of A. caninum (Fig. 2 B, lane b). The apparent molecular mass of this protease was 36,000 daltons.

Anticlotting Properties of ES Products. A number of experiments were undertaken to determine the specificity of the proteolytic activity which could account for the anticlotting effect. Using multi-well plates coated with ¹²⁵I-fibrinogen, both homogenates and ES products of A. caninum were observed to degrade the radiolabeled fibrinogen coated on the plates (Fig. 3). The amount of fibrinogen degraded was proportional to the amount of homogenate or ES products added, and showed no significant amplification by the addition of plasminogen (data not shown). This latter experiment rules out the possibility that the protease was acting as a plasminogen activator. The ability of ES products to degrade fibrinogen was also demonstrated using SDS-PAGE under nonreduced conditions (Fig. 4). ES products (lane b) catalyzed the degradation of fibrinogen (Fig. 4, lane a) to five major components of molecular mass 223,000, 204,000, 156,000, 122,000, and 80,000 daltons (lanes d and e), and a minor component at 61,000 daltons (lane e). The molecular mass of fibrinogen incubated alone at 37°C remained unchanged throughout the experiment, and the molecular masses of the fibrinogen degradation products were different from those observed with plasmin-catalyzed degradation of fibrinogen (data not shown). The fibrinogen degradation products resulting from ES digestion by themselves increased PT. When 50 µl of fibrinogen (6.7 mg/ml), which had been previously incubated for 12-24 h at 37°C with 0.01 vol of ES products, were added to citrated plasma, the PT was prolonged 80%, as compared with a 30% prolongation with 50 µl of fibrinogen (6.7 mg/ml) incubated alone under similar conditions. In addition to direct fibrinogenolysis, ES products also catalyzed the cleavage of plasminogen (Fig. 5). After a 1 h incubation with plasminogen, two polypeptides of 40,000 and 58,000 daltons were formed (Fig. 5, lane b). The molecular mass of the smaller fragment is similar to that reported for mini-plasminogen (18), which is formed by elastase digestion of plasminogen. Note that human plasminogen contains two major components, plasminogen a and b, which have slightly different molecular masses (19). This cleavage by ES products was in contrast to the incubation of plasminogen with urokinase, which resulted in two fragments of 68,000 and 28,000 daltons, corresponding to the heavy and light chains of plasmin, respectively (data not shown). Plasminogen alone showed no degradation during this incubation.

Elastolytic Properties of ES Products. The catalytic cleavage of plasminogen to a mini-plasminogen-like fragment suggested that ES products might have a proteolytic activity with elastolytic properties. Both hookworm homogenates and ES products could hydrolyze the synthetic substrate meosucc-ala-ala-pro-val-AMC, which has specificity for elastolytic enzymes (17).
The specific activity for the hydrolysis of the substrate (20 µM) at 37°C was 0.02 nmol of AMC released/min/mg protein for the homogenates and 0.21 nmol of AMC released/min/mg protein for the ES products. This is comparable to 21 nmol of AMC released/min/mg protein obtained using commercially purified porcine elastase (Elastin Products Co., Pacific, MO). The low activity observed for the purified enzyme reflects the suboptimal synthetic substrate concentration (17) and the pH conditions (pH 7.0 instead of the optimal pH 8.8) used for the experiment. The time course of secretion in vitro by A. caninum of the elastolytic-like protease was followed using the synthetic substrate (Fig. 6). A. caninum hookworms secrete the protease in a linear fashion during the first 9 h in vitro. Subsequently, the amount of protease released decreases, probably reflecting a decrease in the viability of the worms. This increase in elastolytic activity with time in vitro was paralleled by an increasing intensity of the zone of lysis at 36,000 daltons on SDS-PAGE with casein agar (data not shown).
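To illustrate the arithmetic behind these specific activities, here is a minimal, hypothetical sketch (not part of the original study) of how nmol AMC released/min/mg protein could be computed from a fluorometric time course; the readings below are assumed values chosen to reproduce the ES figure reported above.

```python
def specific_activity(amc_released_nmol, minutes, protein_mg):
    """Specific activity in nmol AMC released per min per mg protein."""
    return amc_released_nmol / (minutes * protein_mg)

# Assumed example: 0.63 nmol AMC released over 30 min by 0.1 mg of ES protein
print(specific_activity(0.63, 30.0, 0.1))  # -> 0.21, matching the reported ES value
```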
Discussion

Evidence has been presented that the Ancylostoma hookworms secrete a 36,000 dalton proteolytic enzyme. It is hypothesized that this secreted protease may have a role related to the antihemostatic mechanism of the hookworm. Recently, anticoagulant activity from the giant leech Haementeria ghilianii (7) and the bacterium Streptococcus faecalis (20) has been attributed to a proteolytic enzyme that not only inhibits the clotting of plasma, but also dissolves previously formed fibrin clots. We examined the possibility that Ancylostoma ES products function in a similar manner. In this way proteolysis may act concurrently with the previously reported anticoagulating properties of hookworm extracts, namely the inhibition of platelet aggregation (6) and factor Xa activity (6), to ensure free blood flow into the buccal cavity and alimentary canal of the parasite. In addition to the protease described above, other proteases with anticlotting activity might also be present in the ES products. This possibility exists since the casein-lysis technique utilized for the identification of the protease only detects proteolytic enzymes that are reactivated by removal of SDS. The anticoagulant effects of hookworm ES products on plasma can be explained, in part, by a direct action on fibrinogen to produce nonclottable derivatives. When fibrinogen was incubated with ES products for various times, breakdown products ranging between 80,000 and 223,000 daltons were observed with SDS-PAGE. These products did not resemble the well-characterized fragments observed with plasmin degradation (fragments X, Y, D, and E). Degradation of fibrinogen was also observed when ¹²⁵I-fibrinogen coated on multi-well plates was incubated with ES products. These fibrinogen degradation products also increased PT. In addition to direct fibrinogenolysis, ES products cleaved plasminogen to two fragments of 58,000 and 40,000 daltons. The smaller fragment resembles mini-plasminogen, a product formed by leukocyte elastase digestion of plasminogen, which in turn can be activated by urokinase to form mini-plasmin. Mini-plasmin is not readily inhibited by α2-antiplasmin, the major physiologic inhibitor of plasmin in plasma (18,21), and may represent an asset to the hookworm, whose survival necessitates destruction of a fibrin clot.

This catalytic conversion of plasminogen to a mini-plasminogen-like fragment led us to investigate whether the ES protease had elastolytic properties. The protease was found to hydrolyze meosucc-ala-ala-pro-val-AMC, a synthetic peptide substrate for elastolytic enzymes (17). The quantity of elastolytic-like activity released in vitro by the hookworms increased with time. From a medical and veterinary standpoint, the proteolytic anticoagulant of the hookworm represents a unique feature available to natural and induced immunological intervention. In support of this is the fact that dogs with repeated infections became immune to A. caninum infection (25), and sera from these dogs neutralized proteolytic enzyme activity in esophageal extracts from the parasite (26,27). Presumably, during feeding ES products are introduced into the host and elicit protective antibodies. It is possible that these protective antibodies inhibit the 36,000 dalton ES protease, block the antihemostatic mechanism, allow clot formation, and starve the parasite. This possibility is under investigation.

Summary

Hookworms of the genus Ancylostoma secrete an anticoagulant that both inhibits the clotting of human plasma and promotes fibrin clot dissolution. This anticoagulant activity is attributable to a 36,000 dalton proteolytic enzyme. The protease can degrade fibrinogen into five smaller polypeptides that intrinsically have anticoagulating properties, convert plasminogen to a mini-plasminogen-like molecule, and hydrolyze a synthetic peptide substrate with specificity for elastolytic enzymes. It is hypothesized that the parasite uses this enzyme to prevent blood clotting while feeding on villous capillaries.
Melanoma Associated Leukoderma: Case Series and Literature Review

Vitiligo is an acquired achromia linked to an autoimmune destruction of melanocytes. One of its mysterious aspects is its occurrence with melanoma, known as melanoma-associated leukoderma (MAL). The objective of the study is to shed light on the clinical aspects of MAL for a better understanding, while providing a comprehensive review of the literature. We retrospectively analysed the clinical characteristics of 12 patients with MAL from 2016-2021 and compared our findings to those reported in the literature. Our series illustrates different situations where vitiligo is linked to melanoma. None of our patients had a positive family history of vitiligo. The median age was 68 years, with extremes of 90 and 36 years; 10 patients had their MAL located on photo-exposed areas. Clinically, MAL presented as diffuse, macular achromic patches located primarily at sites distant from the primary melanoma, notably on the trunk, legs and face, with a late age of onset. No histological particularities as opposed to vitiligo were found. Given the clinical similarities of these achromias with conventional vitiligo, a more thorough clinical examination for melanoma in patients with vitiligo seems to be crucial. Special attention is needed for older patients presenting with late-onset, very progressive vitiligo-like lesions refractory to standard treatment.

Introduction

The simultaneous presence of melanoma and vitiligo in the same patient is considered a medical paradox, given that the former is characterized by a massive irregular proliferation of atypical melanocytes within the epidermis, while the latter results from a progressive loss of functional epidermal melanocytes. This article provides an approach to the clinical features of melanoma-associated leukoderma (MAL) for a better understanding of this entity, while providing a comprehensive review of the literature.

Material and Methods

In this case series, we retrospectively analysed the clinical presentation, type of depigmentation, and disease course of patients with MAL who were diagnosed at the Dermatology Department of IBN ROCHD University Hospital in Casablanca from 2016-2021. As no approved definition of MAL currently exists, we arbitrarily defined MAL as any achromic lesion present before, concomitantly with, or after the diagnosis of melanoma. When other causes of leukoderma were suspected, a biopsy was performed to rule out differential diagnoses. Patient characteristics, including demographic, clinical, pathological, and follow-up data, were anonymously extracted from the patients' medical records. This study was carried out in accordance with the principles set out in the Declaration of Helsinki and local ethical guidelines (Ethics Committee for Biomedical Research, Faculty of Medicine and Pharmacy, Casablanca, Morocco). As no procedures other than standard of care and anonymized, observational data analysis were performed during the study, no additional ethics committee approval was necessary.

Results

During the study period, 130 patients with melanoma were hospitalized, and twelve patients with MAL were identified, giving a rough prevalence of 9.2%. All of our patients were of phototype IV; the median age was 68 years.
A bilateral and symmetrical pattern was found in ten patients, and the distribution was mostly generalized to the face, trunk, back and legs; however, in one patient with acral melanoma on his right foot, the MAL was mostly distributed on the same side as the malignant tumor (Figure 1d). All of our patients had their MAL located at a distance from their primary melanoma, although in one patient (Figure 1a) MAL presented as a white halo surrounding the melanocytic lesion. Ten patients had their MAL located on photo-exposed areas, and two on the back. None of them had a positive family history of vitiligo. The clinical presentation consisted mostly of well-demarcated achromic patches. Lesions were generally refractory to topical steroids and UV phototherapy. Of the twelve patients, five had MAL prior to melanoma, five after the onset of melanoma, one following interferon treatment, and in one patient both diseases appeared concomitantly. Nine of the twelve patients had stage IV melanoma, four of whom developed MAL after the malignant diagnosis. The five patients (Figure 2a, Figure 2b, Figure 2c, Figure 3b, Figure 3d) with MAL diagnosed prior to melanoma were staged IB, IA, and IV for the last three patients, respectively. Extreme latency periods, ranging from 10 years before to 10 years after the melanoma diagnosis, were noted. In the setting where MAL preceded melanoma (5 patients), none of our patients' consultations was motivated by the appearance of the vitiligo-like lesions except for one (Figure 2a), who was diagnosed by her dermatologist while consulting for vitiligo. Two patients (Figure 2c, Figure 3b) consulted after their melanoma became ulcerated and enlarged, and the last patient (Figure 2b) reported that she had become esthetically bothered and worried about the pigmented lesion on her face, which later proved to be lentigo maligna melanoma (LMM). Histological analysis was performed in eight patients, showing a total absence of functioning melanocytes in the lesions, in keeping with vitiligo. The clinical characteristics of these patients are summarized in Table 1.

Discussion

The incidence of MAL seems to be very low: Koh et al. [1] reported only eight cases in 14 years, and Schallreuter et al. examined 623 patients with melanoma to find MAL in only 23 cases (3.7%) [2]. Several studies showed that spontaneous MAL in individuals with melanoma is significantly more common than in the general population [2,3,4]. A prospective study of 2954 patients with melanoma of all stages found the prevalence of vitiligo was 2.8%, compared with 0.4-2% in the greater population [5]. Paradoxically, a series of 1052 vitiligo patients revealed only 3 cases of melanoma (0.3%), which is a lower incidence of melanoma than in the general population [6]. Immunotherapy probably increases the incidence of vitiligo associated with melanoma [4,7]. MAL can spontaneously precede or follow the onset of melanoma, or, more commonly, occur following treatment [4,6,7,8,9]. Although in 79.5% of cases MAL is diagnosed after the onset of melanoma [5], leukoderma can be a premonitory symptom occurring months to years before the diagnosis of the malignancy is made. Different forms of MAL have been reported in the literature: (1) a white halo surrounding the melanocytic lesion (Sutton's nevus), (2) achromic patches located in the melanoma scar, and (3) complete or partial regression of the melanoma [10]. Rarely, (4) MAL manifests as white patches distant from the primary lesion, arising either spontaneously or following immunologic-based treatments.
Our series illustrates different situations where vitiligo is linked to melanoma. In Table 2, we present a comparative analysis of some case series of MAL reported in the literature. Many of these series considered MAL a side effect linked to good treatment response; however, in our patients, all but one (Figure 1b) had their MAL appear independently of any treatment, which may suggest that MAL, besides being a therapeutic goal, is also an independent indicator of the effectiveness of autoimmunity against melanoma. Several studies have sought to clarify whether vitiligo and MAL are distinct clinical entities [4,8,11,12]. The clinical, histological, and immunohistological differences between MAL and classic vitiligo are not well established. Vitiligo is triggered by both genetic and environmental factors, whereas MAL is triggered by the presence of melanoma, with the heavy consequence that patients misdiagnosed as having vitiligo may later develop melanoma metastases. In MAL, a lack of family history of vitiligo or atopy, an advanced age of onset, a predominance in photo-exposed areas and a generalized distribution are found to be discriminative features [8,11,12]. Notwithstanding, histological and immunohistological differences have not been found. Accordingly, none of our patients had a positive family history of vitiligo. The median age of our patients was 68 years, with extremes of 90 and 36 years, and most of our patients (10/12) had their MAL located on photo-exposed areas. This correlates with other case series in which a positive family history of vitiligo was absent in all patients with MAL [8,12]. In contrast, Lommerts et al. reported that 9.1% of patients with MAL had a positive family history of vitiligo [13]; patients with a history of autoimmune disease run the risk of having their MAL diagnosed as vitiligo vulgaris, without prompting the clinician to inspect further for melanoma. Lommerts et al. also reported that experts in the field blindly examined photographs of 33 patients with vitiligo and 11 patients with MAL; as a result, 80% of MAL cases were misdiagnosed as vitiligo based on clinical presentation. Therefore, the authors proposed the term melanoma-associated vitiligo (MAV), as no discriminative features were found [13]. Similarly, Hartmann et al. performed clinical, histological, and laboratory tests to evaluate the similarities and differences between MAL and classic vitiligo. MAL lesions, just like vitiligo, were most often distributed in a bilateral symmetrical pattern, but were less progressive. They reported that MAL was more often associated with other acquired leukodermas. Again, histological and immunohistological discriminative features were not found [8]. The symmetrical bilateral pattern was noted in all our patients, and the results of the biopsies we performed on the achromic patches were in keeping with vitiligo. The depigmentation is the result of a strong autoimmune anti-melanoma defense that also targets healthy melanocytes due to the shared expression of differentiation antigens. In fact, melanoma-specific cytotoxic T lymphocytes (CTLs) are able to recognize melanocyte antigens (gp100, MART-1, tyrosinase and tyrosinase-related protein-2) on both normal and atypical melanocytes. Their presence in the blood and in the skin surrounding the tumor indicates that melanoma cells do not evade the immune system [4,14,15].
The frequency of CTLs recognizing melanoma antigens appears to be higher in patients with metastatic disease than in those with primary tumors, suggesting that a higher antigen index is associated with tumor progression [11,15,16]. Recently, there have been reports of a patient who developed unusual inflammatory vitiliginous skin lesions after an infusion of MART-1-specific CTLs [15]. Further, the report of Becker et al. [17] demonstrated that the lymphocytes of the regression areas of melanomas were the very same as those of nearby hypopigmented areas. However, the presence of regression is paradoxically reported to carry a worse prognosis [18]. Rosenberg et al. reported that the majority of patients with metastatic melanoma treated with autologous tumor-infiltrating lymphocytes developed leukoderma after melanoma regression, suggesting that a large infiltration of CTLs in the blood or skin surrounding the tumor is related to a better prognosis in these patients [19]. Byrne et al. investigated the link between the destruction of melanocytes in MAL and the CTL reaction to melanoma. They uncovered that melanocyte antigens released by the process of MAL play a major role in the maintenance of the long-term functional memory T cell response against melanoma [20]. This process might explain the complete or partial regression seen in some cases. Recent studies found similar antibody patterns in patients with widespread vitiligo and in those with metastatic melanomas [21,22]. They showed that autoantibodies isolated from vitiligo patients targeted melanoma cells, suggesting that the same autoimmune mechanism is responsible for vitiligo and MAL. This suggestion needs further confirmation, given that, in contrast, Teulings et al. studied 7 patients with MAL and 27 patients with vitiligo and found that antibodies against MART-1 were only present in MAL and not in vitiligo [23]. According to the Vitiligo Global Issues Consensus Conference, MAL cannot be classified as a subtype of vitiligo [8,24]. From the literature reports mentioned above, we can deduce that the only difference between MAL and vitiligo is the absence of melanoma in the latter. The last and most interesting aspect of this association is its significance in terms of prognosis. Almost all studies demonstrated a higher incidence of metastatic disease in MAL patients than in those with melanoma of comparable (Breslow) thickness, yet their overall survival rate was higher [7,25,26,27,28]. In our series, MAL enabled early diagnosis of melanoma in only one of the 12 cases (Case 5). It has been suggested that preceding MAL could be an early warning sign of impending melanoma metastases. MAL carries a better prognosis, with an improved 5-year survival compared to melanoma patients of the same stage who do not have the associated depigmentation [4,8,20]. A retrospective study indicated that melanoma patients with concomitant leukoderma had a higher survival rate [29]; we had only one patient who developed both diseases simultaneously, and he died two years after the diagnosis of melanoma. Some authors propose topical corticosteroids with phototherapy to treat MAL, and patients should be encouraged to pursue anti-melanoma therapy despite the appearance of MAL as a side effect [30]; both of these treatments proved ineffective in our patients. Patients with MAL should probably be advised not to rush treatment for their MAL lesions, given its potentially good prognostic value.
In this review, several aspects of the melanoma/vitiligo relationship are examined, underlining the characteristics of the immune system responses shared by melanoma and vitiligo patients and the value of MAL as a probable biomarker for melanoma. We are aware that the small sample size of our case series makes it difficult to determine the statistical validity of our suggestions; therefore, further studies are necessary to better elucidate this intriguing association, as this will provide a clearer understanding of immunologic regulation in vitiligo and melanoma, which might represent the cornerstone of future therapeutic approaches to both diseases. In conclusion, clinicians should be aware of the differential diagnosis of MAL when diagnosing vitiligo, which may enable an early diagnosis of melanoma through a thorough examination of patients with leukoderma for other suspicious pigmented lesions.
Characteristics of Dust Events in China from 2015 to 2020

As the main source of dust in Asia, China often suffers from dust events. The temporal and spatial characteristics of dust events change with variations in geography, climate and human activities. Based on the criteria for selecting dust events proposed recently by the China Environmental Monitoring Station, the hourly concentrations of PM10 and PM2.5 in 336 cities in China from 2015 to 2020 were used to study the temporal and spatial characteristics of dust events more accurately and objectively. The results showed that dust events in China clearly decreased overall, but strong dust events did not decrease. There were 334 cities that experienced dust events (all except Shenzhen and Dongguan); 299 cities were seriously polluted due to dust events, 134 cities encountered dust level III, and 56 cities encountered dust level IV. High frequencies of dust events were mainly distributed in Northern China, especially Northwest China. The dust contribution to PM10 in the cities of Northwest China was more than 10%, and about 5-10% for PM2.5. The most likely month for dust was May. The starting times of dust events were bimodally distributed, with the most common starting time being 10:00-11:00 BJT, followed by 22:00-23:00 BJT. According to the PSCF (Potential Source Contribution Function) results, the potential dust source contributions of different cities mainly came from the northwest and were mainly influenced by Mongolia, in addition to local dust within China. In addition, Beijing was clearly affected by dust recirculation. This study is of great significance for improving dust weather forecasting and heavy pollution warnings caused by dust events.

Introduction

Due to the existence of the Taklimakan Desert, Gobi Desert, Badain Jaran Desert, Tengger Desert and other deserts, dust events often affect China. The dust emission capacity of the Taklimakan Desert is the highest in East Asia, with spring emissions of about 70.54 Tg/yr, accounting for 42% of the total dust emission in East Asia [1]. Dust events are a common natural phenomenon caused by special geographical and climatic conditions; their occurrence and development accelerate land desertification [2,3]. A dust event not only has a direct impact on human activity and life but also affects human health. Aragnou et al., 2021 [4] estimated that the dust storm event across the state of New South Wales in February 2019 caused four premature deaths, 161 respiratory disease hospitalizations and seven cardiovascular disease hospitalizations. Ardon-Dryer et al., 2019 [5] investigated the impact that particles from dust storms have on human lung epithelial cells and found that intermediate dust concentrations lead to a larger fraction of dying cells compared to lower and higher concentrations. In addition, dust events have a sustained and long-term indirect impact on the ecological environment and climate. Using a series of climate model results, Kok et al., 2018 [6] found that direct dust-climate feedback accounts for a substantial fraction of the total aerosol feedbacks in the climate system. Huang et al., 2015 [7] found that East Asian dust aerosols influence cloud properties in two ways: one is as cloud condensation nuclei, and the other is by changing the relative humidity and stability of the atmosphere. The impact of a dust event on air quality should not be neglected.
Aili et al., 2021 [8] found that all pollutants, such as total suspended particulates (TSP), SO2 and NO2, were increased on strong dust storm days compared to normal days. Dust events contribute greatly to the particle concentration in Hohhot, Inner Mongolia, especially in spring [9]. Filonchyk et al., 2018 [10] found heavy aerosol pollution over the territory during sandstorms by studying a severe dust storm that occurred in Northwest China during April 2014. Therefore, the study of dust events is of great significance for the management of the ecological environment and sustainable development. There are many studies on dust events, but most are based on case analysis [11-14]. Some scholars have studied dust events based on years of data; for example, WRF-Chem was used to simulate the total dust emissions from the deserts of North Africa, the Middle East and East Asia [15]. Based on different paleoclimate archives, the relationship between dust storm events and climate change in Northern China over the past 1000 years was analyzed [16]. The major dust trajectories within seven major deserts worldwide were identified based on satellite images from 2000 to 2010 [17]. The temporal characterization of dust activity in the Central Patagonia Desert from 1964 to 2017 was studied based on surface synoptic observations and satellite aerosol detection [18]. A new empirical equation relating horizontal visibility and PM10 concentrations was proposed to reproduce the characteristics of seasonal dust over North Africa [19]. Yang et al., 2013 [20] investigated the characteristic distributions of regional dust events over Northeast Asia from 1980 to 2011 by using different meteorological data. However, compared with measured results, these studies have some weaknesses due to the data. For example, model simulation results always carry a certain error, and clouds easily affect satellite remote sensing data. There is also uncertainty, and a lack of rationality, in using meteorological data such as visibility to study dust events, since haze likewise causes heavy pollution and very low visibility. Therefore, in order to identify dust events more accurately, it is necessary to eliminate the influence of haze. The occurrence of dust events is related not only to special geographical and climatic conditions but also to human activities. A study of the characteristics of dust storms in the Ebinur Lake region of Xinjiang [21] found that agricultural acreage exhibited the strongest influence on dust storms. Wan et al., 2016 [22] found that the significantly high dust flux during 1950s-2011 was caused by increasing human activities in Northwest China. One study found that since at least 2000 years ago, the impact of human activities may have exceeded that of natural climate change on dust storms in Eastern China [23]. With the improvement of people's awareness of ecological environment protection and the introduction of environmental protection policies in recent years, ambient air quality has improved significantly. Therefore, it is imperative to update the research on the characteristics of dust events in China. For these reasons, this study focused on the temporal and spatial characteristics of dust events in China from 2015 to 2020 and aimed to investigate how the characteristics of dust events change with geography, climate and human activities.
In order to eliminate the influence of haze, the criteria for selecting dust events proposed by the China Environmental Monitoring Station were used. The potential source contribution of dust events in different regions was studied using the PSCF. In addition, on the one hand, the pollution caused by dust events reflects the actual air quality; on the other hand, it interferes with the prevention and control of local air pollution. In order to provide a scientific basis for air pollution prevention and control, it is necessary to evaluate ambient air quality more accurately and objectively. Therefore, the contribution of dust events to urban particulate concentrations was further studied. The hourly concentration data of PM10 and PM2.5 obtained from national air quality automatic monitoring stations are described in Section 2.1. In Sections 2.2 and 2.3, the methods for selecting dust events and for computing the contribution of dust events to urban particulate concentrations are briefly introduced. Section 2.4 presents the PSCF method. The temporal and spatial distribution characteristics of dust events in China from 2015 to 2020 are calculated and analyzed in Section 3. Section 4 gives the conclusions and possible future improvements to this study.

Method of Selecting Dust Events

As the transport distance of a sand storm increases, the coarse particles gradually settle and their concentration decreases, while fine particles become prominent. Similarly, the concentrations of coarse and fine particles are high in haze pollution, especially that of fine particles, which leads to confusion between pollution caused by haze and pollution caused by sand storms after long-distance transport. In order to identify dust events effectively, the starting and ending times of dust events were determined according to the notice issued by the China Environmental Monitoring Station.

Criteria for the starting time of dust events: the hourly concentration of PM10 is greater than 150 µg·m⁻³ and one of the following conditions is met: (a) the hourly concentration of PM10 is greater than or equal to 2 times the average concentration of PM10 in the previous 6 h; (b) the ratio of the hourly concentration of PM2.5 to the hourly concentration of PM10 is less than or equal to 50% of the average ratio in the previous 6 h.

Criteria for the ending time of dust events: the moment when one of the following conditions is met for the first time: (a) the hourly concentration of PM10 falls below 1.1 times the average PM10 concentration in the 6 h preceding the starting time; (b) the hourly IAQI (Individual Air Quality Index) of PM2.5 is greater than the hourly IAQI of PM10.

According to these criteria, a dust event may last for several days, or several dust events may occur in one day. Therefore, this paper stipulated that when the interval between different dust events was less than 12 h, they would be recorded as one dust event. A minimal sketch of this detection procedure is given after the next subsection.

Contribution of Dust Events to Urban Particulate Concentration

The contribution of dust events to urban particulate concentration is calculated as follows:

Contribution = (C_PMb − C_PMa) / C_PMb × 100%

where Contribution is the contribution of dust events to the urban particulate concentration, C_PMb is the average concentration of particulates over all times, and C_PMa is the average concentration of particulates excluding the periods of dust events.
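The following is a minimal, hypothetical Python sketch (not the authors' code) of the start/end criteria and the contribution formula above, operating on aligned hourly PM10 and PM2.5 series; the variable names and the simple list-based implementation are our own, and the IAQI-based ending criterion and the 12-h event-merging rule are omitted for brevity.

```python
def mean(xs):
    return sum(xs) / len(xs)

def dust_hours(pm10, pm25):
    """Flag hours belonging to dust events per the station criteria.

    pm10, pm25 -- aligned hourly concentration lists (ug/m3)
    Returns a boolean list marking in-dust hours.
    """
    n = len(pm10)
    in_dust = [False] * n
    active, base6 = False, None
    for t in range(6, n):
        prev10 = pm10[t-6:t]
        if not active:
            cond_a = pm10[t] >= 2.0 * mean(prev10)
            ratios = [pm25[i] / pm10[i] for i in range(t-6, t)]
            cond_b = pm25[t] / pm10[t] <= 0.5 * mean(ratios)
            if pm10[t] > 150.0 and (cond_a or cond_b):
                active = True
                base6 = mean(prev10)   # 6-h PM10 mean before the starting time
        else:
            # ending criterion (a); criterion (b) would need IAQI lookup tables
            if pm10[t] < 1.1 * base6:
                active = False
        in_dust[t] = active
    return in_dust

def dust_contribution(pm, in_dust):
    """Contribution (%) of dust events to the mean concentration."""
    c_all = mean(pm)                                     # C_PMb
    c_clean = mean([v for v, d in zip(pm, in_dust) if not d])  # C_PMa
    return (c_all - c_clean) / c_all * 100.0
```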
Method of PSCF

Based on the backward trajectories calculated by the HYSPLIT (Hybrid Single-Particle Lagrangian Integrated Trajectory) model, the PSCF is widely used to evaluate the transport pathways and the contributions of potential source areas of pollution [24-26]. The PSCF value is the ratio of the number of pollution trajectory endpoints (m_ij) to the number of all trajectory endpoints (n_ij) falling in grid cell (i, j). That is,

PSCF_ij = m_ij / n_ij

The higher the PSCF value, the greater the pollution contribution to the receiving point. In order to study the potential source contributions of sand storms affecting densely populated cities, eight cities, namely Urumqi, Lhasa, Hohhot, Beijing, Taiyuan, Lanzhou, Xining and Xi'an, were selected for the PSCF calculation. The start locations for the PSCF are shown in Table 1. The run time for all dust events was set to 48 h for convenience. The PSCF calculation requires a threshold value for the pollution factor: when the value of the pollution factor corresponding to a backward trajectory is higher than the threshold, the trajectory is considered a pollution trajectory. In this study, different thresholds were selected according to the dust intensity in different cities. The threshold value of the PM10 concentration in Urumqi and Lhasa was set to 150 µg·m⁻³, because there were very few hours with PM10 concentrations greater than 350 µg·m⁻³ when dust occurred there. The threshold value of the PM10 concentration in Hohhot, Beijing, Taiyuan, Lanzhou, Xining and Xi'an was set to 350 µg·m⁻³. Because the PSCF value is a conditional probability, when the residence time of airflow in a grid cell is short, the PSCF value fluctuates greatly, which increases the error. In order to reduce this error, a weight function W_ij was introduced to minimize the uncertainty; W_ij is a piecewise function of n_ij whose segment values follow [27]. The weighted PSCF is therefore

WPSCF_ij = W_ij × PSCF_ij

The meteorological data for the PSCF were obtained from the GDAS (Global Data Assimilation System) provided by NCEP (National Center for Environmental Prediction). GDAS assimilates the following types of observations onto a 3-D model grid: surface observations, balloon data, wind profiler data, aircraft reports, and buoy, radar and satellite observations. In this paper, the grid resolution is 0.5° and the time resolution is 6 h.
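As a minimal, hypothetical sketch (our own, not the authors' code) of the WPSCF computation, the snippet below bins back-trajectory endpoints onto a 0.5° grid and down-weights cells with few endpoints; the piecewise weight values shown are common choices from the PSCF literature and are assumptions here, since the paper's exact segment values come from [27].

```python
import numpy as np

def wpscf(endpoints, polluted, lon0, lat0, nx, ny, res=0.5):
    """Weighted PSCF on a regular lon/lat grid.

    endpoints -- array (N, 2) of trajectory endpoint (lon, lat)
    polluted  -- boolean array (N,): endpoint belongs to a trajectory whose
                 receptor PM10 exceeded the chosen threshold
    lon0,lat0 -- lower-left corner of the grid; nx, ny -- cell counts
    """
    n_ij = np.zeros((ny, nx))
    m_ij = np.zeros((ny, nx))
    ix = ((endpoints[:, 0] - lon0) / res).astype(int)
    iy = ((endpoints[:, 1] - lat0) / res).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for x, y, p in zip(ix[ok], iy[ok], polluted[ok]):
        n_ij[y, x] += 1       # all endpoints in cell (i, j)
        m_ij[y, x] += p       # polluted endpoints in cell (i, j)
    pscf = np.divide(m_ij, n_ij, out=np.zeros_like(m_ij), where=n_ij > 0)
    # Assumed piecewise weight; the paper's segment values are given in [27]
    n_avg = n_ij[n_ij > 0].mean()
    w = np.select(
        [n_ij > 3 * n_avg, n_ij > 1.5 * n_avg, n_ij > n_avg],
        [1.0, 0.7, 0.42],
        default=0.05,
    )
    return w * pscf
```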
Results and Discussion

According to the criteria for the starting time of dust events, the minimum concentration of PM10 is 150 µg·m⁻³ when a dust event occurs. According to the technical regulation on the ambient air quality index (Chinese national environmental protection standard HJ 633-2012), when the concentration of PM10 is greater than 350 µg·m⁻³, the corresponding ambient air quality level reaches heavy pollution. In addition, the hourly concentration of PM10 is frequently greater than 1000 µg·m⁻³ when strong dust events occur, and may even exceed 2000 µg·m⁻³. Therefore, the dust events were classified into four levels according to the PM10 hourly concentration, named dust levels I, II, III and IV: PM10 hourly concentrations greater than 150 µg·m⁻³, 350 µg·m⁻³, 1000 µg·m⁻³ and 2000 µg·m⁻³ correspond to dust levels I, II, III and IV, respectively. After statistical calculation, 334 cities, all except Shenzhen and Dongguan, were affected by dust events in the period 2015-2020 (Figure 1a).

The frequencies of dust level I in the cities of Northern China, especially Northwest China, were significantly higher than those in the cities of Southern China, exceeding 500 in the cities of Northwest China. The area most affected by dust level I was South Xinjiang, where the frequency of dust level I in each city was about 1000. The second was North Xinjiang, the Western Hexi Corridor and Nagqu, where the frequencies of dust level I were about 500-800. These were followed by Ningxia, Inner Mongolia, Qinghai and Tibet, at about 200-300. Figure 1b shows dust level II; its main area of occurrence was consistent with that of dust level I. The difference was that the area affected by dust level II was smaller, with 299 cities affected, which indicates that the ambient air quality of most cities was seriously polluted due to dust events during 2015-2020. Figure 1c shows the dust events of dust level III. It can be seen that the affected area was significantly reduced; 134 cities had PM10 hourly concentrations exceeding 1000 µg·m⁻³ due to dust events in the period 2015-2020. The most affected city was Hotan, where 202 such dust events occurred, followed by Jiuquan, which experienced 101. The region of dust level IV was reduced further, as shown in Figure 1d, and was still distributed in Northern China, especially South Xinjiang and the Hexi Corridor. The results show that 56 cities had PM10 hourly concentrations exceeding 2000 µg·m⁻³ due to dust events from 2015 to 2020. The most affected city was again Hotan, with 78 such dust events, followed by Jiuquan, with 52. It can be seen that almost all cities in China experienced dust events, which led to severe air pollution in most cities during 2015-2020. The dust events mainly occurred in Northern China. The area with the highest dust frequency at all dust levels was South Xinjiang, followed by the Western Hexi Corridor, Ningxia, Inner Mongolia, Qinghai and Tibet. For North Xinjiang, the frequencies of dust level I were higher than those in the Western Hexi Corridor, and those of dust level II were close to those in the Western Hexi Corridor; however, the frequencies of dust levels III and IV were lower than those in the Western Hexi Corridor, which indicates that North Xinjiang was more prone to dust events than the Western Hexi Corridor, but experienced fewer strong dust events. For Tibet, the frequencies of dust levels I and II were larger, but the frequencies of dust level III were very low, and dust events of level IV never occurred from 2015 to 2020, which indicates that Tibet is prone to dust events, but strong dust events are rare there. In addition, compared with the cities of Northern China, the impacts of dust events in Southern China were clearly weaker, and the hourly concentrations of PM10 were less than 1000 µg·m⁻³ when dust events occurred.

Contribution of Dust Events to Urban Particulate Concentration

Dust events often make the concentration of particulate matter rise significantly, leading to serious air pollution. Therefore, the contribution of dust events to the particulate concentrations of the 336 cities in China was further studied. Figure 2 shows the contribution of dust events to the average concentrations of PM10 (Figure 2a) and PM2.5 (Figure 2b) during 2015-2020.
In order to highlight the cities where the influence of dust events was obvious, Figure 2 only shows the cities with a contribution of more than 1%. It can be seen that the contribution of dust events to the average concentrations of PM10 and PM2.5 in the cities of northern China was obvious, especially for the cities of Northwest China, where it was more than 10% for PM10 and about 5-10% for PM2.5. The contribution to the average concentrations of PM10 and PM2.5 in the cities of southern China was less than 1%. The most affected city was Nagqu, where the contribution to the average PM10 concentration reached 18.6%, followed by Xilingol, Jiayuguan, Jiuquan and Jinchang, at 18.0%, 16.7%, 15.5% and 14.9%, respectively. Nagqu was also the city with the largest contribution to the average PM2.5 concentration, at 16.2%, followed by Changdu, Jiayuguan and Jinchang, at 9.9%, 9.4% and 9.2%, respectively. Although Xinjiang was prone to dust events, the contribution of dust events to urban air quality there was smaller than in the cities noted above, due to the high concentration of particulates during dust-free periods. Figure 2 also indicates that the contribution of dust events to the average concentration of PM10 was significantly greater than that to PM2.5.

Figure 3 shows that the total number of dust events in China clearly decreased over the period, but the number of strong dust events did not. The interannual variations of strong dust events in the representative cities are listed in Table 2, and the locations of the representative cities are also provided in Figure 4. The frequency of strong dust events in Jiuquan was lower than that in Hotan over the period from 2015 to 2020 as a whole, but was higher than that in Hotan in 2019, with a total of 33 events. Concerning the results shown in Figure 4 and Table 2, the main reason for the increase of strong dust events during 2015-2018 was that the strong dust events in mid-eastern Inner Mongolia and North Xinjiang increased gradually. For example, the number of strong dust events in Bayan Nur and Xilingol grew by 333.33% and 275.00%, respectively, from 2017 to 2018, followed by Turpan, where the growth rate reached 55.56%. In addition, the increase of strong dust events in the western Hexi Corridor in 2019 was obvious.

Monthly Change of the Dust Event Frequencies

The monthly change of the frequency for the different dust levels from 2015 to 2020 can be found in Figure 5. It can be seen that the trends of the monthly dust event frequencies of all levels were similar, increasing gradually from January to May. The frequencies increased from 4653 to 5420 for dust level I, from 730 to 1890 for dust level II, from 52 to 448 for dust level III, and from 13 to 150 for dust level IV. From June, with the rainfall belt moving northward [28], the frequencies of dust occurrence decreased significantly: the frequencies of dust levels I and II dropped to half of those in May, and the frequencies of dust levels III and IV decreased even more markedly, to a third of those in May. The frequencies of each dust level increased gradually from September to December, but even in December they remained below the January values. Dust events were thus least frequent in August and most likely to occur in May, which may be due to the rise in temperature, the thawing of frozen soil, low precipitation, sparse vegetation and strong winds in spring [8,[29][30][31].
Frequency of the Starting Time of Dust Events

In order to explore the characteristics of the starting times of sand storms in 2015-2020, the same cities shown in Figure 4 were selected as representative cities; all of them lie near the sand sources. As shown in Figure 6, the starting times of sand storms showed an obvious bimodal distribution, and the most likely starting time was 10:00-11:00 BJT, followed by 22:00-23:00 BJT. After sunrise, with the increase of surface temperature, convective mixing in the boundary layer strengthened, and the frequency of dust onset increased gradually. The frequency reached a maximum of about 10.2% at 11:00 BJT, and then decreased gradually. In the afternoon, the frequency of sand storm onset was only 2-3%; in the evening, the frequency increased again, reaching 8.0% at 22:00 BJT, and then decreased once more. Some studies [32,33] have shown that a low-level jet (LLJ) often occurs in the stable boundary layer at night. The LLJ can produce an "upside-down" boundary layer and increase turbulent vertical mixing from top to bottom [34], which may be the reason why sand storms can start at night. From 2:00 to 7:00 BJT, the frequencies were only 1-3%.

Results of PSCF

From the distribution of the WPSCF values (Figure 7), one can see that the dust in Urumqi mainly came from western Mongolia to the northeast, Kazakhstan to the west and the Gurbantunggut Desert to the north. The potential source areas of dust in Hohhot were mainly distributed in the north of Xinjiang to the west and in Mongolia to the north. The high WPSCF values for Beijing were mainly distributed in central and western Inner Mongolia to the northwest. In addition, a small area of high WPSCF values existed to the south of Beijing, which indicates that, besides the influence of upstream transport, dust recirculation was also very common in Beijing. The potential sources of dust in Taiyuan were consistent, mainly from the northwest, affected by the Badain Jaran, Tengger, Kubuqi and Ulanbuh deserts, among others. The potential sources of dust in Xi'an were likewise mainly from the northwest; compared with Taiyuan, the potential source area lay further north and was more affected by the western region of Mongolia. In addition, the influence of dust recirculation from the south cannot be ignored. As far as Lanzhou was concerned, the potential source areas of dust were more widely distributed than for the other cities, extending from southern Xinjiang through the Qaidam Basin to Lanzhou, and along the Hexi Corridor to Lanzhou. In addition, the contribution of the Badain Jaran and Tengger deserts, superimposed on that of western Mongolia, is very clear. The potential dust sources of Lhasa, which is located on the Qinghai-Tibet Plateau, were mainly distributed in the Gobi areas to the west of Lhasa. The dust in Xining came from the west and the north: the western pathway was mainly affected by the northern Taklimakan Desert and the Qaidam Desert, and the northern pathway was mainly affected by the Hexi Corridor.

It can be seen that the dust potential source contribution for each city mainly came from the northwest and, in addition to local dust within China, was mainly affected by Mongolia. The potential sources of dust were closely related to the distribution of deserts, chiefly the Taklimakan, Gurbantunggut, Badain Jaran, Tengger, Kubuqi and Ulanbuh deserts, among others. In addition, Beijing, the capital city, was obviously affected by dust recirculation.
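For readers who want to reproduce maps like Figure 7, the sketch below computes grid-cell PSCF and WPSCF values from trajectory endpoints, following the ratio and weighting scheme given in the Methods. The input layout (arrays of endpoint latitude, longitude, and a flag marking trajectories whose receptor PM10 exceeded the threshold) is an assumption, and the weight-function breakpoints are illustrative placeholders for the values of [27].

import numpy as np

def wpscf(lats, lons, polluted, res=0.5):
    """Weighted PSCF on a res-degree grid.
    lats, lons: trajectory endpoint coordinates (1-D arrays);
    polluted: boolean array, True if the endpoint belongs to a trajectory
    whose receptor PM10 exceeded the city's threshold."""
    cells_i = np.floor(np.asarray(lats) / res).astype(int)
    cells_j = np.floor(np.asarray(lons) / res).astype(int)
    n, m = {}, {}                        # all / polluted endpoint counts
    for key, bad in zip(zip(cells_i, cells_j), polluted):
        n[key] = n.get(key, 0) + 1
        if bad:
            m[key] = m.get(key, 0) + 1
    out = {}
    for key, n_ij in n.items():
        pscf = m.get(key, 0) / n_ij      # PSCF_ij = m_ij / n_ij
        # Piecewise weight W_ij, down-weighting sparsely visited cells.
        # Breakpoints here are illustrative; the study follows ref. [27].
        if n_ij > 80:
            w = 1.0
        elif n_ij > 20:
            w = 0.7
        elif n_ij > 10:
            w = 0.42
        else:
            w = 0.05
        out[key] = w * pscf              # WPSCF_ij = W_ij * PSCF_ij
    return out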
Discussions and Conclusions

In order to investigate the temporal and spatial characteristics of dust events in recent years, the hourly observed concentrations of PM10 and PM2.5 in 336 cities in China from 2015 to 2020 were analyzed. The contribution of dust events to the urban particulate concentration was investigated. In addition, the potential source contribution of dust events in different regions was studied using the PSCF. The following conclusions are obtained.

(1) For the spatial distribution of dust events, 334 cities, all except Shenzhen and Dongguan, experienced dust events. Dust-prone areas did not necessarily experience more strong dust events. The dust contribution to PM10 in the cities of Northwest China was more than 10%, and about 5-10% to PM2.5. In addition, 299 cities were seriously polluted by dust events during 2015-2020; 134 cities encountered dust level III and 56 cities encountered dust level IV.

(2) Dust events clearly decreased during 2015-2020, but strong dust events did not, because the strong dust events in mid-eastern Inner Mongolia and North Xinjiang increased gradually during 2015-2018. The trend of the monthly dust frequencies was the same for the different levels, reaching the highest value in May. The starting times of sand storms revealed a bimodal distribution, and the most frequent onset time was 10:00-11:00 BJT, followed by 22:00-23:00 BJT.

(3) The dust potential source contributions of the different cities mainly came from the northwest of each city. Mongolia was the main potential source contribution outside China. In addition, Beijing was obviously affected by dust recirculation.

Studying the temporal and spatial characteristics of dust events is of great significance for improving dust weather forecasts and warnings of strong dust events. However, only ground observation data were used in this study. In the future, data from the ground lidar network will be used to study the vertical distribution and the size and shape characteristics of dust aerosols.
Gliomas: Survival, origin and early detection

INTRODUCTION

I did some of my training with Paul Bucy. He had a special interest in the surgical treatment of glioblastoma. Bucy believed, as did many "cancer surgeons" of his day, that tumors resulted from good cells becoming bad cells that formed a mass of tumor and that cells from the tumor's periphery invaded the surrounding normal tissue. He correctly observed that malignant gliomas usually grew locally and rarely metastasized outside of the central nervous system. If there was any surgically curable "cancer", he believed, it was a glioblastoma; all that was necessary for a cure was an aggressive enough resection with an adequate margin.[1] Of course, his patients died right on schedule, just like anybody else's patients. Bucy believed that this was because we just could not identify the true margin of the neoplasm at surgery and that the resection was rarely sufficiently adequate to provide a cure. The advent of computed tomography (CT scanning), magnetic resonance imaging (MRI) and image-guided neuronavigation would change all of this by potentially allowing us to accurately resect as much of a glioma as we chose to resect. Well, I have spent a career trying to cure gliomas with high technology-based surgery – in particular, imaging-based stereotactically guided volumetric resections – but the long-term survival in the vast majority of these patients is not much better than it was 60 years ago! We just do not hurt these patients as badly as we did 60 years ago. And what do we do? We keep throwing more and more expensive surgical high technology at the problem with marginal improvements in survival, if any. To be sure, there are some gliomas that we can cure with modern surgical techniques, such as pilocytic astrocytomas, the occasional oligodendroglioma, neurocytomas, gangliogliomas, subependymomas and a few xanthoastrocytomas and protoplasmic astrocytomas. But this is not a credit to neurosurgeons and our modern surgical methods. It is a function of the growth pattern of these particular tumors that lend themselves to complete and curative surgical excision. These tumors have a distinct boundary where tumor stops and normal brain begins. All that a surgeon has to do in these cases is identify the plane between tumor and surrounding brain tissue, develop that plane and remove the tumor. Image guidance helps a bit. But I will point out that Donald Matson claimed a 50% surgical cure rate in pilocytic astrocytomas over 50 years ago – without any "high technology". Nonetheless, the "curable" tumors listed above are relatively rare compared to the overwhelmingly more common "fibrillary astrocytomas", oligodendrogliomas and mixed gliomas. How are we doing with these tumors? Not so great! To be sure, there are many reports in the literature which show that patients having "total resection" and adjuvant therapy do better and live longer than those undergoing biopsy and adjuvant therapy. Comparisons to historical controls attempt to demonstrate the benefit of modern surgical techniques over methods used by past generations. An example of just such an exercise is shown in Figure 1.

Figure 1: Post surgical survival following resection in patients with grade IV astrocytomas (glioblastoma) in a recent unpublished series (Kelly 2000) compared to survival curves adapted from earlier studies in the literature (Kelly 1992,[3] Burger 1986,[2] Bucy ...
These life table survival curves compare my own experience with cases compiled in the years 1992 (published)[3] and 2000 (not published) to the 1986 series published by Burger et al,[2] Jelsma and Bucy's series from 1967[4] and Ringertz's experience from 1950.[2] At first glance at Figure 1, it appears that over the years, we have made some progress with better median and 2-year survivals. Regrettably, these experiences are not really comparable for two important reasons. First, survival times are measured from when surgery is performed and histology is available. Modern imaging methods allow patients to be diagnosed much earlier – usually at the onset of the first symptoms – in contrast to patients from the 1950s and 1960s, who could go for weeks or months before a diagnosis was made and surgery performed. Modern series have the benefit of therapy being delivered earlier in the natural history of the disease and a survival starting point that could be weeks or months earlier than in past decades. Secondly, modern patients have had the benefit of more effective radiation therapy with linear accelerators instead of cobalt units and more specific chemotherapy. Indeed, the improved 2-year survival noted in my 1992 and 2000 series more likely represents the efficacy of carboplatin in the 1980s and temozolomide in the 1990s and not necessarily "better surgery".
LOW-GRADE GLIOMAS

In low-grade gliomas, image-guided stereotactic surgical techniques allow us to resect any prospectively designated volume of tissue. Unlike surgery in high-grade gliomas, where we resect a volume of solid tumor tissue and necrosis that has replaced or displaced intact parenchyma and can be resected from just about anywhere in the CNS, resections of low-grade gliomas are restricted by anatomical location. Nonpilocytic astrocytomas, mixed gliomas and most low-grade oligodendrogliomas usually comprise a volume of sick brain tissue infiltrated by isolated tumor cells. Resecting the imaging-defined tumor volume is, in fact, resecting intact and functional, albeit "diseased", brain tissue. Figures 2-4 compare my own experience with stereotactic resection and stereotactic biopsy in low-grade gliomas. This unpublished series suggests that patients undergoing resection clearly survive longer than those who had only biopsies plus whatever radiation or chemotherapy de jour is administered during the remaining course of their life as their tumors progress from low-grade gliomas to the high-grade tumors that eventually kill them. However, I have never submitted this material for publication for the following two reasons.
Surgical selection bias

Experienced surgeons usually know which cases will do well with aggressive surgery and which will not. So, relatively compact tumors – especially those in non-eloquent brain areas – will be selected for resection, and diffuse infiltrating tumors in eloquent brain areas will undergo stereotactic biopsy only. So, studies like this basically compare survival in good surgical candidates to survival in poor surgical candidates rather than comparing the efficacy of aggressive resective surgery to less aggressive surgery. In fact, we may be comparing the natural history of two populations of gliomas, which may have the same histologic cell type but possibly different biologies. In order to prove the benefit of aggressive resection on survival in gliomas (both high and low grade), we would have to select the "good" surgical candidates and prospectively randomize these into "resection" and "biopsy" groups. However, considering that low-grade gliomas are relatively "rare" and their survival relatively long (in comparison to, say, glioblastoma), a study of this nature would take many years to accrue enough cases with sufficient follow-up to justify any conclusions, and would probably require a multicenter effort and a time commitment that would be longer than most academic careers.

Neuropathology

And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!

Many neurosurgeons and neuro-oncologists would like to believe that low-grade gliomas fall into clear-cut histologically homogeneous groups: astrocytomas, oligodendrogliomas and mixed gliomas ("oligodendro-astrocytoma" or "astro-oligodendroglioma"). Most, however, do not. Serial stereotactic biopsy studies frequently show geographic phenotypic heterogeneity in the predominant cell type within individual tumors. In addition, review of surgical specimens by different neuropathologists usually results in different histologic diagnoses on the same surgical specimen! Some focus on the number of cells that stain positive for glial fibrillary acidic protein (GFAP) and ignore other cells; others note the many oligodendroglial cells and feel that the many astrocytes are astrogliotic; and most discount a neuronal component. I have had low-grade glioma surgical specimens reviewed by as many as eight different well-respected neuropathologists and received as many as eight different histologic diagnoses as regards cell type and grade. Diagnostic interobserver variability between neuropathologists in establishing the predominant cell type and grade in gliomas in general, and low-grade gliomas in particular, is a well-recognized problem. This makes a mockery of cell type stratification in low-grade glioma follow-up studies. It is not the intention to impugn neuropathologists here; they are the true scholars and intellectuals in our field. Their reviews of surgical specimens are usually thorough and their conclusions well considered. But why can they not agree? The reason, in the review of a glioma specimen, is that they are all right and, in many cases, they are wrong – like the blind men and the elephant. Many of us are fixated on the Cushing-Bailey concept of glial tumorigenesis: that tumors developed from dedifferentiated mature cell lines. Astrocytomas resulted from the dedifferentiation of mature astrocytes; oligodendrogliomas from oligodendrocytes, etc. At least that was the old way of thinking. Few believe this anymore.
WHERE DO GLIOMAS COME FROM?

A more plausible theory is that all gliomas start out as mixed gliomas. All start out their glioma life containing cells with astrocytic, oligodendroglial and neuronal phenotypes. Over time, the phenotypic clone with the highest mitotic rate becomes the predominant cell type. Gliomas probably evolve from stem cells and the lineage-specific progenitors that form neurons, astrocytes and oligodendrocytes. Of course, there remains the possibility that mature differentiated cells revert to progenitor or stem cell status, from which a glioma evolves. Nonetheless, examination of a young glioma will reveal GFAP-positive cells (astrocytes), synaptophysin-positive cells (primitive neurons), cells that stain with neither GFAP nor synaptophysin and are probably oligodendrocytes, and perhaps even cells that stain with CD133, which is supposed to identify stem cells. Furthermore, microscopic examination of specimens obtained from the periphery of glial tumors at stereotactic serial biopsy procedures shows isolated cells in the extracellular spaces within otherwise normal parenchyma. Time-lapse photography of cell cultures containing tissue from these biopsy specimens demonstrates amoeboid-like cells. They move by pseudopod propulsion – like stem cells, which are also motile and also move by pseudopod propulsion [Figure 5]. In normal organogenesis, cytokines bind to notch receptors that cause the stem cell to become a lineage-specific progenitor, which then progresses to the more specialized cells of the CNS: neurons, oligodendrocytes and astrocytes. In the developing nervous system, neurons need astrocytes, oligodendrocytes and a blood supply from endothelial cells that form capillaries. They signal this need to stem cells by growth factors (cytokines) that bind to the notch receptors of their cellular membrane, which begins the intracellular cascade that transforms stem cells into the specialized cells required. All of this is fine in the developing nervous system, but what happens after the brain has developed? There are stem cells left over; what happens to them? Some retire to stem cell clusters of 50-100 cells that self-renew and die. Some are probably called up for brain repair in the case of injury or to simply maintain failing cells in the mature brain. Specific growth factors may provide a "call to action". Or perhaps these stem cells leave the cluster on their own and wander through the extracellular spaces of the white matter and neuropil. Somewhere, some stop wandering, start reproducing and form neuronal, oligodendroglial and astrocytic progenitors that also reproduce, and this is the genesis of a glioma that contains astrocytic, oligodendroglial and neuronal phenotypes: a "mixed glioma". All cells have a cell cycle. New cells are born (mitosis) and others die (apoptosis). Undoubtedly, many nascent "tumors" reach a "steady state" where the mitotic rate equals the apoptotic rate and the tumor never grows, never becomes symptomatic and simply exists as a heterogeneous collection of cells co-existing with normal cells in the extracellular fluid of the interstitial spaces. However, occasionally, the mitotic rate exceeds the apoptotic rate and the early tumor begins to add cells. As their numbers increase, their metabolic by-products increase the local osmotic gradient of the extracellular fluid. This results in the ingress of fluid from the intravascular space, and if the spatial volume of this process is large enough, an MRI may now detect a small region of T2 prolongation.
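The bookkeeping just described, mitosis versus apoptosis per clone, can be made concrete with a toy calculation. The sketch below is purely illustrative: the per-step rates are invented, and real gliomas are not deterministic exponential growers. The point is only that a small, persistent excess of mitosis over apoptosis in one clone lets it overwhelm the others, which is the dynamic described in the next paragraph.

# Toy birth-death bookkeeping for three phenotypic clones in a young
# "mixed glioma". All rates are invented for illustration only.
rates = {
    "astrocytic":       (0.030, 0.020),  # (mitosis, apoptosis): net growth
    "oligodendroglial": (0.025, 0.024),  # near steady state
    "neuronal":         (0.020, 0.024),  # net decline
}
counts = {clone: 1000.0 for clone in rates}

for step in range(500):                  # arbitrary number of cell cycles
    for clone, (mitosis, apoptosis) in rates.items():
        counts[clone] *= 1.0 + mitosis - apoptosis

total = sum(counts.values())
for clone, n in counts.items():
    print(f"{clone}: {n / total:.1%}")   # the fastest-growing clone dominates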
The mitotic and apoptotic rates differ in the neuronal, astrocytic and oligodendroglial populations of these young gliomas. The cellular clone with the highest mitotic and lowest apoptotic rates will eventually become the dominant cell type. Thus, mixed gliomas containing primitive neurons, astrocytes and oligodendrocytes will eventually become whatever phenotype has the highest mitotic and lowest apoptotic rate, as those phenotypic cells eventually overwhelm and replace the other two cell types in the neoplasm. The vast majority of gliomas may start out as mixed gliomas having neuronal, astrocytic and oligodendroglial components, but with sufficient time, cellular clones with the highest mitotic rate become the dominant phenotypic cell type, from which offspring with even higher mitotic rates evolve. Low-grade mixed gliomas become high-grade gliomas with a predominant cell type that ultimately kills the patient. The transition from low-grade mixed glioma to malignant glioma of a single cell type, e.g. glioblastoma, malignant oligodendroglioma or, very rarely, a malignant neurocytoma, can occur in weeks, months or many years. Also, it is possible that some early gliomas reach a steady state in which a low mitotic rate is matched by a similar apoptotic rate and the lesion never progresses, or possibly even regresses over time.

GLIOMAS ARE MORE COMMON THAN PREVIOUSLY THOUGHT

In the USA, about 19,500 new gliomas are diagnosed each year. Considering that the US population in 2008 was 306 million people, this works out to a glioma incidence rate of about 0.000064, that is, 0.0064%, or 6.4 cases per 100,000 population. There are some data showing that the incidence of gliomas in the US has increased slightly in recent years, probably because we are finding more of them due to increased awareness and the easy availability of diagnostic imaging. However, this is just the tip of the iceberg; the actual number of gliomas in the USA is probably 40-50 times greater than the reported incidence. Over the years, there have been about 16 reported studies focusing on the incidence of CNS disease detected by CT and MRI within the "normal" asymptomatic population.[6] These have reported various incidences of supposed gliomas, ranging from 0 to 6 gliomas per thousand population. A more conservative recent study of 1000 asymptomatic volunteers, conducted by the NIH and reported in the Journal of the American Medical Association (JAMA), found three gliomas in those 1000 individuals.[5] Extrapolating to the entire population of the USA, three in a thousand works out to over 900 thousand Americans harboring asymptomatic gliomas! Only a small number of these will become symptomatic, diagnosed and recorded in any given year. We know that number, which is about 19,500. We do not know how many of these will become symptomatic, diagnosed and treated in a lifetime. Also, there is the possibility that many more people may harbor microscopic gliomas that have not yet sufficiently influenced the parenchymal interstitial microenvironment to be detectable by MRI. Nonetheless, those 900 thousand or so individuals may represent a "population at risk" for developing a symptomatic, and most likely incurable, glioma in their lifetime, even though only a small percentage becomes symptomatic in any given year.

WHY WE CANNOT CURE GLIOMAS

By the time a glioma becomes symptomatic, it is almost always too late in its biological course. Motile isolated tumor/stem cells will have migrated far beyond the imaging-defined tumor mass.
These will ultimately start another tumor nidus in the margin of the resection, at some distance from the margin or, indeed, even in the opposite hemisphere. Also, this is purely a function of time, with or without the benefit of surgery. Radiation therapy and chemotherapy may have some effect on some of these cells, but there will always be individual cells or even a small population of cells that will not be affected by these modalities – just like the normal cells of the brain that, we hope, are not affected by treatment. The real culprits are not necessarily the "cancer" cells. The real culprits are the cells that are most like "normal" cells: the stem cells. By the time a glioma is diagnosed – by the time it becomes symptomatic and an imaging study is performed – the vast majority are incurable. Some neurosurgeons might recall a case or two which presented with an MRI showing a significant glioblastoma but also happened to have had an earlier MRI done for some other reason, a headache or minor trauma, etc., with the earlier MRI being perfectly normal. I have had a few cases like this also. I submit that in these cases the transition from low-grade mixed glioma to, say, malignant astrocytoma occurred much more rapidly than most, perhaps over a few weeks or months. I believe that such cases are relatively rare. In fact, I have seen many more cases where physicians have watched a T2 abnormality on MRI getting larger and larger over several years but feel that they must wait for symptoms before recommending surgery. This reminds me of my friend Thor Sundt's joke about the man jumping off the top of the Empire State Building, passing someone on the 42nd floor who calls out: "How are you doing?" The jumper calls back: "I'm doing fine so far!"

SCREENING FOR GLIOMAS

Many studies have shown that current therapies (surgery, radiation, chemotherapy, etc.) extend survival in gliomas beyond the natural history of the disease. However, in the vast majority of cases, gliomas are ultimately incurable. Most low-grade gliomas kill patients by becoming high-grade gliomas. Even low-grade gliomas are incurable, because by the time they are diagnosed, the disease process has extended far beyond the limits of surgical resectability. Also, no treatment that I know of will prevent a low-grade glioma from ultimately becoming a high-grade glioma – except, perhaps, surgical total excision of a small low-grade lesion. In my opinion, gliomas are incurable because we are finding them far too late in their clinical course. It is like finding breast cancer after it has spread to the regional nodes, lung, liver, skeletal system or brain; prostate cancer after it has spread to the pelvis and spine; colon cancer after it has metastasized to the liver; skin melanoma after it has spread to lymph nodes and beyond; etc. However, there are screening programs for the early detection of all of these "cancers": self-examination and mammography for breast cancer, PSA blood tests for prostate cancer, colonoscopy for colon cancer, dermatological examinations for skin cancer, etc. Why not screen for brain tumors? Unlike these other "cancers", brain tumors grow by local invasion. Brain tumors rarely metastasize outside of the central nervous system.
If the concept of early detection has any merit at all, it should be in the early detection of gliomas: find them when they are small, find them before they turn malignant and find them when they may still be curable by some minimally invasive surgical method, or even by stereotactic radiation methods such as brachytherapy or radiosurgery. In addition, it is much easier and safer to operate on a small lesion, be it a glioma, meningioma, acoustic neurinoma or whatever, than a big one! In the 1950s, clinics and mobile X-ray units offered free or low-cost screening chest X-rays for the early detection of tuberculosis. This was probably one of the most effective public screening programs ever. Early detection of pre-clinical disease and isoniazid wiped out tuberculosis in the USA in a few years. What screening tools are available for gliomas? Perhaps a blood test and genetic screening for brain tumors may be possible some day. However, since 1973, we have had an excellent tool for brain tumor screening, i.e. MRI. And what do we use it for? We use it to make a diagnosis in symptomatic patients who, by the time they are diagnosed, have an essentially incurable disease! Why not use MRI to screen for brain tumors in an early detection program? We screen for other tumors; why not brain tumors? Radiologists who are used to mammograms and chest X-rays usually raise the issue of "false positives". But unlike X-ray based procedures, MRI provides various imaging sequences to non-invasively investigate abnormalities. In particular, MR spectroscopy (MRS) is very useful in determining whether an unidentified bright object (UBO) is a glioma rather than a demyelinating plaque, microinfarction or some other non-neoplastic process. However, what do we do when an abnormality suspicious for a glioma is found? As we have seen above, the clinical incidence of gliomas is orders of magnitude lower than the assumed prevalence in an asymptomatic population. It is possible that many incidentally found gliomas will never grow. Indeed, some smaller lesions may even regress. Most may never require treatment in the near future. Some may never need treatment. Those with an MRI-defined abnormality in whom MRS suggested glioma would represent a "population at risk". These would require follow-up imaging. Treatment would be recommended for those having lesions larger than, say, 2 cm in diameter, or those in whom documented growth or change in a small lesion is noted on follow-up imaging, or in whom the lesion becomes symptomatic. Many point out that screening is not "cost effective". I agree. It is certainly less expensive to treat a small number of afflicted people with ineffective and expensive therapies than it is to screen a large healthy population. But this argument could be made about screening in general. Nonetheless, a screening MRI for the detection of early gliomas only requires two or three imaging sequences (T1, T2 and FLAIR). Contrast enhancement would not be necessary: it would be a very rare tumor that would exhibit contrast enhancement and not show an abnormality on T2 or FLAIR images. The cost of a diagnostic MRI with multiple sequences and contrast enhancement in New York City, at least, is about $1000, and the entire examination takes about 45-60 minutes.
A screening MRI would require only about 3-5 minutes of scanning time and, as a proportion of the cost of a diagnostic examination, should cost only $60-80, which would compare favorably with the cost of a colonoscopy (average: $2000-3734), a total PSA blood test (between $70 and $400), mammography ($140-320), skin screening (about $150 and up), etc. Of course, someone would have to read the MRIs, but computer-assisted diagnosis (CAD) systems should reduce the tedium and costs. Over the past 30 years, we have seen real progress in the development of sophisticated surgical technology. Computer-based medical imaging combined with stereotactic navigation techniques for minimally invasive surgical or non-invasive radiosurgical methods, intraoperative imaging, mapping procedures, etc. – all of these combine to make tumor neurosurgery less invasive, more effective and safer. However, in the resection of gliomas, we are fighting a war that would be easier, and more likely to be won, if we began before the enemy becomes extensive and well entrenched in the "civilian population". We need a screening program for the early detection of gliomas.

AN EARLY-DETECTION PILOT PROJECT

About two years ago, the Manhattan-based Brain Tumor Foundation began such a program in New York City (http://www.roadtoearlydetection.org). A General Electric 1.5 Tesla MRI unit, housed in a truck, makes the rounds of the five boroughs of New York City, offering free screening head MRI scans to anybody who wants one [see Figure 6]. The response from the general public has been overwhelmingly positive. (The response from the local medical profession has been, predictably, lukewarm to downright hostile.) This project is supported by public funds and private donations. Data, collected prospectively, will be analyzed in collaboration with the Department of Epidemiology at Columbia University.
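As a quick check of the cost scaling argued above, the arithmetic below simply pro-rates the quoted diagnostic price by scan time; the midpoint figures are assumptions taken from the ranges in the text.

# Pro-rating the quoted NYC diagnostic MRI price by scanning time.
diagnostic_cost_usd = 1000.0   # multi-sequence diagnostic MRI (figure quoted above)
diagnostic_minutes = 52.5      # midpoint of the quoted 45-60 min examination
screening_minutes = 4.0        # midpoint of the quoted 3-5 min screening scan

screening_cost = diagnostic_cost_usd * screening_minutes / diagnostic_minutes
print(round(screening_cost))   # ~76 USD, consistent with the $60-80 estimate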
Tools for Studying Low-Mass Dark Matter at Neutrino Detectors

We determine the neutrino spectra arising from low-mass (4-10 GeV) dark matter annihilating in the sun. We also determine the low-mass dark matter capture rates (element by element in the sun), assuming dark matter interacts either through elastic contact interactions, elastic long-range interactions, or inelastic contact interactions. These are the non-detector-specific data needed for determining the sensitivity of a neutrino detector to dark matter annihilating in the sun. As an application, we estimate the sensitivity of a one kiloton liquid scintillation neutrino detector (such as KamLAND) and LBNE (LAr-based) to low-mass dark matter with long-range interactions and compare this to the expected CDMS sensitivity. It is found that KamLAND's sensitivity can exceed that obtainable from the current CDMS data set by up to two orders of magnitude.

I. INTRODUCTION

A promising avenue for cross-checking the low-mass direct detection data is with neutrino detectors [28][29][30][31][32][33], which search for the flux of neutrinos arising from dark matter annihilation in the core of the sun. Dark matter is captured by the sun via scattering off solar nuclei. If the sun is in equilibrium, then the dark matter capture rate determines the dark matter annihilation rate. The neutrino flux thus constrains the dark matter-nucleus scattering cross-section, and allows neutrino detectors to cross-check direct detection experiments without some of the particle physics and astrophysics uncertainties that plague other types of indirect detection searches. Moreover, the O(GeV) neutrinos produced from the annihilation of low-mass dark matter are easily distinguishable at water Cherenkov, liquid argon or liquid scintillator-based neutrino detectors.

In order to determine the event rate expected at a neutrino detector, one must calculate the rate at which dark matter is captured by the sun due to scattering, and the neutrino spectrum arising from the decay of the dark matter annihilation products. Numerical packages such as DarkSUSY [35] are commonly used for obtaining these rates, which are determined from numerical simulations. However, the required simulations have not been run for masses less than 10 GeV. Moreover, the capture rate has only been calculated assuming that dark matter scatters elastically via a contact interaction. Recent models for reconciling the low-mass direct detection data have included the possibility of dark matter scattering via long-range forces [36,61,62], inelastic scattering [39][40][41][42][43][44], and dark matter interactions which are isospin-violating [30,39,[45][46][47]. To determine the sensitivity of neutrino detectors to these models, new capture rate calculations must be performed.

In this work, we calculate the required neutrino spectra and capture rates for low-mass dark matter in the sun. We not only consider the capture rate for elastic contact scattering, but also for inelastic dark matter and for models in which the dark matter-nucleon interaction is mediated by a low-mass particle. We also determine the regions of parameter space of these models for which dark matter in the sun is in equilibrium. As an application of these techniques, we consider the sensitivity of a 1 kT liquid scintillation detector with 2135 live days of data (roughly the same exposure as KamLAND) to dark matter with long-range interactions.
We find that the sensitivity of neutrino detectors to dark matter with long-range interactions is enhanced because typical scatters off low-mass targets (such as the hydrogen and helium in the sun) involve small momentum transfers, yielding enhanced scattering cross-sections in models where the mediating particles are light. This implies the existence of an entire class of dark matter models for which current neutrino experiments can provide the leading sensitivity.

In section II, we review the general formalism for dark matter searches using neutrino detectors. In section III, we describe the details of the computation of the neutrino spectrum arising from the annihilation of low-mass dark matter. In section IV, we describe the calculation of the dark matter capture rate in the sun, assuming dark matter with either elastic contact interactions, elastic long-range interactions, or inelastic contact interactions. In section V, we describe the range of circumstances under which low-mass dark matter is in equilibrium in the sun. In section VI, as an example, we apply these techniques to determine the sensitivity of a 1 kT LS neutrino detector to low-mass dark matter with long-range interactions. We conclude in section VII.

II. OVERVIEW OF DARK MATTER DETECTION VIA NEUTRINOS

The rate of charged lepton events at a neutrino detector can be written as

N = (Γ_A / 4πd²) Σ_f B_f Σ_i ∫ dz (dN_{f,ν_i}/dz) σ_{ν_i−N}(z m_X) η ε V,

where d ∼ 1 AU is the earth-sun distance, z = E_ν/m_X, η is the nucleon number density of the detector (including the earth around the detector, in the case of a search for through-going muons), V is the relevant detector volume, and ε is the efficiency for a neutrino charged-current interaction to produce a charged lepton which will pass the detector analysis cuts. B_f is the branching fraction to each dark matter annihilation product f, and dN_{f,ν_i}/dz is the differential neutrino spectrum per annihilation to each final state, for each (anti-)neutrino flavor. Γ_A is the total dark matter annihilation rate, and σ_{ν_i−N} is the (anti-)neutrino-nucleon scattering cross-section.

For dark matter in the 4−10 GeV range, most of the charged leptons produced in a reasonably-sized detector will be fully contained, with the vertex where the lepton is produced and the end of the track both within the fiducial volume of the detector. As a result, we will focus on fully-contained lepton events. The quantities η and ε are specific to the geometry and construction of the detector. For 1 GeV < E_ν < 1 TeV, σ_{ν_i−N} can be approximated as [48]

σ_{ν−N} ≈ 0.67 × 10⁻³⁸ cm² (E_ν/GeV), σ_{ν̄−N} ≈ 0.34 × 10⁻³⁸ cm² (E_ν/GeV).

The two remaining quantities we will need to compute are Γ_A and dN_f/dz. For a search for fully-contained charged leptons, we can then write

N = (Γ_A / 4πd²) Σ_f B_f ∫ dz (dN_{f,ν}/dz) A_eff(z),

where all detector-specific information is encoded in the effective area, A_eff(z).

If m_X ≲ 4 GeV, the effect of dark matter evaporation can be important [28,34]. In that case, dark matter in the sun's core is significantly depleted by evaporation, and the total annihilation rate is relatively small, implying that constraints on dark matter from the neutrino flux will be weak. We focus on the regime m_X ≥ 4 GeV, for which dark matter in the sun can only be depleted through annihilation. If the sun is in equilibrium, we then find that Γ_A is related to the dark matter capture rate, Γ_C, by the relation Γ_C = 2Γ_A. Since Γ_C is determined by the dark matter-nucleus scattering cross-section, the above relation allows one to translate neutrino flux bounds into bounds on the dark matter-nucleon scattering cross-section.
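A minimal numerical sketch of the fully-contained event-rate formula above, assuming the per-channel spectra dN/dz and the effective area A_eff(z) are already tabulated on a common grid in z; all array names and inputs here are hypothetical.

import numpy as np

D_SUN_CM = 1.496e13   # earth-sun distance, ~1 AU in cm

def fully_contained_rate(gamma_ann, branchings, spectra, a_eff, z):
    """Charged leptons per second from solar dark matter annihilation.
    gamma_ann:  total annihilation rate Gamma_A (1/s)
    branchings: {channel: B_f}
    spectra:    {channel: dN/dz array on the grid z (per annihilation)}
    a_eff:      effective area A_eff(z) in cm^2, on the same grid
    """
    flux_norm = gamma_ann / (4.0 * np.pi * D_SUN_CM**2)
    rate = 0.0
    for channel, b_f in branchings.items():
        rate += b_f * np.trapz(spectra[channel] * a_eff, z)
    return flux_norm * rate

With Γ_C = 2Γ_A in equilibrium, the same routine converts a capture-rate calculation directly into an expected event count.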
III. NEUTRINO SPECTRA

For low-mass dark matter (m_X ≤ 20 GeV), the only dark matter annihilation products relevant for neutrino searches are heavy quarks (b, c), τ, ν_{e,µ,τ} and the gluon (g). Muons and light quarks (u, d and s) will tend to stop within the sun before they decay [49]. The resulting neutrinos are thus very soft (though they can potentially also be used for dark matter searches [50]). We have calculated the differential neutrino spectrum per annihilation for the relevant channels using the DarkSUSY/WimpSim/NuSigma/Pythia [35,51,52] package. Numerical simulations were run on the Hawaii Open Supercomputing Center (HOSC) computing cluster. 10⁷ annihilations were simulated for each annihilation channel and dark matter mass in the range 4−10 GeV (in increments of 2 GeV). Representative spectra are plotted in Appendix A, and all of the original data files are available at http://www.phys.hawaii.edu/~superk/post/spectrum.

We present the neutrino spectrum at a distance of 1 AU from the sun, including the effects of hadronization and decay of the annihilation products at injection, matter effects as the neutrinos propagate through the sun (including tau-regeneration) and vacuum oscillations. The neutrino oscillation parameters were chosen to be consistent with current global fits (including θ_13 ∼ 9°), assuming a normal hierarchy. This choice is consistent with recent exciting data from the Daya Bay experiment [53] indicating θ_13 ∼ 9°. Data files for the choice θ_13 = 0° are also available online. For this choice, the change in the neutrino spectrum is relatively small unless dark matter annihilates directly to neutrinos in a flavor-dependent way.

In the case where dark matter annihilates to b-quarks, annihilation can only proceed if the dark matter mass is larger than the mass of the b-hadron which is produced. Moreover, a b-quark will lose ∼ 27% of its energy during hadronization [49,54]. Thus, the neutrino spectrum arising from annihilation to b-quarks is simulated only for m_X ≥ 6 GeV. In the case where dark matter annihilates to τ⁺τ⁻, the neutrino spectrum has been computed by averaging over helicities. Some dark matter candidates will preferentially annihilate to certain helicities, which can have a significant effect on the injected neutrino spectrum [55].

The neutrino spectrum at 1 AU is equivalent to the spectrum of downward-going neutrinos at the detector, averaged over the year. This spectrum determines the rate of downward-going fully-contained charged leptons. In the case where the charged lepton is upward-going through the earth, one should also include oscillation and matter effects as the neutrinos pass through the earth. These effects depend on the location of the detector and can be determined by inputting the neutrino spectrum at 1 AU into the "WimpEvent" program, with the location of the detector specified.

IV. CAPTURE RATE

The capture rate can be computed following the analysis of [56], and we follow that notation. A dark matter particle in the halo has velocity u, given by a distribution f(u) obeying ∫ du f(u) = η_X. Here, η_X is the dark matter number density in the halo. When a dark matter particle is at distance r from the core of the sun, it will have velocity w = √(u² + v(r)²), where v(r) is the escape velocity from the sun at radius r. Thus, a dark matter-nucleus scatter will result in dark matter capture if the dark matter scatters from velocity w to a velocity ≤ v. More generally, however, 3-body interactions can drastically affect the capture rate.
As a conservative estimate, one can choose to count as "captured" only dark matter particles which are kinematically constrained to orbits with maximum radius r₀, often taken to be the radius of Jupiter's orbit. The velocity needed to escape from position r < r₀ within the sun to radius r₀ is denoted by v_e(r), and is given by the relation v_e(r)² = v(r)² − v(r₀)². The rate for dark matter to be captured in any differential solar volume by scattering off an element with atomic number Z can then be written as

dC/dV = ∫_{u_min}^{u_max} du (f(u)/u) w Ω⁻_{v_e}(w),

where u_{min,max} are the minimum/maximum dark matter velocities in the halo such that a nuclear scatter at position r resulting in dark matter capture is kinematically possible. Ω⁻_{v_e}(w) is the rate per unit time at which a dark matter particle with velocity w will scatter to velocity < v_e(r), and is given by the expression

Ω⁻_{v_e}(w) = η_A w ∫_{E_min}^{E_max} dE_R (dσ_{Z,A}/dE_R),

where η_A is the number density of the target nuclei in the sun, E_min is the minimum recoil energy needed for capture, and E_max is the maximum recoil energy that is kinematically allowed. dσ_{Z,A}/dE_R is the differential cross-section for dark matter to scatter off a nucleus with Z protons and A nucleons. For any dark matter model, the quantities that must be known to compute the capture rate are dσ_{Z,A}/dE_R, E_{min,max}, and u_{min,max}. Given these quantities, we can compute the capture rate numerically using the DarkSUSY code (appropriately modified), with its standard assumptions about solar composition.

A. Elastic Contact Interactions

The most commonly used assumption is that dark matter interacts with nuclei via elastic, isospin-invariant, contact interactions. In this case, we have

dσ_{Z,A}/dE_R = (m_A σ_p / 2µ_p² w²) [Z + (f_n/f_p)(A − Z)]² F_A²(E_R),

with E_min = (1/2) m_X (w² − v_e(r)²), E_max = (µ/µ_+²)(1/2) m_X w², u_min = 0, and u_max fixed by the condition E_max ≥ E_min, where µ ≡ m_X/m_A, µ_± ≡ (µ ± 1)/2 and µ_p is the dark matter-proton reduced mass. f_{p,n} are the relative strengths of dark matter coupling to protons and neutrons, respectively. σ_p is the dark matter-proton scattering cross-section, and F_A(E_R) is the nuclear form factor. To match the assumptions used in DarkSUSY, we will assume a Gaussian form factor,

F_A²(E_R) = exp(−E_R/E_A), E_A = 3/(2 m_A R_A²),

where R_A = [0.91 (m_A/GeV)^{1/3} + 0.3] fm is taken as the nuclear radius. Following default assumptions in DarkSUSY [35,56], we have assumed a Maxwell-Boltzmann velocity distribution for dark matter in the Galactic halo of the form

f(u) du = η_X (3/2πv̄²)^{3/2} 4π u² exp(−3u²/2v̄²) du,

where v̄ is the three-dimensional velocity dispersion, which we have set to v̄ = 270 km/s. This velocity distribution is truncated at the galactic escape velocity, v_esc. The velocity distribution seen by an observer moving through the halo with velocity v_* is then

f_*(u) = η_X √(3/2π) (u/v̄v_*) Θ(v_* + v_esc − u) {exp[−3(u − v_*)²/2v̄²] − exp[−3(u² + v_*² + 2uv_* cos θ_max)/2v̄²]},

where cos θ_max = min[1, (v_esc² − u² − v_*²)/(2uv_*)]. The step function imposes the condition that f(u) = 0 for u > v_* + v_esc. If we ignore the truncation at the galactic escape velocity, this reduces to the expression

f_*(u) = η_X √(3/2π) (u/v̄v_*) {exp[−3(u − v_*)²/2v̄²] − exp[−3(u + v_*)²/2v̄²]}.

We take v_* = 220 km/s to be the velocity of the sun through the halo.

For isospin-invariant interactions, one assumes f_n/f_p = 1. For isospin-violating dark matter, the capture rate from scattering off each element is scaled by a factor [Z + (f_n/f_p)(A − Z)]²/A². To facilitate this rescaling in the case of generic isospin-violating interactions, we plot the capture rate for each of the main elements in the sun separately. We have plotted in fig. 1 the capture rates for elastic contact interactions if one requires dark matter to be captured to within the radius of Jupiter's orbit. If the presence of Jupiter is neglected, capture rates change by less than 1% in the case of elastic contact interactions.
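A compressed numerical sketch of the capture integral above, for a single element at a single solar shell, under the Maxwell-Boltzmann halo of the text (truncation ignored) and with the form-factor suppression dropped for brevity. Treat it as an illustration of the integrand's structure rather than a replacement for the modified DarkSUSY computation; all inputs are hypothetical.

import numpy as np

VBAR = 270e5    # halo velocity dispersion, cm/s
VSTAR = 220e5   # solar velocity through the halo, cm/s

def f_halo(u, eta_x=1.0):
    """Boosted Maxwell-Boltzmann speed distribution (truncation ignored)."""
    norm = eta_x * np.sqrt(3.0 / (2.0 * np.pi)) * u / (VBAR * VSTAR)
    return norm * (np.exp(-1.5 * ((u - VSTAR) / VBAR) ** 2)
                   - np.exp(-1.5 * ((u + VSTAR) / VBAR) ** 2))

def capture_per_volume(v_esc, v_e, mu, sigma_a, n_a):
    """dC/dV for one element: integrate (f(u)/u) * w * Omega_minus(w).
    mu = m_X/m_A; sigma_a: total DM-nucleus cross-section (F_A = 1);
    n_a: local number density of the element; all speeds in cm/s."""
    mup2 = ((mu + 1.0) / 2.0) ** 2
    u = np.linspace(1.0e4, 6.0e7, 5000)          # halo speeds
    w2 = u ** 2 + v_esc ** 2                     # w^2 = u^2 + v(r)^2
    # Captured fraction of a flat recoil spectrum: (E_max - E_min)/E_max,
    # with E_max = (mu/mup^2) m_X w^2/2 and E_min = m_X (w^2 - v_e^2)/2.
    frac = 1.0 - mup2 * (w2 - v_e ** 2) / (mu * w2)
    frac = np.clip(frac, 0.0, 1.0)               # zero where capture impossible
    omega = n_a * sigma_a * np.sqrt(w2) * frac   # scatter-to-capture rate
    return np.trapz(f_halo(u) / u * np.sqrt(w2) * omega, u)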
B. Elastic Long-Range Interactions

If dark matter interacts with nuclei via long-range interactions (or, equivalently, via a t-channel interaction with a mediator with a mass much smaller than the momentum transfer), then the differential scattering cross-section will have a different form. The quantum matrix element scales as M ∝ 1/(q² − M_*²), where M_* is the mass of the mediating particle and q is the momentum transfer; for M_*² ≪ q², the differential cross-section scales as q⁻⁴. For such a model, it would not make sense to parameterize the differential scattering cross-section in terms of σ_p since, as with Rutherford scattering, the total cross-section is infinite. We may instead write

dσ_{Z,A}/dE_R = (2π α² Z² C / m_A w² E_R²) F_A²(E_R),

where µ_p = m_X m_p/(m_X + m_p) is the dark matter-proton reduced mass. C is a constant which defines the size of the differential scattering cross-section in terms of the proton charge; if g_{X,p} are the strengths with which the mediator couples to the dark matter and a proton, respectively, then C = g_X² g_p²/e⁴. Using the Gaussian form factor defined above, the differential cross-section can be integrated between E_min and E_max; neglecting the mild form-factor suppression (F_A ≈ 1 for the light elements that dominate capture), the integrated cross-section takes the simple form

σ_{Z,A} = (2π α² Z² C / m_A w²)(1/E_min − 1/E_max).

The kinematics of the scattering process are the same as in the case of an elastic contact interaction, and thus E_{min,max} and u_{min,max} are the same as in the elastic contact case above. The capture rates for dark matter with long-range interactions are plotted in fig. 2 (for m_X = 4−10 GeV) and fig. 3 (for m_X = 10−1000 GeV), again assuming that captured dark matter must be confined to an orbit inside Jupiter's.

One should note that, in the case of long-range interactions, it is necessary to assume that captured dark matter is confined to an orbit within some finite radius r₀; without this assumption, the capture rate would be infinite. The origin of this divergence is easily understood to arise from the low-velocity tail of the Maxwell-Boltzmann velocity distribution. Near u = 0, dark matter far from the sun has a very small kinetic energy. As a result, even scattering interactions yielding very small recoil energies can result in a dark matter particle being captured (i.e., having negative total energy). Since the differential scattering cross-section diverges at small recoil energy, the total capture rate diverges. This simply reflects the fact that it is not physically sensible to think of dark matter as captured if it is confined to an orbit of very large radius. It is most sensible to count as captured only dark matter confined to orbits that lie within Jupiter's orbit.

Note that we are not including the possibility of capture due to multiple scattering. For many models, these effects can significantly enhance the dark matter capture rate, especially in the case of long-range interactions. The dark matter capture rate may also be much larger if dark matter scattering exhibits Sommerfeld enhancement. But this depends on the details of the model, including the nature of dark matter interactions with electrons and possible 3-body effects. These issues may be relevant for specific models but are beyond the scope of this work.

C. Inelastic Contact Interactions

One may also consider the case where dark matter scatters inelastically off nuclei, via the process XA → X′A. We will consider the case with δm_X = m_{X′} − m_X ≥ 0. In this case, the scattering matrix element will only change by subleading O(δm_X/m_X) terms, but the kinematics of the scattering process can change dramatically. It is easiest to consider this process in the center-of-mass frame.
We then find $p_i^2 - p_f^2 \approx 2 m_r\,\delta m_X$, where $m_r = m_X m_A/(m_X + m_A)$ is the reduced mass, and $p_i = m_r w$ and $p_f$ are the spatial momenta of the incoming $X$ and outgoing $X'$, respectively, in the center-of-mass frame. The phase space factor of the differential scattering cross-section is directly proportional to the outgoing momenta. Again, it is not appropriate to express the dark matter-nucleus inelastic scattering cross-section in terms of the dark matter-proton scattering cross-section, since there exist kinematic regions where dark matter-proton inelastic scattering is impossible, though dark matter can scatter off other nuclei. Using the fact that the recoil energy can be written as

$$E_R = \frac{p_i^2 + p_f^2 - 2\,p_i\,p_f\cos\theta_{cm}}{2\,m_A},$$

the differential scattering cross-section takes the form

$$\frac{d\sigma_{Z,A}}{dE_R} = \frac{I}{32\pi\, m_A\, m_X^2\, w^2},$$

where $I$ is the squared dark matter-nucleus matrix element (summed over final spins and averaged over initial spins). $I$ is roughly constant for different elements (up to $\mathcal{O}(\delta m_X/m_X)$ corrections), so it makes sense to present bounds on inelastic dark matter in terms of this quantity. We also find

$$E_{max} = \frac{(p_i + p_f)^2}{2\, m_A}.$$

Similarly, we find

$$E_{min} = \max\left[\frac{(p_i - p_f)^2}{2\, m_A},\; \frac{1}{2} m_X u^2 - \delta m_X\right],$$

where the first term is the minimum recoil energy kinematically possible in two-body inelastic scattering, and the second term is the minimum recoil energy in a process where the outgoing dark matter particle is slower than $v_e$, the velocity to escape to radius $r_0$. $u_{max}$ is determined by the constraint $E_{max} \geq E_{min}$, yielding the largest $u$ for which

$$\frac{(p_i + p_f)^2}{2\, m_A} \geq \frac{1}{2} m_X u^2 - \delta m_X.$$

Finally, we have

$$u_{min} = \left[\max\left(0,\; \frac{2\,\delta m_X}{m_r} - v_e(r)^2\right)\right]^{1/2},$$

because inelastic scattering is only kinematically possible for $\delta m_X \leq \frac{1}{2} m_r w^2$. This implies that, for $m_X \lesssim 10$ GeV one need only consider models with $\delta m_X \lesssim \mathcal{O}(10-100)$ keV. The capture rates for inelastic dark matter with contact interactions and $\delta m_X = 10, 30, 50$ keV are plotted in fig. 4. It is interesting to note that, as $\delta m_X$ increases, the rate of capture arising from scattering off light elements vanishes. This is because $m_r$ is smallest for light elements, implying that they have the smallest maximum value of $\delta m_X$ such that inelastic scattering is kinematically allowed. It is also worth noting that inelastic scattering for low-mass dark matter is kinematically allowed in the sun for larger $\delta m_X$ than in the Earth, because dark matter within the sun has gained kinetic energy from gravitational infall. As a result, even low-mass inelastic dark matter with $\delta m_X \sim 50$ keV can be potentially probed by neutrino detectors.

V. EQUILIBRIUM

If the effects of WIMP evaporation are negligible, the equilibration time $\tau$ for dark matter in the sun can be written as [54,57]

$$\frac{t}{\tau} = 1.9 \times 10^{-11} \left(\frac{\Gamma_C}{\text{s}^{-1}}\right)^{1/2} \left(\frac{\langle\sigma v\rangle}{\text{pb}}\right)^{1/2} \left(\frac{m_X}{10\ \text{GeV}}\right)^{3/4},$$

where $t \sim 4.5 \times 10^9$ yr is the age of the solar system. For the case of elastic contact interactions (assumed to be isospin-invariant, and either spin-independent or spin-dependent), we plot in the top panel of fig. 5 the minimum $\sigma_p$ required for the sun to currently be in equilibrium, assuming that the total dark matter annihilation cross-section is given by $\langle\sigma v\rangle = 1$ pb. Note that, for the case of IVDM with spin-independent interactions, the $\sigma_p$ required for the sun to be in equilibrium would lie between that required for spin-dependent scattering and that required for isospin-invariant spin-independent scattering. Note that the equilibration time scales as $(\sigma_p \langle\sigma v\rangle)^{-1/2}$. IVDM with spin-independent interactions and $f_n/f_p \sim -0.7$ could be consistent with the data of DAMA, CoGeNT and XENON10/100 if $\sigma_p^{SI} \sim 10^{-2}$ pb [45,46]. Such dark matter can be in equilibrium in the sun even if $\langle\sigma v\rangle \sim 10^{-5}$ pb.
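A minimal numerical sketch of these kinematic constraints makes it easy to see which solar elements remain open as $\delta m_X$ grows. The escape velocity and element masses below are representative values chosen for illustration, not the detailed solar model used in the actual capture calculation.

```python
import numpy as np

KEV = 1e-6  # GeV

def inelastic_kinematics(m_X, m_A, dm, u, v_e):
    """Capture kinematics for X A -> X' A with mass splitting dm = m_X' - m_X.
    Masses in GeV, velocities in units of c. Returns (E_min, E_max) in GeV,
    or None if the scatter is kinematically forbidden."""
    m_r = m_X * m_A / (m_X + m_A)           # reduced mass
    w2 = u**2 + v_e**2                      # w^2 = u^2 + v_e(r)^2
    if 0.5 * m_r * w2 < dm:                 # need delta m_X <= (1/2) m_r w^2
        return None
    p_i = m_r * np.sqrt(w2)
    p_f = np.sqrt(p_i**2 - 2.0 * m_r * dm)  # p_i^2 - p_f^2 ~ 2 m_r dm
    E_max = (p_i + p_f)**2 / (2.0 * m_A)
    E_min = max((p_i - p_f)**2 / (2.0 * m_A), 0.5 * m_X * u**2 - dm)
    return (E_min, E_max) if E_max >= E_min else None

# For dm = 50 keV and m_X = 6 GeV near the solar core, scattering off H
# (and, at this velocity, even O) is closed, while Fe remains open --
# the reduced-mass effect described in the text.
u, v_e = 1e-3, 4.6e-3                       # ~300 and ~1380 km/s in units of c
for name, m_A in (("H", 0.938), ("O", 14.9), ("Fe", 52.0)):
    print(name, inelastic_kinematics(m_X=6.0, m_A=m_A, dm=50 * KEV, u=u, v_e=v_e))
```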
This implies that, if the IVDM candidate is a thermal relic, then it can currently be in equilibrium in the sun even if almost all of the annihilation cross-section at freeze-out was due to p-wave interactions (suppressed at current times), with only a negligible amount due to s-wave interactions. Similarly, for the case of long-range interactions, we plot in the bottom panel of fig. 5 the minimum $C$ required for the sun to be in equilibrium (assuming $\langle\sigma v\rangle = 1$ pb).

VI. AN APPLICATION TO NEUTRINO DETECTORS

We now consider an application of these tools to a specific detector. We will focus on the case of long-range interactions, because neutrino detectors are expected to have a major advantage over direct detection experiments in this instance. For both direct detection experiments and neutrino searches, the measured event rate will be proportional to the dark matter-nucleus scattering cross-section. For the case of long-range interactions, the integrated dark matter-nucleus scattering cross-section is roughly proportional to

$$\sigma_{Z,A} \propto \frac{1}{m_A\, E_{min}},$$

where $m_A$ is the mass of the nucleus and $E_{min}$ is the minimum nuclear recoil energy which one can measure. For a direct detection experiment, $E_{min}$ is the recoil energy threshold of the experiment, and is typically of order $2-10$ keV. For germanium-based experiments (such as CDMS and CoGeNT), $m_A \sim 72\, m_p$, while for xenon-based experiments $m_A \sim 130\, m_p$. For neutrino searches, $E_{min}$ is the minimum recoil energy such that dark matter is captured and can annihilate in the core of the sun. We thus find $E_{min} \approx \frac{1}{2} m_X u^2 \sim 2-5$ keV for dark matter in the mass range considered here. However, $m_A$ is the mass of the target nucleus in the sun and is very small for some elements that contribute significantly to capture in the sun. For example, hydrogen contributes $\sim 3$% of the dark matter capture rate for low-mass dark matter, and $m_H = m_p$. So one can expect the sensitivity of a neutrino search for low-mass dark matter with long-range interactions to be significantly enhanced ($\sim 10^2 - 10^3$) compared to direct detection experiments.

We compare the sensitivity of liquid scintillation (LS) neutrino detectors to that of CDMS. It was shown in [58] that liquid scintillation neutrino detectors can determine the flavor and direction of leptons produced by a charged-current interaction using the timing of the first photons which reach the photomultiplier tubes. We will focus on a search for electron neutrinos producing fully-contained electron/positron events. An advantage of this strategy is that the atmospheric electron neutrino background is significantly smaller than that of mu neutrinos. It was estimated that liquid scintillation neutrino detectors can provide almost absolute lepton flavor discrimination, and electrons of the energy range we consider can be measured with an angular resolution of about 1°. It was also estimated that the neutrino energy could be determined (from the energy and direction of the produced charged lepton, as well as total energy deposition) with a resolution $\sim 1-3$%. We will consider the sensitivity of a LS neutrino detector with a spherical fiducial volume $V_0 \sim 1000\ \text{m}^3$ and 2135 live-days of data (these are roughly the specifications of KamLAND). We estimate the neutrino detector's sensitivity utilizing the procedure outlined in section II. The density of the liquid scintillator is taken to be 80% that of water.
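The promised enhancement follows from simple arithmetic with the scaling above. The snippet below compares the per-nucleus $1/(m_A E_{min})$ factor for hydrogen in the sun against germanium in a direct detection experiment; this is a rough per-target comparison only, and the full $10^2-10^3$ figure quoted in the text folds in the complete capture calculation.

```python
# Per-nucleus sensitivity scaling sigma ~ 1/(m_A * E_min) for long-range
# interactions (masses in units of the proton mass, thresholds in keV).
m_Ge, m_H = 72.0, 1.0
E_sun = 2.0                        # E_min for solar capture, ~(1/2) m_X u^2
for e_thr in (2.0, 10.0):          # typical direct-detection thresholds
    enhancement = (m_Ge * e_thr) / (m_H * E_sun)
    print(e_thr, enhancement)      # ~70 at 2 keV, ~360 at 10 keV
```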
Following [31], we define as "fully-contained" an electron/positron event starting within the detector with at least 10 radiation lengths ($\sim 4.3$ m) contained within the detector. Furthermore, the lepton event must point back to the sun within a half-angle $\theta_{cone} = 20°\,\sqrt{10\ \text{GeV}/E_\nu}$, and the energy of the neutrino must obey $E_\nu \geq 1.5$ GeV. We then find [31]

$$\int dV\, \eta(r)\,\epsilon(r,z) \sim \eta\, V_0 \times \frac{2}{3} \times \frac{1}{2},$$

where $\epsilon(r,z)$ is the event acceptance, the factor 2/3 is the fraction of lepton events which will be within $\theta_{cone}$, and the factor 1/2 is the fraction of the fiducial volume which will yield fully-contained events. It was shown in [31] that, using the background estimate of [59], one would expect less than 5 electron/positron events satisfying these cuts arising from atmospheric neutrinos during the specified runtime. We will thus consider a model which would produce 10 signal events arising from dark matter annihilating in the sun as being excludable. To determine the electron (anti-)neutrino flux at Kamioka, we have used the WimpEvent routine, run on the Hawaii Open Supercomputing Center cluster. Of the $10^7$ dark matter annihilations which were simulated (as described in section III) for each annihilation channel and value of $m_X$, $2 \times 10^6$ were used to compute the neutrino spectra at the detector. The effect of neutrino propagation through the earth typically suppresses $N_z$ by $\sim 25-50$% (depending on the mass and the annihilation channel).

For CDMS, we will roughly estimate the sensitivity to dark matter with long-range interactions from their published bounds [7] on dark matter with elastic contact isospin-invariant interactions. The bound CDMS can place on $C$ can be related to its bounds on $\sigma_p^{SI}$ by equating the predicted number of recoil events above threshold,

$$C = \sigma_p^{SI}\, \frac{\displaystyle\left\langle \frac{1}{w^2}\int_{E_{thr}}^{E_{max}} dE_R\, \frac{m_{Ge}\, A^2}{2\,\mu_p^2}\, F_A^2(E_R) \right\rangle}{\displaystyle\left\langle \frac{1}{w^2}\int_{E_{thr}}^{E_{max}} dE_R\, \frac{2\pi\,\alpha^2\, Z^2}{m_{Ge}\, E_R^2}\, F_A^2(E_R) \right\rangle},$$

where the averages are taken over the halo velocity distribution, $u$ is the velocity of a dark matter particle far from the sun, and $w$ is the velocity of the same particle once it has reached the surface of the earth. When it reaches the surface of the earth, the kinetic energy of the particle has increased by an amount equal to the change in the gravitational potential energy. The change in the gravitational potential energy due to the sun and the earth ($V_{sun}$ and $V_{earth}$, respectively) can be written as $\Delta V_{sun,earth} = -\frac{1}{2} m_X v_{sun,earth}^2$, where $v_{sun} \approx 42.1$ km/s is the escape velocity of the sun at the radius of the earth's orbit, and $v_{earth} \approx 11.2$ km/s is the escape velocity of the earth at the surface of the earth. Using the relation $\Delta E_{kinetic} = -\Delta V_{sun} - \Delta V_{earth}$, we find $w = (u^2 + v_{sun}^2 + v_{earth}^2)^{1/2}$. The maximum recoil energy which can be transferred to a germanium nucleus is $E_{max} = 2 m_X^2 m_{Ge} w^2/(m_X + m_{Ge})^2$. We assume a threshold energy $E_{thr} = 2$ keV [7]. $u_{min}$ is the minimum dark matter velocity (far from the sun) such that scattering with $E_R > E_{thr}$ is kinematically possible, and is given by the expression

$$u_{min}^2 = \frac{E_{thr}\,(m_X + m_{Ge})^2}{2\, m_X^2\, m_{Ge}} - v_{sun}^2 - v_{earth}^2.$$

We again assume a Gaussian form factor $F_A(E_R)$; for the recoil energy range of interest, the Gaussian form factor for germanium differs from the Helm form factor [60] by at most 6%. We will assume a Maxwell-Boltzmann velocity distribution with $\bar v = 270$ km/s, and that the galactic escape velocity is 600 km/s. We will also assume a constant efficiency for events with recoil energy greater than the threshold energy to appear in the CDMS low-energy analysis band. Due to this assumption, the result shown here should be regarded as only an estimate of the sensitivity CDMS could obtain with present data to dark matter models with long-range interactions.
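As a concrete check of these kinematic inputs, the following sketch (our own, using the constants quoted in the text) evaluates $w$ and the minimum halo velocity for a 10 GeV candidate to produce a recoil above the 2 keV CDMS threshold.

```python
import numpy as np

C_KM_S = 2.998e5                # speed of light, km/s
V_SUN, V_EARTH = 42.1, 11.2     # km/s, escape velocities quoted in the text
M_GE = 72 * 0.938               # GeV, germanium nucleus mass

def w_at_earth(u):
    """Speed (km/s) at the earth's surface after gravitational infall."""
    return np.sqrt(u**2 + V_SUN**2 + V_EARTH**2)

def u_min_for_threshold(m_X, E_thr):
    """Minimum halo velocity (km/s) for a Ge recoil above E_thr (GeV), from
    E_max = 2 m_X^2 m_Ge w^2 / (m_X + m_Ge)^2 with w^2 = u^2 + v_sun^2 + v_earth^2."""
    w2_needed = E_thr * (m_X + M_GE)**2 / (2.0 * m_X**2 * M_GE)  # in (v/c)^2
    u2 = w2_needed * C_KM_S**2 - V_SUN**2 - V_EARTH**2
    return np.sqrt(max(u2, 0.0))

print(u_min_for_threshold(m_X=10.0, E_thr=2e-6))  # ~280 km/s for 2 keV
```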
The estimated sensitivity of CDMS and a 1 kT liquid scintillation neutrino detector are plotted in figure 6. For the LS neutrino detector, we assume 2135 live-days of data, and assume that dark matter annihilates exclusively to either $\tau\bar\tau$, $b\bar b$, $c\bar c$, $gg$ or $\nu\bar\nu$ (with equal coupling to all three neutrino flavors). In [31], it was shown that the sensitivity of CDMS to 10 GeV dark matter with isospin-invariant elastic contact interactions is roughly an order of magnitude greater than that of a 1 kT LS detector. Our calculation of the relative sensitivities of CDMS and a 1 kT LS detector to dark matter with long-range interactions bears out our original estimate of a roughly $10^2-10^3$ relative enhancement in sensitivity for the LS detector. Note that, for the models to which KamLAND would be sensitive, the sun would be in equilibrium (see figure 5) even if the annihilation cross-section were significantly smaller than 1 pb (assuming standard astrophysical assumptions). If the sun is not in equilibrium as a result of deviations from these assumptions, then the constraints which would be possible from neutrino detectors would be significantly suppressed.

We will not attempt a quantitative estimate of the sensitivity of XENON100 to dark matter with long-range interactions. XENON100's recoil energy threshold is defined in terms of scintillation photoelectrons; the detector's scintillation response to recoil energy ($L_{eff}$) is not measured for low recoil energies. Moreover, bounds from XENON100 are generated assuming that the number of photoelectrons is determined by a Poisson distribution. Even some low-energy recoils can thus produce enough scintillation photoelectrons to exceed the threshold. As a result of the uncertainties in the detector response at low recoil, an attempt to estimate the event rate expected at XENON100 for dark matter with long-range interactions is beyond the scope of this work. We will simply note that the recoil energy range for which XENON100 is sensitive is at best comparable to that of CDMS, while a xenon nucleus is roughly twice as heavy as that of germanium. This implies that the sensitivity of XENON100 relative to CDMS will be suppressed by roughly a factor of 4 for the case of long-range interactions.

Recent hints of low-mass dark matter have potentially been seen by the DAMA [1], CoGeNT [2] and CRESST [3] experiments. Dark matter models with long-range interactions have been discussed as a possible way of reconciling the data from these experiments with the constraints from other direct detection experiments [61,62]. Long-range interactions can affect not only the magnitude of the overall excesses seen by CoGeNT and CRESST, but can also affect the modulation seen by DAMA and CoGeNT. Although models with long-range interactions can provide a better fit to the overall excesses, they sometimes provide a worse fit to the modulation signals. We will not attempt to define a region of parameter-space for dark matter with long-range interactions which could match the DAMA, CoGeNT or CRESST data. This would require a detailed matching of the expected event spectrum with that observed by the experiments, which is beyond the scope of this work (and perhaps premature, given the issues raised in [15]). However, since CoGeNT also uses germanium as the target material, one would expect a 1 kT LS detector to be easily sensitive to dark matter models with long-range interactions which could potentially explain the data of CoGeNT.
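One way to arrive at the quoted factor of 4 (the text gives only the final number) is via the $\sigma \propto 1/(m_A E_{min})$ scaling of section VI. Relative to a contact-interaction comparison, where the coherent $A^2$ enhancement roughly compensates for the heavier target, the long-range rate per unit detector mass carries an extra power of $1/m_A$, so at comparable thresholds

$$\text{suppression} \sim \left(\frac{m_{Xe}}{m_{Ge}}\right)^{2} \approx \left(\frac{130\,m_p}{72\,m_p}\right)^{2} \approx 3.3,$$

i.e. roughly the stated factor of 4.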
Moreover, one should note that the low-mass CRESST region is consistent with scattering from both oxygen and calcium. Given the difference in mass, one would expect long-range interactions with calcium to be suppressed by roughly a factor 4 relative to oxygen, as compared to the case of contact interactions.

Finally, we can consider the sensitivity of LBNE (Long-Baseline Neutrino Experiment). We will assume the detector target material is liquid argon (configuration 2) [63], with a total fiducial volume of roughly 51 kT. Liquid argon-based neutrino detectors are expected to have very good event reconstruction; we will assume that liquid argon detectors permit a reconstruction of charged lepton flavor, energy and direction with at least the same resolution as liquid scintillation detectors. We then find

$$\int dV\, \eta(r)\,\epsilon(r,z) \sim \eta\, V_0 \times \frac{2}{3},$$

where the factor 2/3 again arises from the fraction of charged lepton events which would point back to the sun within angle $\theta_{cone}$. For a detector as large as LBNE, almost the entire fiducial volume can produce fully-contained events. We then see that the sensitivity which KamLAND could obtain with its 2135 day data set could be obtained by LBNE with only $\sim 17$ days of data.

Note that the possibility of dominant annihilation to leptonic channels is not inconsistent with dark matter-nucleus scattering which is large enough to be probed by neutrino detectors. For example, it could be that dark matter-quark scattering is mediated by an effective operator which permits velocity-independent, spin-independent scattering, but does not permit s-wave annihilation (an example of such an operator is $\bar X X \bar q q$). In this case, the dark matter-nucleus scattering cross-section could be reasonably large, while the cross-section for dark matter to annihilate to quarks would be $v^2$-suppressed. If dark matter coupled to leptons through an operator which permitted s-wave annihilation (an example of such an operator would be $\bar X \gamma^\mu X \bar f \gamma_\mu f$, if the dark matter were a Dirac fermion), then the dark matter would mostly annihilate to leptons.

VII. CONCLUSIONS

We have computed the capture rates and neutrino spectra which are relevant for neutrino-based searches for low-mass dark matter in the sun. The neutrino spectra are presented at a distance of 1 AU from the sun, accounting for matter effects in the sun, and vacuum oscillations (assuming a normal hierarchy and $\theta_{13} = 10°$). The capture rates have been found assuming either elastic contact, elastic long-range, or inelastic contact interactions. These are the tools required for a neutrino detector to search for dark matter annihilating in the sun. As an application of these tools, we plot the sensitivity of a 1 kT LS detector, with 2135 days of data, to low-mass dark matter with isospin-invariant elastic long-range interactions with Standard Model nucleons. We have found that neutrino detectors have a greatly enhanced sensitivity to dark matter with long-range interactions, relative to leading direct detection experiments such as CDMS. This enhancement is readily understood; in the case of long-range interactions, the scattering matrix element is inversely proportional to $q^2 = 2 m_A E_R$. Scattering rates in detectors with heavy targets, such as germanium and xenon, are heavily suppressed. Dark matter capture in the sun involves scattering from low-mass targets such as hydrogen and helium, implying that these scattering rates will see a relative enhancement.
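The $\sim 17$ day figure is a simple exposure-matching estimate. A back-of-envelope check, under the stated assumptions about density and containment fractions:

```python
# Exposures match when (effective target mass) x (livetime) are equal.
rho_ls = 0.8                       # t/m^3, LS density assumed in the text
kamland_mass = 1000 * rho_ls       # t, from the 1000 m^3 fiducial volume
kamland_eff = 0.5                  # half the volume yields contained events
lbne_mass = 51_000                 # t, 51 kT liquid argon
lbne_eff = 1.0                     # nearly the whole volume is usable
days = 2135 * (kamland_mass * kamland_eff) / (lbne_mass * lbne_eff)
print(days)                        # ~17 days
```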
A LS neutrino detector with the exposure already available to KamLAND could have a sensitivity up to 2 orders of magnitude greater than that of CDMS. LBNE (with a 51 kT liquid argon target) could achieve similar sensitivity with roughly 17 days of data. We have also found that low-mass dark matter with inelastic contact interactions can be probed by neutrino detectors even for $\delta m_X \sim 50$ keV. This implies that neutrino detectors can be sensitive to inelastic dark matter models which are more difficult to probe on earth, because gravitational infall allows inelastic scattering in the sun for models where inelastic scattering would not be kinematically possible on earth.

The choice $\theta_{13} = 10°$ is consistent with recent data from the Daya Bay experiment [53]. The neutrino spectrum is slightly different from the $\theta_{13} = 0°$ case, with the difference most noticeable in the case of annihilation entirely to neutrinos. For searches involving upward-going leptons, there will also be a modification to the neutrino spectrum due to passage through the earth. This effect will depend on the location of the detector; for any particular detector, one can obtain the appropriate neutrino spectra by running the WimpEvent program, inputting the data files for the neutrino spectrum at 1 AU found at http://www.phys.hawaii.edu/~superk/post/spectrum.

It is worth noting that a direct detection experiment with a target molecule containing hydrogen would also be expected to have enhanced sensitivity to dark matter with long-range interactions. Gaseous time projection chambers (such as DRIFT [64], DMTPC [65], D3 [66], MIMAC [67] and NEWAGE [68]) using hydrocarbon targets may be well-suited for this type of search.

Specific dark matter models with long-range interactions may have solar capture rates that are enhanced by collective effects, such as multiple scattering. Neutrino searches thus have enhanced sensitivity to such models, and current data may already provide tight constraints. It would be interesting to consider such models in more detail.

VIII. ACKNOWLEDGMENTS

We gratefully acknowledge K. Choi, D. Marfatia, M. Sakai, P. Subramoney and S. Vahsen for useful discussions. We also thank the Hawaii Open Supercomputing Center. This work is supported in part by the Department of Energy under Grant DE-FG02-04ER41291.

FIG. 7: Neutrino spectra (left panels) and anti-neutrino spectra (right panels) at 1 AU for dark matter annihilation to the $b\bar b$ channel. The spectra for $\nu_e$ ($\bar\nu_e$), $\nu_\mu$ ($\bar\nu_\mu$), and $\nu_\tau$ ($\bar\nu_\tau$) are shown in red, green, and blue, respectively. Spectra are shown for $m_X = 6, 8$ GeV.

FIG. [?]: Neutrino spectra (left panels) and anti-neutrino spectra (right panels) at 1 AU for dark matter annihilation to the $\tau\bar\tau$ channel. The spectra for $\nu_e$ ($\bar\nu_e$), $\nu_\mu$ ($\bar\nu_\mu$), and $\nu_\tau$ ($\bar\nu_\tau$) are shown in red, green, and blue, respectively. Spectra are shown for $m_X = 4, 6, 8$ GeV.
Collaborative Cultivation of Agronomy and Biological Science Specialties with Demands Induction and University-Industry Cooperation

To overcome the problems that emerged during the cultivation of undergraduate students in the agronomy and biological science specialties and to promote the quality of personnel training at Hunan University of Humanities, Science and Technology, investigations and reforms have been carried out in recent years. This paper analyzes the common problems in the cultivation of undergraduate students in these specialties and introduces the related reforms practiced in the past years. Briefly, we built a scientific professional cognitive education model, encouraged students to participate in scientific and technological innovation and production, adopted individualized cultivation for students, and cultivated students with double tutors from entrance to graduation. Our university also took measures to promote young and middle-aged teachers' teaching levels and their ability to serve social demands in an orderly way. Our university and other institutions have applied these achievements and acquired good effects.

Keywords—agronomy and biological science specialties; personnel training quality; reform; professional cognitive education model; undergraduate

I. INTRODUCTION

The College of Agriculture and Biotechnology of Hunan University of Humanities, Science and Technology has been working closely with a number of large-scale enterprises, including Hunan Haili High-tech Group Co., Hunan Five Stars Biotechnology Co., and Hunan Jiulong Economic and Trade Group Co., to form an innovation alliance and to carry out in-depth production, teaching and research cooperation since 2011. While conducting scientific research and innovation, we simultaneously carried out reform practices for training innovative undergraduate talents in the agricultural and biological science specialties, relying closely on the Collaborative Innovation Center for Field Weeds Control of Hunan Province and the Provincial University-Enterprise Cooperation Talent Demonstration Base for Undergraduates in Agricultural and Biological Specialties; all of this work was focused on improving the quality of personnel training under industrial guidance. The comprehensive quality of undergraduate students and the training effects have been improved effectively, and the social acceptance of the students has been promoted significantly. During the past years, we constructed a professional cognitive education model adapted to the development and requirements of industries, set up the "1+2+1" course teaching mode carried out jointly by on-campus and off-campus teachers, built a double-tutor model through which each undergraduate was guided by a specific on-campus teacher from entrance to graduation and by several teachers from enterprises at different terms, and took effective measures to promote the teachers' teaching and scientific research abilities. This paper analyzes the common problems and perplexities encountered during the cultivation of undergraduate students in the agronomy and biological science specialties, and introduces our practice and successful experiences of collaborative cultivation with enterprises. Our achievements and experiences can be a helpful reference for other universities.

II. COMMON TEACHING PROBLEMS IN AGRONOMY AND BIOLOGICAL SCIENCE SPECIALTIES

A.
Students lacked professional confidence

Many undergraduate students in the agriculture specialty and related specialties lacked the necessary interest in and confidence about their majors, and some of them had no stable professional ideas after enrollment [1][2]. Some students even applied to transfer to more popular majors such as information engineering, electronic commerce, finance, and civil engineering.

B. Students lacked the ability to innovate and apply knowledge

Guided by the traditional training mode, the students were not taught in accordance with their aptitudes, and they seldom participated in teachers' scientific research or in the various discipline competitions [3]. Thus, their learning interests and innovative abilities could not be fully stimulated during their undergraduate stage.

C. Insufficient practical teaching resources

The practical teaching contents of the agronomy and biological science specialties cover a wide range and are changing rapidly, whereas the existing teaching resources in universities are limited and cannot meet teaching needs [4][5][6]. These teaching materials are updated slowly and cannot reflect the development trends of industries and enterprises. Through the collaborative training mode, students are cultivated jointly by both sides, and more practical teaching resources can be shared.

D. Insufficient ability of teachers to adapt to the university's transformation and development

The existing knowledge structure of the teachers in the agronomy and biological science specialties in local universities is not well suited to the transformation and development of these universities [7][8][9]. The universities should retrain the teachers, encourage them to become good at teaching and to love teaching, and stimulate them to make undergraduate education the first priority in their work.

III. REFORM PRACTICES

A. Establishing a scientific professional cognitive education model

The teachers should correctly guide the students to learn about the curricula setting, the key cultivation points, and the core competencies of these specialties, rather than simply introduce what they should study, what to do in the future, or matters such as salary and employment in the industries. To achieve these goals, we took the following measures. Firstly, we set up professional introduction courses, drew up the outlines of these courses jointly with industry experts, and collected opinions from the industry experts to clarify the objectives, tasks, and requirements of the courses. Secondly, the teaching mode of the professional introduction courses was reformed to use both on-site teaching and expert lectures, each accounting for half of the class hours: professional teachers in the on-campus practice bases held the on-site teaching, whereas well-known industry experts gave the lectures and interacted with the students. Thirdly, the teachers often went out to visit relevant enterprises and public institutions to learn about the industry demand for talents, and then introduced the urgent social needs for talents in these specialties to their students.

B.
Students were widely involved in scientific and technological innovation and production

Relying on the provincial cooperative innovation center, a number of provincial and municipal innovation and entrepreneurship platforms were built, including the Zhongshi Edible and Medical Fungi Star Creation Base and the Guanjin Medical Maker Base, to encourage students to take part, forming integrated platforms for talents, markets, capital, and the students' technological innovation achievements.

C. Individualized cultivation for students was adopted

The teachers were organized to connect their scientific research programs with the students, bidirectional choices were arranged between teachers and students, and the students were encouraged to take part in the teachers' scientific activities. The students were organized to join the teachers' teams, and they themselves applied for projects under the undergraduates' research learning and innovative experiment plans, so as to stimulate the students' interest in scientific research and to train them according to their personalities. Based on this cultivation mode, the students were organized to participate in competitions in the agricultural and biological sciences. We also established an incentive mechanism to reward the students for their awards and research achievements such as papers, production standards, software intellectual property, and patents.

D. Cultivating students with double tutors from entrance to graduation

The students were trained jointly, relying on the alliance units of the provincial innovation center. Each teacher supervised four or five students all the way through. After entering the professional practice and graduation thesis periods, the students were sent to enterprises including Hunan Haili High-tech Group Co., Hunan Five Stars Biotechnology Co., and Hunan Jiulong Economic and Trade Group Co. to carry out their professional practice and graduation thesis work under the guidance of both on-campus and off-campus tutors. Some of the students participated in the projects of the enterprise tutors to promote their innovative and application abilities, whereas others carried on the projects of their on-campus teachers in the enterprises.

E. Promoting young and middle-aged teachers' teaching level and ability to serve social demands in an orderly way

Three to five teachers under the age of 40 were selected from these specialties to work in enterprises and research institutes every year, gradually forming a double-qualified professional teaching team with one specialty and many abilities. Eight to ten teachers were selected every year to visit well-known overseas institutes and universities and to participate in relevant high-level international conferences, to broaden their scientific horizons and enhance their professional abilities.

IV. MAIN INNOVATIONS

A. Professional cognitive education model was innovated

The education model was created and performed jointly by the university and enterprises according to the personnel demands of society. Under this education model, the students formed a correct and comprehensive understanding of their specialties and gradually came to love them.

B.
Training mode of collaborative innovation talents in agricultural and biological science specialties was created

Based on the idea of cultivating talents for local economic needs, we relied on the advantages of the provincial collaborative innovation center and the plant protection discipline in cultivating students. As a result, the students were trained in accordance with their aptitudes and personalities, and both the personality and the ability of the students were developed jointly.

C. Advantages of disciplines and platforms were brought to their full potential

The high-quality teaching resources of platforms and bases were shared, and the double-tutor system for undergraduate students was established, in which on-campus and off-campus tutors guided the students together throughout all stages of their education. The cultivation of students outside and inside the university was thus seamlessly docked, and whole-process cultivation and multi-party joint cultivation were implemented.

V. POPULARIZATION AND APPLICATION OF THE ABOVE RESEARCH RESULTS

The first agricultural and biological science specialty, i.e. the biotechnology specialty, was created in 2007 at Hunan University of Humanities, Science and Technology. Since then, five related specialties have been opened for undergraduate education. Through recent years of teaching practice, we have made a number of high-quality landmark achievements; these achievements have been applied in our daily teaching and introduced to other universities and institutions, and have received positive responses.

A. Personnel training quality was improved significantly

The personnel training quality was improved, the social acceptance of the agronomy and biological science specialties increased significantly, and the quality of graduates was widely recognized by all walks of life. From 2017 to 2018, a number of publicly listed companies and industry-leading enterprises, including Hunan Ava Seeds Co., Hunan Five Stars Biotechnology Co., Hunan Zhongshi Agriculture Bio-tech Co., Shenzhen Noposion Agrochemical Co., and Hunan Dafang Agrochemical Co., came to Hunan University of Humanities, Science and Technology to hold more than ten on-site job fairs successively. A large number of outstanding graduates emerged, who either successfully started their own businesses or were promoted to important positions in relevant large-scale enterprises.

B. Significant achievements in undergraduate teaching quality engineering and subject competitions

We made significant achievements in undergraduate teaching quality engineering and built a number of provincial platforms. In 2015, relying on the local enterprise Hunan Jiulong Economic and Trade Group Co., Hunan University of Humanities, Science and Technology successfully set up the Hunan Agricultural and Biological Science Talents Training Demonstration Base. In 2016, our university was approved to set up the Hunan Modern Agriculture and Bioengineering Virtual Simulation Experimental Teaching Center and the Zhongshi Edible and Medical Fungi Star Creation Base. In addition, the undergraduate students in these specialties achieved good results in subject competitions over the past years.
In 2017, students in the agronomy and biological science specialties won two second-class prizes and five third-class prizes in the Second National College Students Life Science Innovation and Entrepreneurship Competition, and in 2018 they won three second-class prizes and eight third-class prizes in the Third National College Students Life Science Innovation and Entrepreneurship Competition. From 2015 to 2018, they obtained two projects of the national research learning and innovative experiment plan for undergraduate students, seven projects of the provincial research learning and innovative experiment plan for undergraduate students, and three industry-university cooperation and collaborative cultivation projects of the Ministry of Education.

C. A number of influential teaching research and educational reform achievements were acquired

Since 2016, sixteen papers on undergraduate teaching research and teaching reform in the agronomy and biological science specialties have been published in influential journals. Among them, four papers were indexed by CPCI-SSH. Furthermore, two textbooks were published.

D. Significant achievements were made in the construction of the teaching staff team

Since 2016, the College of Agriculture and Biotechnology of our university has sent a total of ten teachers to enterprises and scientific research institutes for training. Twelve teachers have visited or studied at well-known universities including Bath Spa University in the UK, the University of Western Australia, Vanung University, and I-Shou University. A double-qualified teaching team with international perspectives has been formed.

E. A number of achievements in scientific research were gained

Meanwhile, the teaching reforms promoted the teachers' scientific research. In 2016, three achievements by teachers in the agronomy and biological science specialties passed the provincial appraisal, all of which reached internationally advanced or domestically leading levels. The teachers also won the third prize of the China Circular Economy Association in 2018 and the second prize for Loudi Science and Technology Progress in 2017. More than 80% of the undergraduate students in the agronomy and biological science specialties took an active part in these scientific research projects. For example, in the project "Investigation of weed species and control technology in Camellia oleifera forests in the hilly areas of the central Hunan Province", all of the 2016-grade students majoring in plant protection and agricultural science took part in the investigation of weed niches in the middle part of Hunan Province. In the process of these scientific researches, the teachers guided the undergraduate students' practice and combined scientific research and teaching organically. From 2015 to 2018, the undergraduate students published more than 30 scientific research papers, twelve of which had undergraduate students as first authors.

F. Application of achievements

Some of the achievements were introduced to other universities and institutions and were welcomed. The achievements were applied at Hunan University of Arts and Sciences, Shaoxing University of Arts and Sciences, Loudi Vocational and Technical College, Hunan Zhongshi Agriculture Bio-tech Co., Hunan Five Stars Biotechnology Co., and Lianyuan Fengleyuan Agricultural Development Co., with sound effects (Table I).
(Table I lists the applied items: the practice training mode, students applying for scientific projects with off-campus tutors, and students participating in subject competitions.)

VI. SUMMARY

The agronomy and biological science specialties were not popular, and the professional ideals of the undergraduate students in these specialties were not strong; these phenomena seriously affected the quality of personnel training. This paper analyzed the common problems in the cultivation of undergraduate students in the agronomy and biological science specialties at Hunan University of Humanities, Science and Technology, and introduced the practice and experiences of our collaborative cultivation mode, which was built according to social needs. After several years of practice, the quality of undergraduate students was significantly promoted, the achievements of teaching research and educational reforms were adopted by other universities and related institutions, and the responses were positive.
Mid-Pliocene climate modelled using the UK Hadley Centre Model: PlioMIP Experiments 1 and 2

Abstract. The Pliocene Model Intercomparison Project (PlioMIP) is a sub-project of the Paleoclimate Modelling Intercomparison Project (PMIP) whose objective is to compare predictions of the mid-Pliocene climate from the widest possible range of general circulation models. The mid-Pliocene (3.3–3.0 Ma) is the most recent sustained period of greater warmth and atmospheric carbon dioxide concentration than pre-industrial times and as such has potential to inform predictions of our warming climate in the coming century. This paper describes the UK contribution to PlioMIP using the Hadley Centre Model both in atmosphere-only mode (HadAM3, PlioMIP Experiment 1) and atmosphere-ocean coupled mode (HadCM3, PlioMIP Experiment 2). The coupled model predicts a greater overall warming (3.3 °C) relative to the control than the atmosphere-only model (2.5 °C). The Northern Hemisphere latitudinal temperature gradient is greater in the coupled model, with a warmer Equator and colder Arctic than the atmosphere-only model, which is constrained by sea surface temperatures from Pliocene proxy reconstructions. The atmosphere-only model predicts a reduction in equatorial precipitation and south Asian monsoon intensity, whereas the coupled model shows an increase in the intensity of these systems. We present sensitivity studies using alternative boundary conditions for both the Pliocene and the control simulations, indicating the sensitivity of the mid-Pliocene warming to uncertainties in both pre-industrial and mid-Pliocene climate.

Introduction

General circulation models are used to predict the climate changes that can be expected in the future. These models vary significantly in the way they parameterise certain complex processes and as such vary in their simulation of the modern climate as well as in their past and future predictions. The Paleoclimate Modelling Intercomparison Project (PMIP, see http://pmip3.lsce.ipsl.fr/) exists to create intercomparisons of the widest possible range of models forced as nearly as possible with identical palaeo boundary conditions. The intercomparisons focus on different time slices of particular interest to the scientific community, including the Last Glacial Maximum (21 kyr), Last Interglacial (130 kyr, 125 kyr and 115 kyr) and mid-Holocene (6 kyr), and have more recently been extended to 8.2 kyr and the Pliocene, specifically the mid-Pliocene warm period (MPWP) from 3.29–2.97 Ma (Haywood et al., 2010).
The Pliocene is of particular interest for the development of the GCMs used in future climate predictions as it is the most recent sustained period which is significantly warmer than the present day, and thus the climate system may operate in a similar manner to potential climates of the coming century. A substantial dataset describing the MPWP time slab has been assembled by the US Geological Survey PRISM project (Pliocene Research, Interpretation and Synoptic Mapping, http://geology.er.usgs.gov/eespteam/prism/), including topography data, sea surface temperatures, vegetation reconstruction and ice sheet extents and topography (Dowsett et al., 2010). These data are used to force the models to achieve consistent simulations of how the Pliocene climate differs from the pre-industrial control. This paper describes the implementation of the PRISM3 boundary conditions for use with the UK Hadley Centre Model according to the PlioMIP protocols for atmosphere-only models ("Experiment 1", Haywood et al., 2010) and coupled ocean-atmosphere models ("Experiment 2", Haywood et al., 2011), a summary of the basic results and some initial analysis including model-data comparison.

Model description

We use the UK Meteorological Office Hadley Centre Model, also known as the Unified Model (UM), for these experiments: specifically the atmosphere-only version, HadAM3 (Pope et al., 2000), for Experiment 1 and the coupled ocean-atmosphere version, HadCM3 (Gordon et al., 2000), for Experiment 2.

Atmosphere

The atmosphere module has a resolution of 96 × 73 grid points (3.75° × 2.5°) and 19 vertical levels using a hybrid σ-pressure grid with a timestep of 0.5 h. An Arakawa B-grid (Arakawa and Lamb, 1977) is used in the horizontal plane to improve accuracy, with thermodynamic variables stored at the centre of the grid and wind components at the corners (Johns et al., 1997).

The atmosphere-only model uses 12 specified mid-monthly fields of sea-surface temperature as boundary conditions, which are interpolated to daily values at run-time.

A convection scheme due to Gregory et al. (1997) is included which accounts for the direct effects of convection on momentum. A first-order scheme for turbulent vertical mixing of momentum and thermodynamic quantities is used within the boundary layer, which can occupy up to the first 5 layers of the model (Smith, 1990). Sub-grid-scale gravity wave and orographic drag parameterisations include the impact of orographic variance anisotropy (Milton and Wilson, 1996; Gregory et al., 1998).

Clouds are modelled as either water, ice or mixed-phase between 0 and −9 °C. Clouds are aggregated into 3 layers (low, medium and high) and form when the cell moisture level standard deviation exceeds a critical level of relative humidity, RH_crit = 0.7. The threshold of total water content for precipitation to occur is varied between land and ocean cells to account for the different levels of available cloud condensation nuclei.

Radiation and land-surface energy schemes

The radiation scheme of Edwards and Slingo (1996) has 6 short-wave and 8 long-wave bands and represents the effects of water vapour, carbon dioxide, ozone and minor trace gases. A background aerosol climatology following Cusack et al.
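As an illustration of the hybrid σ-pressure vertical coordinate mentioned above, the sketch below shows how model-level pressures follow the surface pressure near the ground and relax to fixed pressure levels aloft. The coefficients are made up for illustration and are not the actual HadAM3 level set.

```python
import numpy as np

def hybrid_pressure(a_k, b_k, p_surf):
    """Pressure on a hybrid sigma-pressure level: p_k = a_k + b_k * p_surf.
    b_k ~ 1 near the surface (terrain-following), b_k ~ 0 aloft (pure pressure)."""
    return a_k + b_k * p_surf

a = np.array([0.0, 50.0, 200.0, 400.0])      # hPa, illustrative only
b = np.array([1.0, 0.90, 0.50, 0.0])
for p_surf in (1013.0, 850.0):               # sea level vs. high orography
    print(p_surf, hybrid_pressure(a, b, p_surf))
```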
(1998) increases the atmospheric absorption of short-wave radiation relative to previous versions, representing a significant improvement. The land-surface energy scheme, MOSES I (Cox et al., 1999), accounts for the effects of freezing and melting of soil moisture in 4 soil layers and includes the impact of atmospheric concentration of carbon dioxide, water vapour and temperature on stomatal resistance to evapo-transpiration.

Ocean

The ocean module has a horizontal resolution of 288 × 144 grid points (1.25° × 1.25°), that is, 6 ocean cells correspond to each atmosphere cell. The land-sea mask is defined at the atmosphere resolution to simplify coupling. There are 20 vertical levels with finer definition at the ocean surface: the first cell is 10 m deep. The ocean timestep is 1 h. The ocean and atmosphere modules are coupled once a day with no flux adjustment being necessary.

Modern bathymetry is derived from the ETOPO5 reconstruction (Edwards, 1989) using a simple smoothing algorithm. Behaviour in some significant channels is modified from the resulting coarse interpolation to ensure a more realistic model performance (Gordon et al., 2000). The Greenland-Scotland ridge and Denmark Strait have significant sub-gridscale channels which are lost in the smoothing and so have been re-created by deepening single-cell-width channels in 3 locations along the ridge to reproduce a mean outflow matching observations. The resolution of the Gibraltar Strait leaves the Mediterranean isolated, so a partial mixing of the closest cells at each depth down to 1200 m is carried out to represent the actual mixing that occurs across this opening. The region around Indonesia is modified to ensure that flow occurs between Indonesia and Papua New Guinea and not between Indonesia and the mainland of Asia.

A rigid lid approach is used, meaning there is no variation in the volume of the ocean. Freshwater flux from land runoff is therefore converted to a salinity flux on entering the ocean. Ice sheets are not modelled dynamically in HadCM3, therefore the snow accumulation on each ice sheet is balanced by a notional equivalent loss through iceberg calving, represented as a freshwater flux distributed around the edge of the ice sheet and polar oceans.

The ocean mixed layer mixing of tracers (potential temperature and salinity) is represented by the Kraus and Turner (1967) model, which assigns 15% of gravitational potential energy and 70% of wind-stress energy to turbulent kinetic energy; this is mixed out exponentially with depth. At all depths, 5 iterations of convective mixing of tracers are carried out each timestep. Horizontal mixing of tracers is carried out using the isopycnal parameterisation of Gent and McWilliams (1990). Horizontal mixing of momentum is performed using a latitudinally varying formulation which, coupled with the finer resolution of the ocean grid, enables western boundary currents to be resolved.

Sea ice

Sea ice is calculated as a zero-layer model on top of the ocean grid. Partial cell coverage of sea ice is possible in all high-latitude cells, up to 0.995 in the Arctic and 0.98 in the Antarctic, according to the parameterisation of sea ice concentration due to Hibler (1979). Ice forms primarily by freezing in leads; ice can also form from snow falling on existing ice and by freezing at the base at the freezing point of −1.8 °C.
A constant salinity is assumed for ice; excess salt from freezing is rejected into the ocean. Ice drift follows the ocean currents in the top layer, and converging ice has a depth limit of 4 m. Ice albedo is set at 0.8 below −10 °C and 0.5 above 0 °C, with a linear variation between.

Model validation

The Hadley Centre model validation is documented for HadAM3 in Pope et al. (2000) and for HadCM3 in Gordon et al. (2000). The model has been shown to reproduce the main features of modern climate observations.

Experimental design

Table 1 summarises the experimental design for both Experiment 1 (Haywood et al., 2010) and Experiment 2 (Haywood et al., 2011) Pliocene and control simulations. Also included are details for an additional Pliocene simulation, a fully coupled experiment based on the previous PRISM2 version of boundary conditions (Dowsett, 2007), which will be referred to in the discussion below.

Land-sea mask

The PlioMIP protocols define two possible land-sea masks: a "preferred" mask which differs from modern (primarily due to sea-level change and glacial erosion) and an "alternate" mask which is the same as modern. For Experiment 1, the Pliocene simulation land-sea mask was derived from the PlioMIP preferred fractional dataset, being first interpolated onto the UM 3.75° × 2.5° grid, then those cells with a land fraction greater than 0.5 being set to be land. The principal differences from the modern mask are in the Hudson Bay, which is filled in at low altitude, and the regions of West Antarctica where the modern ice shelf is absent in the Pliocene. The Panama Seaway is post-edited to be closed, as the interpolation process renders this region as ocean in the coarser grid.

The standard UM land-sea mask was used for the control run. All of the coupled Experiment 2 simulations also use the standard UM land-sea mask without modification, due to the difficulty of changing the land-sea mask in the ocean module and achieving a stable solution. This mask is derived from the US Navy 10' topography (NCAR/Navy, 1984) and can be seen in the boundary condition plots in Fig. 1.

Topography including ice sheets

For both experiments, Pliocene topography was created using the anomaly method specified in the PlioMIP protocol. Anomalies were calculated as the difference between the PlioMIP Pliocene and control datasets. For Experiment 1, the UM standard topography was then extended to cover the different land-sea mask using an iterative expansion algorithm before applying the anomaly values. Ice sheet topography is included simply as part of the overall topography. The resultant field was then masked with the required final land-sea mask, ensuring that all ocean points have an elevation of 0 m and no land points are below sea level. For the pre-industrial control dataset, calculation of the derived fields required by the model to describe the orographic variance was performed on the high-resolution topography data prior to regridding onto the UM resolution. These data were also used for the Pliocene in both experiments, as topographic data on the same resolution as the modern data are not available to make the equivalent variance calculations. In the case of Experiment 1, the data were expanded to match the preferred land-sea mask.
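The fractional-to-binary mask conversion described above amounts to a simple threshold. A sketch, using a toy land-fraction patch rather than the PRISM3 data:

```python
import numpy as np

def binary_mask(land_fraction, threshold=0.5):
    """Regrid a fractional land-sea mask to the UM binary mask: cells whose
    interpolated land fraction exceeds the threshold become land (True)."""
    return land_fraction > threshold

# Toy 3x3 patch of interpolated land fractions on the 3.75 x 2.5 degree grid
frac = np.array([[0.1, 0.6, 0.9],
                 [0.4, 0.5, 0.7],
                 [0.0, 0.2, 0.55]])
print(binary_mask(frac).astype(int))
# Manual post-edits (e.g. closing the Panama Seaway) would follow this step.
```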
Land surface properties including ice sheet extent

Properties required by the land-surface energy scheme for the Pliocene simulations were derived from the PlioMIP BIOME4 dataset by means of the lookup table of Haywood et al. (2010). This table relates the 28 biome types describing land cover to the MOSES I input parameters (Cox et al., 1999) via the land-use classifications of Wilson and Henderson-Sellers (1985). These data include ice as a possible surface type, hence the extent of ice sheets is naturally incorporated into this process. In land regions which are specified as ocean in the PlioMIP dataset, these derived parameter fields were expanded out from neighbouring land points, most notably in the West Antarctic region. This conversion is illustrated using snow-free albedo, which is shown in Fig. 1.

For the control experiments, the standard UM dataset of vegetation was applied, which is based on the Wilson and Henderson-Sellers (1985) archive of land cover.

Soil properties

Soil properties in both Pliocene and control experiments are those used in the standard UM setup derived from Wilson and Henderson-Sellers (1985), expanded in the case of the Experiment 1 Pliocene simulation to match the preferred land-sea mask. In reality, soil properties could be different in the Pliocene compared to modern, but a lack of Pliocene palaeosol data precludes meaningful changes being implemented in the model.

River routing (Experiment 2 only)

River routing in both Pliocene and control experiments is the same as the UM standard catchments.

Sea surface temperatures and sea ice (Experiment 1 only)

Sea surface temperatures (SST) for the Pliocene simulation were created using the specified PlioMIP anomaly method (Haywood et al., 2010, Sect. 2.2). The difference between the Pliocene and modern fields was calculated and applied to the UM standard SST climatology for each month. Temperatures were not allowed to fall below −1.7 °C. The data were expanded into regions which are land in the standard UM mask using an iterative process, in particular into the Hudson Bay. The SST fields for January and July are shown in Fig. 2. Sea ice is specified on a monthly basis where SST < −1.7 °C, with a depth of 2 m in the Northern Hemisphere and 1 m in the Southern Hemisphere, as in the control.
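The anomaly method for the Experiment 1 SST and sea-ice boundary conditions can be summarised in a few lines. The sketch below is our own paraphrase of the procedure described above, not the actual processing code.

```python
import numpy as np

T_FREEZE = -1.7  # deg C, floor applied to Pliocene SSTs

def pliocene_sst_and_ice(um_control_sst, prism_plio, prism_modern, nh=True):
    """PlioMIP anomaly method: add the PRISM3 (Pliocene - modern) SST anomaly
    to the UM control climatology month by month, prescribe sea ice where the
    raw field is below -1.7 C (2 m thick in the NH, 1 m in the SH), and floor
    the final SST at -1.7 C."""
    raw = um_control_sst + (prism_plio - prism_modern)
    ice_depth = np.where(raw < T_FREEZE, 2.0 if nh else 1.0, 0.0)
    return np.maximum(raw, T_FREEZE), ice_depth
```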
Initialisation For Experiment 1 simulations, the atmosphere was initialised using a standard UM preindustrial restart file.For Experiment 2, the atmosphere and ocean modules were initialised from the respective Pliocene and control simulations based on PRISM2 bound- for coupled experiments, sea ice is initialised using the method of Sect.3.6. soil temperature is initialised with a constant value of 14 • C in ice free regions and −6 • C with ice cover. soil moisture is initialised with constant default values for each of the 4 layers.snow depth is initialised with flat profile of 50 m over ice and none elsewhere. Integration length, spinup and climatological means Both Experiment 1 simulations were run for 50 yr, as required by the PlioMIP protocol, enabling the simulations to come to equilibrium.The Experiment 2 Pliocene simulation was run for 500 yr as specified in the protocol.The control model was run for 200 yr, but this simulation is a continuation from that of Lunt et al. (2010) with no change to the boundary conditions.Average climatologies for the Experiment 1 simulations were calculated over the last 30 yr and for Experiment 2 over the final 50 yr. Results The results are presented here firstly for Experiment 1, then Experiment 2. Global Global means for the Experiment 1 and 2 simulations are listed in Table 2.The coupled model predicts a significantly greater level of warming in the Pliocene relative to the control than the atmosphere-only version: 3.3 • C compared with 2.5 • C.There is also a small rise in total precipitation in the Pliocene relative to the control of around 4 % in Experiment 1 and 5.5 % in Experiment 2. The latitudinal temperature gradient, especially in the Northern Hemisphere is significantly reduced in the Pliocene relative to the control.Polar amplification in the Pliocene relative to the control is clear in the zonal profiles.These results are apparent in greater detail in Fig. 5 which shows annual, DJF and JJA mean surface air temperature patterns for both Pliocene and control simulations and the difference between them.The polar amplification is most pronounced in winter for both hemispheres.In the Antarctic, the majority of the warming maxima occur in regions which are ocean in the Pliocene and land in the control simulation due to the change in albedo and heat capacity, combined with smaller areas on land where the ice sheet is at considerably lower altitude in the Pliocene model.Pliocene Arctic warming is also associated with regions of change in the altitude or extent of the Greenland icesheet in a similar manner to the Antarctic.There is also a warming evident in the North Atlantic driven by the sea surface temperature boundary conditions which are significantly warmer than in the control. Experiment 1 Figure 6 summarises the precipitation patterns for the Pliocene and control experiments and the difference between them, showing the annual, DJF and JJA means. There is a reduction in equatorial rainfall in the Pliocene, especially the extent and intensity of the south Asian summer monsoon systems.poles lead to an increased latitudinal temperature gradient, which is broadly similar in shape to that predicted for the control experiment (see Fig. 7a), in contrast to the reduction in latitudinal gradient shown by the PRISM reconstruction, implicit in Fig. 4c.This global shift in temperature is also apparent in greater spatial detail in Fig. 8, which shows global temperature patterns for Experiment 2. 
Figure 6 summarises the precipitation patterns for the Pliocene and control experiments and the difference between them, showing the annual, DJF and JJA means. There is a reduction in equatorial rainfall in the Pliocene, especially in the extent and intensity of the south Asian summer monsoon systems.
Experiment 2
Figure 7 shows the zonal mean surface air temperatures globally, for land only and ocean only, along with the Pliocene minus control difference for Experiment 2. Polar amplification is seen as in Experiment 1, though with reduced magnitude. Experiment 2 also exhibits noticeable warming at lower latitudes, typically around 2 °C over water and 4 °C on land, consistent throughout most of the non-polar latitudes. Taken together, the warmer equator and colder (less warm) poles lead to an increased latitudinal temperature gradient, which is broadly similar in shape to that predicted for the control experiment (see Fig. 7a), in contrast to the reduction in latitudinal gradient shown by the PRISM reconstruction, implicit in Fig. 4c. This global shift in temperature is also apparent in greater spatial detail in Fig. 8, which shows global temperature patterns for Experiment 2. Outside of the polar regions, there is very little variation in temperature shift with latitude, only the marked difference between land and sea noted above. The differences in Pliocene minus control changes are shown in Fig. 11 as Experiment 2 minus Experiment 1. Figure 11a-c again demonstrate the increased warming at lower latitudes in Experiment 2 and the reduction in polar warming, especially apparent in the Northern Hemisphere. The differences between the Experiment 1 and Experiment 2 patterns of temperature change are shown in Fig. 11d-f and confirm the previous observations: the equatorial ocean is broadly warmer in Experiment 2, the poles are cooler, especially in winter, and the land warms more than the oceans. The most striking difference is in the far North Atlantic, where there is apparent cooling in Experiment 2, more meaningfully described as a lack of the warming seen in the sea surface temperature data imposed in Experiment 1.
Figure 9 shows precipitation patterns for Experiment 2. The Pliocene minus control differences for Experiment 2 are very different from those seen for Experiment 1 in Fig. 6. In this case, there is an increase in equatorial precipitation and an intensification of the Indian monsoon. There is also a significant drying over equatorial South and Central America.
Figure 10 summarises the coupled model predicted sea surface temperatures and salinity for the Pliocene, the control and the difference between them. Also shown are the Atlantic (Fig. 10g, h) and Pacific (Fig. 10i, j) zonally averaged meridional overturning streamfunctions. The change in sea surface temperature between the Pliocene and control broadly parallels that seen in surface air temperature over the oceans: a rise in the order of 2 °C in a fairly uniform distribution. There is a distinct change in circulation south of Greenland, with adjacent warming and cooling zones. Salinity increases in the Atlantic and reduces in the rest of the world ocean, with the greatest decrease in the Arctic Ocean. The Pacific streamfunction shows a significant increase in the formation of Antarctic Bottom Water.
The model results are compared with data in Fig. 12. The Pliocene minus control sea surface temperatures from the coupled experiments are shown in the background contours. The data are shown as the difference between the PRISM3 data points and the annual mean of the Hadley Centre HadISST compilation of sea surface temperature observations (Rayner et al., 2003), averaged over the years 1870-1899 at the PRISM3 locations. Data sites along the Eastern Pacific coast show high levels of warming over and above the general Pliocene increased warmth, a phenomenon which is not reproduced by the model. Warming of the Kuroshio current is present in the model. The relatively dense sampling of the North Atlantic shows consistent warming north of 50° N. This extended "hotspot" is not reproduced by the model; however, there is a distinct circulation change south of Greenland evident in the sea surface temperature change. HadCM3 is known to be sensitive in the modern to the detailed submarine topography of the Greenland-Scotland Ridge in this region (Gordon et al., 2000), and recent work has also suggested that this region is sensitive to uncertainties in the Pliocene bathymetry (Robinson et al., 2011).
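The Fig. 12 comparison amounts to sampling the model anomaly field at the data site locations and differencing the observations against a fixed baseline. A rough numpy sketch follows; it assumes regular lat/lon axes and uses nearest-grid-point sampling (the paper does not state its interpolation choice), and all variable names are placeholders.

```python
import numpy as np

def sample_at_sites(field, lat, lon, site_lat, site_lon):
    """Nearest-grid-point values of a (lat, lon) field at data site locations."""
    ilat = np.abs(lat[:, None] - site_lat[None, :]).argmin(axis=0)
    ilon = np.abs(lon[:, None] - site_lon[None, :]).argmin(axis=0)
    return field[ilat, ilon]

# model_dT = sample_at_sites(sst_plio - sst_ctrl, lat, lon, site_lat, site_lon)
# data_dT  = prism3_sst - hadisst_1870_1899_at_sites   # observed minus baseline
# plotting model_dT against data_dT reproduces a Fig. 12-style comparison
```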
Impact of revised PRISM3 boundary conditions
The PRISM3 boundary conditions applied in the model are updated from the previous version, PRISM2 (Dowsett, 2007). Previous simulations have been carried out with the coupled model using these older boundary conditions (Lunt et al., 2010), which we continued for 200 yr. Figure 13 shows the change in Pliocene minus control differences for PRISM3 compared with PRISM2 boundary conditions; see Table 1 for details of the PRISM2 model simulation. There is very little difference between the simulations in terms of PRISM3-PRISM2 global means: the surface air temperature anomaly falls by 0.05 °C and precipitation is unchanged to 3 significant figures. At the regional level, however, there is a distinct increase in the seasonality of temperature over much of the land in the Northern Hemisphere. There are also temperature differences where ice sheet topography has been updated and in regions of significant change to the orographic boundary conditions, notably the Rockies, which show cooling with PRISM3 topography even in summer, when the rest of the Northern Hemisphere has a fairly uniform warming trend. Recent work has interrogated the impact of each set of boundary conditions (CO2, orography, ice and vegetation) in the PRISM2 simulations (Lunt et al., 2012), and a similar study is required for the PRISM3 simulations in order to fully understand the changes shown in Fig. 13, but this result serves to highlight the significance of the uncertainty in boundary conditions for model predictions.
Impact of alternative vegetation data in control experiment
For the control Experiment 1 simulation, an alternative vegetation dataset was used to derive the land surface properties required by the model. PlioMIP supplies a modern biomes dataset (BAS Observ BIOME.nc) whose data represent potential natural vegetation, created by running the BIOME4 vegetation model (Haxeltine and Prentice, 1996; Kaplan et al., 2003) forced with a pre-industrial climate. This differs from the WHS dataset, which represents modern conditions and thus includes an element of human land use changes. The same lookup process as for the Pliocene data was applied to derive surface characteristics from biomes. The differences due to the change in the control model are shown in Fig. 14. As with the change in Pliocene boundary conditions discussed above, this change in the control model also results in significant changes to the results. There is a direct impact over land, with the standard pre-industrial control data resulting in predicted temperatures around 2 °C colder over much of the Northern Hemisphere land in Eurasia and North America, corresponding to a reduction in the global annual mean temperature of 0.26 °C, or a reduction of around 10 % of the total warming in the standard experiment (0.26 in 2.5 °C). It is possible that the Pliocene minus control anomalies here are a closer representation of the true natural mid-Pliocene warming, as the vegetation changes include only naturally forced changes. Furthermore, the underlying biome vegetation types and their conversion to model boundary condition parameters are more consistent between the Pliocene and control simulations than in the standard pre-industrial dataset.
Conclusions
Results have been presented from the Pliocene Model Intercomparison Project (PlioMIP) simulations carried out using the UK Meteorological Office Hadley Centre Model (HadAM3 and HadCM3). General circulation models (GCMs) are one of the main tools used for studying the climate system of the present day and the past and to predict likely climate changes. The main findings are as follows:
- The predicted global mean temperature increase for the Pliocene is greater for the coupled model (3.3 °C) than the atmosphere-only version (2.5 °C).
- Polar amplification is evident in the atmosphere-only model, which is highly constrained by the PRISM3 SST dataset, reducing the mean latitudinal temperature gradient relative to the control. The coupled model has higher levels of warming in equatorial regions and lower levels in polar areas, relative to the atmosphere-only version, resulting in a higher mean latitudinal gradient, similar to that in the control simulation.
- There is a marked difference in predicted precipitation patterns between the atmosphere-only and coupled models. The atmosphere-only model predicts reduced equatorial and Asian monsoon intensity, whereas in the coupled model these systems increase in intensity.
- Features of the PRISM3 data which could be interpreted as suggesting changes in ocean circulation are partially supported by the coupled model results. There is good agreement in the region of the Kuroshio current, which is predicted to warm significantly, but no evidence of Eastern Pacific coastal warming in present-day cold upwelling zones. There is evidence of a distinct circulation change in the North Atlantic south of Greenland in the model; in the data there is also a significant feature in the North Atlantic, but considerably further north.
- Sensitivity to boundary conditions was demonstrated using alternative datasets for both Pliocene and control simulations. Whilst the change from PRISM2 to PRISM3 mid-Pliocene conditions did not significantly alter the global mean climate, the change from standard pre-industrial to potential natural modern vegetation reduced the predicted Pliocene minus control warming by 10 %.
Table 1. Summary of boundary conditions for PlioMIP simulations.
Table 2. Summary of overall global means and differences from control for Experiments 1 and 2.
2018-12-29T08:40:07.856Z
2012-09-13T00:00:00.000
{ "year": 2012, "sha1": "4a4abb4338bd0b7f0ef14224a67eee298ad043c2", "oa_license": "CCBY", "oa_url": "https://gmd.copernicus.org/articles/5/1109/2012/gmd-5-1109-2012.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "4a4abb4338bd0b7f0ef14224a67eee298ad043c2", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Environmental Science" ] }
213590000
pes2o/s2orc
v3-fos-license
Q-Method Evaluation of a European Health Data Analytic End User Framework
The MIDAS (Meaningful Integration of Data Analytics and Services) project is developing a big data platform to facilitate the utilisation of a wide range of health and social care data to support better policy making. Our aim is to explore the use of Q-methodology as part of the evaluation of the implementation of the MIDAS project. Q-methodology is used to identify perspectives and viewpoints on a particular topic. In our case, we defined a concourse of statements relevant to project implementation and goals by working from a logic model previously developed for the evaluation and from structured interviews with project participants. A 36-item concourse was delivered to participants using the HTMLQ system. Analysis was done in the qmethod package. Participants had a range of professional backgrounds and a range of roles in the project, including developers, end users, policy staff and health professionals. The Q sort was carried out 14 months into the project, a few months before the intended first release of the software being developed. Sixteen people took part: 6 developers, 5 managers, 2 health professionals and 3 others. Three factors (distinct perspectives) were identified in the data. These were tentatively labelled 'Technical optimism', 'End-user focus' and 'End-user optimism'. The factors loaded well onto individuals, and a number of consensus statements were identified.
Introduction
Health care, like many modern activities, generates large amounts of data, a proportion of which is stored in some accessible form as usable information, but rather less of which is used to guide practice, planning or policy (Murdoch & Detsky, 2013). Information systems are a key tool to support this and assist with effective decision-making. The need for effective use of data is particularly critical in public health organizations, where it is required to support areas such as epidemiologic surveillance, health outcome assessment, program evaluation and performance measurement, public health planning, and policy analysis (Studnicki et al., 2008). To take appropriate actions, health policymakers require many different kinds of information. The knowledge translation literature contains many studies on information synthesis methods for producing best available evidence. However, less attention is paid to methods of disseminating epidemiological information to policymakers (Zakkar & Sedig, 2017). To satisfy this need, more flexible health data representation, analysis, querying and visualization methods (analytic software tools) are desirable (Tilahun et al., 2014). The literature on information systems development recommends that end users should be involved in the process of IS development (ISD) (Engler, 1996). In practice, user involvement may be limited or completely absent (King, 1995). Developers are therefore forced to "design in the dark." Further, software engineering development models do not take into consideration all the dimensions of software development, in particular the organizational, economic, and human dimensions (Toffolon, 2000; Ilavarasan et al., 2003). End users differ greatly in experience and professional background, yet visualization tools and other software platforms are designed for a single idealised end user (Ziemkiewicz et al., 2012).
The effectiveness of knowledge integration in a software system determines the quality of the overall system. This knowledge gap is the commonest reason for the rejection of a software system by its intended users (Dakhli & Chouikha, 2009). It is therefore critically important to ensure that a thorough evaluation is conducted throughout the development process to minimise the potential for software rejection. A rigorous evaluation of information systems is of great importance for policy makers and end users of the technology (Kaplan et al., 2002). Health informatics evaluations provide an objective measurement of processes and outcomes against expectations, with the intention of identifying strengths and successes whilst finding means of addressing and improving weaknesses or even system failures (Rigby, 2006).
Context of the Study
The MIDAS project is developing a big data platform to facilitate the utilization of a wide range of health and social care data by policy makers. The platform will enable the integration of heterogeneous data sources and provide privacy-preserving analytics, forecasting tools and bespoke visualizations of actionable epidemiological data.
Study Design
Longitudinal semi-structured interviews are performed at critical time points throughout the duration of the project. This involves stakeholders (lead technical developers, platform end users and policy makers) in a novel parallel case study design. The data collection process was developed based on a logic model, semi-structured interviews and a Q sort, in order to evaluate acceptance of the MIDAS health analytic software and to identify system requirement gaps at each iteration of the tools' development.
Participants
Stakeholders, lead technical developers and end users of the platform tools with a background in epidemiology and health policy development (n = 19) were recruited through the MIDAS project policy board.
Q Methodology
The Q method has been described as the scientific study of subjectivity (Webler et al., 2009; Watts et al., 2012). Concourse theory proposes that people form their belief and value systems within a universe of ideas, feelings, thoughts, and related referential material (Brown, 1980; Stephenson, 1986a; Stephenson, 1986b; Wingreen et al., 2009). The concourse is the universe of ideas or statements on any given topic, and a person's belief or value system with respect to the concourse is manifested by how that person prioritizes the ideas and thoughts within the "universe" of the concourse. Q-methodology is the proposed means of operationalizing and analyzing a concourse, and the person's unique system of beliefs and values with reference to the concourse (Martin et al., 2015). The primary benefit of using Q-methodology is that it provides a rich and interpretive understanding of the phenomenon of interest, and it places minimal demands on the sample size (Brown, 1980). The application of Q-methodology commences with the development of Q statements which represent the concourse, in this case the technical development teams' expectations of the requirements of end users of the MIDAS platform and the essential factors for successful delivery of the MIDAS project. The dimensions representing the concourse were sourced from the project delivery protocols, a logic model developed in conjunction with the MIDAS consortium, and one-to-one interviews with end users of the MIDAS platform, lead technical developers and policy makers.
Concourse Development
The task of concourse construction is to identify components for relevant subjects at relevant moments in relevant contexts (Kampen et al., 2014). The concourse is the population from which a representative sample of statements is to be drawn. The concourse, according to Farrimond et al. (2010), "can never be fully known but the sample of items (usually written statements) should give a workable estimate of it." The basis for the development of the concourse was a set of semi-structured research questions based on stakeholders' expectations of the project and of the platform tools' development. The primary objective was to identify expectations of the platform tools and their utility for the purpose of assisting effective public health decision making and policy formation. Prior to undertaking the interviews, both stakeholder groups were provided with the general themes of the interview questions to help them consider their answers in advance of the interview process. Interview questions related to the big data collection process, barriers to adoption of the project, and discussion of early outputs, outcomes and impacts of the MIDAS project. Stakeholders (end users, policy makers, and lead developers) were interviewed individually via conference software. Questions in subsequent interviews with stakeholders will focus on the project's overall progress, platform tool implementation, technology adoption issues and resolutions, and the level of collaboration between end users and technical development teams at each iteration of the platform tools' development. Each phase of interviews will inform the next round as a means of identifying gaps between end users' expectations of the platform and achievement of the logic model outcomes and impacts. The duration of the semi-structured interviews ranged from 30 to 40 minutes; they were recorded with the consent of stakeholders and transcribed verbatim. On completion of each round of interviews, the stakeholders were provided with a copy of their transcript for review. Development of the concourse for this study was based on interview themes, the technology acceptance literature and the project deliverables for the MIDAS project, as a means of guiding the development of a pool of statements (n = 97).
Concourse Refinement
The process of refining the concourse statements involved face and content validation (Valenta et al., 1997). The face validation process involved refining statements for clarity, readability and repetition. Content validation was performed by the research team and collaborators to check items for ambiguity, applicability and completeness within the context of the study. On completion of this validation process, the number of statements was reduced to 36.
Q Sort Ranking
The objective was to evaluate how stakeholders view the MIDAS project's progress at the current iteration of the platform tools' development, achieved through ranking and rating statements. Prior to undertaking the online Q sort, stakeholders were provided with a copy of the concourse items and instructions in advance. Appointments were scheduled separately with each participant to conduct the Q sort, with a member of the study team providing assistance to clarify statements and the ranking procedure if required. Under the instruction of the researcher, participants were requested to read through each statement and rank them into three columns: "agree", "neutral" and "disagree".
On completion of ranking each set, participants were instructed to further rank their statements into the ±3, ±2 and ±1 columns until all of the statements were populated on the grid (refer to Figure 1). The final stage of the Q sort required stakeholders to provide a brief explanation for their assignment of the "+3" (agree) and "-3" (disagree) statements and to answer questions relating to their professional involvement in the MIDAS project.
Q-sort analysis
Factor extractions were obtained through principal components analysis, and the factor structure was simplified using varimax rotation. Composite factor scores for each statement in the Q set were determined from the defining Q sorts for each factor. Prior to factor interpretation (Table 1), normalized and weighted average statement scores (z scores), or factor scores, were calculated. Statements with a significant factor score (p < 0.05) were considered, and on assessment of the preliminary factor loadings, primary factors were extracted with eigenvalues > 1.00 (Kelly et al., 2016).
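The study's analysis was run in the qmethod package; purely as an illustration of the extraction-and-rotation steps just described, here is a minimal numpy sketch on synthetic sorts. The data are random placeholders, and everything beyond the steps named above (PCA on person correlations, the eigenvalue > 1 rule, varimax, Brown (1980) loading weights) is an assumption.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a (participants x factors) loading matrix."""
    p, k = loadings.shape
    rotation, total = np.eye(k), 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
        rotation = u @ vt
        if s.sum() < total * (1.0 + tol):
            break
        total = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(0)
sorts = rng.integers(-3, 4, size=(16, 36)).astype(float)  # 16 participants x 36 statements

corr = np.corrcoef(sorts)                    # Q method correlates persons, not items
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
keep = order[eigvals[order] > 1.0]           # retain factors with eigenvalue > 1.00
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)

weights = rotated / np.clip(1.0 - rotated ** 2, 1e-9, None)   # Brown (1980) weights
raw = weights.T @ sorts                                        # factors x statements
zscores = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)
```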
Results
Each factor represents a viewpoint, held by one or more of the 16 participants and expressed in their views towards the 36 statements (S01) to (S36) in the concourse. Three separate factors were identified, described as "Technical Optimism", "End User Focus" and "End User Optimism", together explaining 47% of the total variance. The factor score arrays from participants contributed to the process used to develop an understanding of each viewpoint. Common agreement statements endorsed positively included agreement that effective direction is essential for the successful implementation of the MIDAS platform tools (S03), that the platform should generate awareness of the benefits of big data (S28) in the context of public health, and that these tools should assist public health professionals (S30). The statements that were not endorsed by participants were a matter of timing, as the platform was at an early stage of development when these questions were posed. These negatively endorsed statements related to the achievement of multisite collaboration (S33), strategies to integrate data sources for each demonstration project, and the utility of the platform tools to enhance public health decision making (S20) within six months (S14). However, that at this time the majority of participants were of the opinion that there were no strategies in place to integrate gaps between data sources (S15) is concerning.
Factor 1 - Technical Optimism
Factor 1 explained 17% of the total variance; 5 out of 16 participants significantly loaded on this factor. There was strong agreement across participants that the MIDAS platform will enable end users to combine datasets to develop expert knowledge systems and data models (S22), that it is essential for the platform to generate awareness of the benefits of big data (S28), and that technical meetings were beneficial for the early identification of, and provision of solutions to, issues encountered during the early stages of the platform tools' development (S04). A set of perspectives that were viewed positively but not necessarily shared with others related to the benefit of training workshops for end users post-implementation (S06) and confidence that the platform tools will be sufficiently flexible to allow non-MIDAS researchers to develop their own data mapping and forecasting models (S25). Since the commencement of the project, there is a clearer view of the scope of technical issues between technical developers and end user groups (S12).
Participants strongly disagreed with statements relating to the development of indicators (S19) and the enhancement of public health decision making (S20) within the next six months. Participants disagreed that open source cloud tools were an essential component of the platform if (time, manpower) resources need to be reallocated (S18), and that the MIDAS project should use the EU Data Portal to standardise meta-data collation techniques (S34). Statements viewed negatively by participants related to the quality of technical documents (S13), gaps between data sources to achieve the required outcomes and impacts (S15), data integration and data sharing achieved this year (S16), and that the tools will be sufficiently developed that end users can provide the technical development teams with timely feedback (S17).
Factor 2 - End User Focus
Factor 2 explained 16% of the total variance; 4 out of 16 participants significantly loaded on this factor. This perspective strongly endorses the view that the process of completing legal agreements between stakeholder groups had a negative impact on the pilot demonstrations' progress (S02). Participants strongly endorsed that training workshops should be underway (S06) and the process of resolving governance and consent issues (S01). Participants strongly disagreed that there are strategies in place to integrate gaps between data sources (S15), as identified in Factor 1, and that the platform will develop indicators (S19) and 'red flags' (S21) to identify at-risk population groups, provide information (S32), and be sufficiently flexible (S14) for policy makers so as to enhance public health decision making within six months (S20). Interestingly, participants disagreed with the utility of the platform tools to develop expert knowledge systems and data models (S22), and that work packages are on target within the agreed deliverables schedule (S08). They agree that there is a need for greater understanding between developers and end users of the scope of technical problems encountered so far (S18), that the consortium should encourage newcomers to use the platform (S29), and that a secure cross-EU data source integration framework with open APIs is an essential component to allow newcomers to use the tools (S26). Participants agreed less strongly that there is a discrepancy between developers' and end users' expectations of the final platform tools (S11).
Factor 3 - End User Optimism
Factor 3 explained 14% of the total variance; 3 out of 16 participants significantly loaded on this factor. Participants endorsed effective direction from the policy board as essential to the successful implementation of the platform (S03) more strongly than the other statements. Participants moderately strongly endorsed use of the EU Data Portal to harvest the metadata of public sector information (S34) and system dynamics simulations to facilitate improved decision making (S35). They weakly endorsed the development of indicators to support effective public health and health policy decisions within six months (S19) and the utility of the platform tools to provide 'red flags' identifying 'at risk' population groups to support decision simulations (S21). They rejected the statement that the process of completing legal agreements slowed progress in developing demonstration test platforms (S02) more strongly than Factor 1, along with the statement on adhering to data governance, data standards and GDPR (S01). This suggests participants are satisfied with the regulatory environment in which they work.
They disagree more strongly than the other perspectives that the platform tools are sufficiently flexible so that both senior policy makers and data analysts can use them effectively (S14). At this point in the MIDAS platform's development, participants were stronger than perspective 2 in their rejection of the quality of technical documents (S13) and of the statement that the expectations of each work package are clearly defined and feasible within the agreed deliverables schedule (S08). They also rejected the statement that the demonstration tools for each work package will be sufficiently developed to allow end users to provide work package 6 with timely feedback (S17). Participants rejected the feasibility of the MIDAS platform to generate social media campaigns to get feedback from the public relating to public health policy (S27). They agree with Factor 2 that there is a need for a greater understanding between developers and potential end users of the scope of technical problems encountered so far (S12), that the platform should be sufficiently flexible to allow non-MIDAS researchers to develop their own data modelling, forecasting and mapping algorithms (S25), and that the MIDAS tools will support policy makers, and not just analysts, in adopting a data-driven problem-solving mindset. Two of the statements read, in full: (S01) Adhering to data governance, data standards, GDPR, and concerns relating to consent, will slow progress implementing the MIDAS platform. (S21) The MIDAS platform will provide 'red flags' identifying 'at risk' population groups to support decision simulations.
Discussion
Lead technical developers and end users of a data analytic framework from a range of professional backgrounds participated in two rounds of semi-structured interviews. The objective was to explore project progress and the utility of the MIDAS platform tools to meet end user (epidemiologists, policy makers) requirements of the system. The Q sort was performed with these stakeholders a few months before the intended first release of the software being developed. The concourse of 36 statements was constructed based on themes identified through coding the semi-structured interview transcripts, working from a logic model to identify the outcomes and impacts needed to achieve successful completion of the project. Three factors were identified, labelled 'Technical optimism', 'End-user focus' and 'End-user optimism'. Common agreement statements endorsed positively by participants indicated that effective project management and generating awareness of the benefits of big data analytics were essential to ensure buy-in from the end users identified to work with the platform tools. The principal findings from the first factor (perspective), "Technical optimism", indicated that overall participants acknowledged the project is moving in the right direction, facilitated through technical meetings, and acknowledged that the platform tools were in an early stage of development. At the time when the Q sort was conducted, significant resources were focused on cleaning and structuring datasets for each of the four pilots. As a result, progress in developing technical indicators, data integration across the project (which is an ongoing process) and the project's impact at that point in enhancing public health development were not endorsed by participants.
The primary findings for Factor 2, end-user focus (perspective), related to the negative impact of GDPR, governance and consent issues on progress in rolling out the demonstration for each pilot and platform training with end users. As expected, and as highlighted in Factor 1, participants did not expect the platform tools to develop indicators and enhance public health decision making within six months of administering the Q sort. Some concerns were evident from participants' pessimistic opinion of the utility of the platform tools to develop expert knowledge systems and data models, and of whether the project is on track within the deliverables schedule. In the final factor, 'end-user optimism' (perspective), participants expressed positive expectations that the expected data analytic modelling and forecasting utilities of the platform could be used to generate red flags to identify at-risk population cohorts from the pilot datasets. Those loading on this factor also strongly rejected the statement that GDPR compliance, and ensuring the pilot datasets meet other data governance criteria, impacted on progress to complete the demonstration projects. However, technical issues were highlighted, with a need for end users and technical development teams to discuss and resolve them. Over the past few months, technical meetings between lead developers and end user groups have become more frequent in order to undertake training and user experience testing. The strengths of this study are the use of Q methodology in conjunction with semi-structured interviews as a means of studying individual perspectives in a systematic and rigorous manner, enabling statements to be quantified statistically using validated research techniques (Kelly et al., 2016). Potential limitations include the fact that the MIDAS platform tools were in the early stages of development at the time the interviews were conducted. Administration time of the Q sort varied, as English was not the first language of some participants, and participant fatigue cannot be ruled out. However, all participants had a good working knowledge of English.
Conclusion
Q methodology was utilised to identify the perspectives of lead technical developers and end users during the development of a data analytic framework, through semi-structured interviews. Prioritised requirements of the system were clustered into three factors. The first, "technical optimism", indicated that participants acknowledge the project is moving in the right direction in terms of meeting end users' requirements. However, the second factor, "end user focus", indicated that in the early stages of the project, GDPR, governance and consent issues had a negative impact on progress in rolling out the demonstration for each pilot and platform training with end users. In the final factor, "end-user optimism", participants expressed positive expectations that the expected data analytic modelling and forecasting utilities of the platform could be used to generate red flags to identify at-risk population cohorts from the pilot datasets. Some previous studies utilising the Q method in relation to health systems software platforms focused on e-health (Banna et al., 2010), health professionals' adoption and use of technologies in clinical practice (Ladan et al., 2018) and the definition and utility of clinical health research (Kim & Bates, 2011).
The present study contributes to the available literature through the evaluation of stakeholders' (technical development teams, end users) perspectives at critical time points during the MIDAS project. As the MIDAS platform tools become more advanced, further insights will be captured from stakeholders using longitudinal interviews, and the logic model's expected outputs, outcomes and impacts will be used to create additional concourse statements. As part of this realist evaluation framework for the platform tools' development, Q-methodology facilitates an understanding of the viewpoints of stakeholders, focusing on end users' subjective standpoints on issues affecting them. The primary outcome is to bridge the gap between end users' expectations and the technical development teams' acknowledgement of these requirements at each iteration of the platform tools' development. We are confident that the stakeholder interviews on which the concourse statements are based are valid, even though the MIDAS platform tools were in the early stages of development when the interviews were conducted to construct the concourse statements, given that stakeholders were requested to verify their interview transcripts and elaborate on them if required. A final Q sort will be performed with stakeholders closer to the end of the project.
2019-12-19T09:19:01.147Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "6deaf9c4e0022c1b952efd2c5de39cfd4ad30fc2", "oa_license": "CCBYNC", "oa_url": "https://hrcak.srce.hr/file/365040", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "36710780855f414b537c7e295be941d079899043", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Psychology" ] }
234303886
pes2o/s2orc
v3-fos-license
Online training of specialists in the energy industry
We considered the psychological and pedagogical problems arising from online learning, especially in a pandemic. Since in these conditions the lecturer's emotional impact on the audience is depleted, the task is to use the achievements of information technologies to find ways to "bring" the audience closer to the lecturer. A computer program has been developed that uses the image recognition function so that in the online learning process the lecturer receives information about the emotional state of the audience directly during classes. The developed program can be used for the practical implementation of the proposed approach.
Introduction
In recent years, the principles of education in the world community have changed significantly. This was especially noticeable during the COVID-19 pandemic [1-5]. Distance learning has long been practiced in the education system; however, it is far from the same as online learning. During online learning, a student can listen to lectures in a live broadcast, use interactive tests, exchange files with a teacher, communicate with classmates and teachers in chats, and so on. Such learning allows a full immersion in the educational environment. At the same time, the emotional component is lost to a large extent during online learning. There are, however, specific fields of industry in which specialists should have certain psychological features, for example operators of nuclear reactors. These features include psychological stability, the rate of reactions and intuition. Thus, methodologists and educators should look for different ways to bring lecturer and student closer together so that they can better feel each other's emotional reactions. This is a difficult task, and so far no results are visible in this direction. Recently, pattern recognition methods have become widespread [6]. These methods have been applied especially successfully in medicine. Scientists at the National Human Genome Research Institute (USA) have used facial recognition software to diagnose rare genetic diseases in Africans, Asians and Hispanics. Using face analysis technology, the scientists made the correct diagnosis in 96.6% of cases for difficult-to-identify diseases across different ethnic groups [7]. Since, as mentioned above, modern methods of pattern recognition can analyze the specific features of a person and thereby assess a person's internal state, it becomes possible to use this approach to improve the nature of interaction between the teacher and students in online learning. We solved this problem by using the results of works [8, 9], in which methods for regulating the emotional state of students in the learning process have been developed. These works examine the specific emotions of students, their classification, and methods of correcting them. The solution to the problem is to use face analysis technologies to determine the emotional state of the audience during the lecture, so that the lecturer constantly receives this information. This approach may well be practically implemented with the further improvement of online learning. One such additional technology is described below.
Online learning today
Nowadays most universities provide online courses for their students within and off campuses. Online learning has become a logical continuation of distance learning.
The word "online" only indicates the way of acquiring knowledge and of communication between the teacher and the student. Online learning has the capacity to break down barriers that have restricted individuals from an equitable education in the past [10]. However, there is a need for new pedagogical approaches and methodological development. In connection with the spread of the COVID-19 pandemic and a self-isolation regime, online learning has become more relevant than ever. All educational institutions have urgently switched to online education. However, online classes are often problematic because there are many issues that need face-to-face interaction between teacher and students [11]. We consider the most widespread cloud-based online tools, for example the video conferencing platforms Zoom, Microsoft Teams, WebEx, Blackboard and Google Hangouts. There are pros and cons to each application, depending on one's needs. Researchers have tried to define the factors influencing different aspects of the educational environment in e-learning. The authors of [12, 13] conducted semi-structured interviews with e-teachers and e-students to determine the main factors influencing the educational level in an e-learning environment. Many online tools have become household names because this technology has been widely adopted since the outbreak of COVID-19. Using the example of some widely used platforms for conducting online classes, we can see their advantages and disadvantages (see Table 1). The challenge is to be able to obtain the desired quantitative results using the selected technology. Further investigations will be conducted to choose the corresponding technology, which can be used to complement existing qualitative methods. Taking into account the latest achievements in Zoom and similar technologies, we can hope for fast progress in the mentioned directions in the future [14]. Of course, tutors who use Teams for teaching should provide intensive training for colleagues [15]. Tutors need to avoid duplication and set clear parameters for methods of communication. They should make it clear which channel will use a chat as the preferred tool for communicating.
Using pattern recognition systems
Face recognition has attracted more and more attention from computer vision researchers in recent years. Face recognition systems are widely applied to individual frames. This approach gives rise to principal difficulties, the main of which lie in the following questions: "which frames to use for recognition" and "how best to combine information received from different frames". In general, the scheme of algorithms for recognizing people by face is as follows: 1) detection of the specific face area in the input image; 2) preprocessing of the face image and its geometric identification; 3) construction of a compact description vector [16]. All facial recognition technologies are based on selecting facial parts, with the help of a computer program, from an obtained video image. The computer program then compares the obtained data with previously investigated face pictures from an existing database. These systems analyze facial features, revealing their positioning and the distances between sets of geometric coordinates. Every person's 'faceprint' is unique, and it is a complex task to identify the geometric properties of a captured face image. However, recently developed algorithms and programs largely solve the problem. Nowadays the corresponding 'facial detection' technologies already work.
The programs allow capturing facial expressions in order to infer people's moods, emotions and affective states [17]. It is necessary to create technologies which can be applied to the specific conditions of the education process. There are also statistical methods through which facial recognition technologies quantify and frame a student's face. These works are based on 'emotion learning analytics', which has tried to use facial detection in learning in higher education. Scientists already use facial detection of students' 'academic emotions' (contentment, anxiety, frustration, and satisfaction with the learning content) [18] to improve learning results.
A new approach to improve the system of online learning in pandemic conditions
This paper implements the idea of using image recognition technologies to improve online learning by providing the teacher, in the course of classes, with continuous information about the emotional state of students. To solve this problem, a computer program has been developed, in a simplified version in comparison with existing ones, which allows the method of image recognition to be applied in the online education system. For the application of the developed program, a technical solution is proposed, which is illustrated in Fig. 2.
Computer program and technical realization (2 Axis Automatic Recognition and Aiming System - 2A2RAS)
2A2RAS is a system of algorithms describing the interaction of a computer program with a person. It is an automatic recognition system. Let us consider how the face tracking program works. The face is critical to a person's identity; it is the feature that best distinguishes a person. Face recognition is an interesting and complex problem, with important applications in many areas, including education. The program is designed to track faces in a picture. The algorithm is shown in Fig. 1. The work of the program includes the following stages.
1. A video image is received from a camcorder connected to a computer. A program written in C# is running on the computer.
2. The video image is split into frames, and each frame is processed using the open computer vision library Emgu CV. It is a cross-platform library that can be used to explore features from image capture to character recognition [19-21]. In this article, we focused on the OpenCV wrapper Emgu CV, whose methods can be embedded in a C# program.
3. Using a file containing a set of special functions corresponding to certain parts of the picture (in the case of a person's face, this can be the nose, mouth, lips, eyebrows, etc.), the program converts the picture to black and white, reducing its volume almost 3 times, and breaks the picture into small sections.
4. "Haar cascades" technology [22] is applied to each area (for example, to find the eyes). If the result coincides by the required percentage with the compared original image, then this part of the picture is determined to be the one that the program was looking for. The so-called Haar features for finding and highlighting objects make it possible to find out the position of the face relative to the camera. The whole system includes several controlled servo motors, to which a command is sent to turn the video camera so that the recognized face is in the center of the picture.
5-6. The program displays the confidence score for each emotional component, determines the path to the image in the database and returns the result of emotional perception in the form of a table.
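The original implementation is in C# with Emgu CV; purely as a hedged illustration of stages 1-4 and the coordinate hand-off described in the next paragraph, here is a sketch using the plain OpenCV Python bindings. The cascade file is OpenCV's bundled frontal-face model, while the serial port name, baud rate and message framing are assumptions.

```python
import cv2
import serial

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # assumed port settings
cap = cv2.VideoCapture(0)                               # stage 1: camera input

while True:
    ok, frame = cap.read()                              # stage 2: grab a frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # stage 3: black and white
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                          # stage 4: Haar detection
        cx, cy = x + w // 2, y + h // 2                 # face centre in the frame
        port.write(f"{{X: {cx} Y: {cy}}}\n".encode())   # hand-off to the servo side
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == 27:                            # Esc to quit
        break
cap.release()
```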
Thus, by sequentially searching for different parts of the face, it is possible to determine the characteristic features of the face and track the changes corresponding to different emotions. The mentioned functions are used in the form of matrices to check the picture pixel by pixel. The figure shows the main parts of the proposed 2A2RAS system: a computer with an installed webcam, a microcontroller, an external power supply and servo motors. The program sends the coordinates determined using the "Haar cascades" technology via the local USB port to the microcontroller in the format {X: x coordinate Y: y coordinate}. A program written in C++ and running on the microcontroller then parses the received data, extracts the desired coordinates, and sends them to the 2 servo motors connected to the microcontroller.
Conclusion
The use of online learning in the modern educational process, which has become predominant in the context of the pandemic, has led to the emergence of new psychological and pedagogical problems. An important problem is finding ways of improving the emotional and psychological contact between the lecturer and students. We have developed a computer program and a scheme for the technical implementation of an approach, based on the theory of pattern recognition, that provides the lecturer with continuous information about the emotional state of the audience during the lecture. The developed program and the technical implementation of the approach for assessing the emotional state of students in the process of online learning are a simplified version of similar programs used in medical diagnostics. The proposed approach is technically easy to implement, which will to a certain extent improve the interaction of the teacher and students in conditions of online learning.
2021-05-11T00:03:03.330Z
2021-01-23T00:00:00.000
{ "year": 2021, "sha1": "4283a7570a678f8fefeb84c92e21aa1b450554ed", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/628/1/012037", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b6ed61fd531d03be0760622f9034c8ca40832dc2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
214281458
pes2o/s2orc
v3-fos-license
On variations in turbine runner dynamic behaviours observed within a given facility
When confronted with cracks or high stresses in turbine runners, we often wonder if the behaviour observed on one specific runner will be present on all other similar runners. In this case, we have a facility with 19 runners having the same blade geometry. In order to answer the question, we selected three runners for measurement campaigns. First, the runners were divided into groups using band length, materials and wicket gate geometries. We then examined two runners with different wicket gate geometries and were thus able to explain why one runner exhibited recurrent fatigue damage problems and not the other. However, even within a given group, significant reliability differences were found when comparing with a third runner. The observed data shows that an important turbine characteristic was overlooked. Our conclusions point toward eccentricities and imperfections in the discharge ring, attributable to only the upper part of the labyrinth seal being refurbished in this facility. This may generate a significant imbalance in the force produced by the flow in the runner side chamber. The paper underscores the impact of such imbalance, which could be present in older refurbished facilities.
Introduction
It is generally believed that turbine runners having similar geometries will exhibit similar behaviour in terms of fatigue life and overall reliability. Yet, historical maintenance data collected from a large run-of-river generating station at Hydro-Québec shows that not all turbines exhibit the same behaviour. This phenomenon has puzzled the maintenance staff for some time. By studying all turbine characteristics rather than just runner geometries, seven different turbine groups were identified within the 19 Francis turbines in this facility. Note that all the runners have the same blade profiles but, because of some of this facility's specificities, they require different runner band lengths, resulting in a different blade overhang. We also considered that some runners are manufactured in different materials and have different wicket gate geometries. To understand the dynamic behaviour of these different turbines, strain measurements were carried out on 3 of the 19 Francis runners in the facility. The first runner, Runner A, was chosen from within a very reliable group that exhibits no fatigue cracking. The second, Runner B, was chosen from within a group prone to fatigue cracking. The third, Runner C, was chosen from within the same group as Runner B, but it has historically exhibited significantly more fatigue damage.
Comparison between Runner A and Runner B
As previously mentioned, Runners A and B belong to two different groups of turbines having the same blade profiles. They differ in terms of materials, band lengths and wicket gate geometries. The differences between these two runners in terms of dynamic behaviour were first exposed in a previous study by Gagnon et al., 2017 [1]. The conclusions of this study showed that the differences in dynamic behaviour were mainly attributable to wicket gate geometries that generated much more rotor-stator interaction (RSI) during operation for Runner B, as shown in Figures 1 to 3. Often, small differences in wicket gate geometries are neglected by maintenance staff. However, the impact of such differences was obvious from a hydraulic standpoint and was later confirmed using numerical simulations.
Notice that for all the runners, the strains measurements were taken at the same critical location, known from prior finite element analysis (FEA). Nonetheless, an important characteristic of the dynamic behaviour of Runners A and B could not be explained by the difference in wicket gate geometries. If we look at Figure 3, we can see an imbalance in the strain distribution over one rotation. While this imbalance is slightly different for each runner, it was not deemed to be important during our previous study [1] as the amplitude was similar on both. As we will see below, this is not the case for all runners. with an arbitrary static strain of 100 microstrains used for comparison purpose. Comparison between Runner B and Runner C One of the characteristics of the imbalance observed on Runner B is that if the RSI is neglected, the amplitude does not change significantly with the opening of the wicket gates. Given that this imbalance was not significantly different in Runners A and B, similar results were expected for Runner C. However, the measurements obtained for Runner C proved to be much larger than expected, as shown in Figure 4. Looking at the synchronous average in Figure 5, the highest deformation observed is always at the same angular position during the rotation, and this position is identical for measurements made on two different blades. The difference between Runners B and C is attributable to a localized imbalance that does not change significantly with the opening of wicket gates and cannot be explained by any of the characteristics that were used to differentiate the runners in this facility. Furthermore, the difference is quite significant, such that it fully explains why Runner C always exhibited more fatigue damage than Runner B. Discharge ring and runner dynamic behaviour The source of this imbalance was not obvious, and a review of the work by Doerfler et al., 2013 [2] helped us to identify two potential imbalances, both linked to some form of casing asymmetries. The first is attributable to asymmetric flow in the spiral case. Here, we have an old facility for which such asymmetry might be expected. However, by examining the average pressure distribution at maximum opening for Runners A and B in Figure 6, we found no correlation with the imbalance observed in the strain measurements presented in Figure 3. Furthermore, if the imbalance stems from the flow, the amplitude should change with the opening of the wicket gates, which was not observed in this case. The second type of imbalance is related to the flow in the side chamber adjacent to the runner. This chamber is just below the labyrinth seal at the band and is formed by the gap between the runner band and the discharge ring, as shown in Figure 7. Here, the original design of the facility is presented, but the refurbished runners do not have a fretted ring at the band, leaving a significantly larger gap. During refurbishment, only the labyrinth seal was machined and rebuilt as it was believed that a larger gap would provide more tolerance for imperfections and eccentricities. The main characteristics of an imbalance related to flow in the side chamber are localized angular position, unaffected by the flowrate, and amplitude, which scales proportionally to the square of the rotating speed. We easily confirmed the first characteristic, but the second could not be verified because of the nature of the measurement campaign that did not include constant over speed measurements. 
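To convey the mechanism (this is not the study's actual simulation), here is a crude toy calculation under strong assumptions: the local through-flow speed in the side chamber is taken to scale inversely with the local gap (continuity), Bernoulli then gives a pressure deficit where the gap is narrow, and integrating the pressure around the band yields a net force pointing toward the minimum gap. All numerical values are illustrative placeholders.

```python
import numpy as np

rho = 1000.0                      # water density, kg/m^3
r = 2.0                           # runner band radius, m (assumed)
h0, ecc, oval = 5e-3, 0.2, 0.05   # mean gap (m), eccentricity and ovality (assumed)
v0 = 10.0                         # reference gap velocity, m/s (assumed)

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dth = theta[1] - theta[0]
h = h0 * (1.0 + ecc * np.cos(theta) + oval * np.cos(2.0 * theta))  # local gap
v = v0 * h0 / h                   # continuity: narrower gap, faster flow
p = -0.5 * rho * v ** 2           # Bernoulli pressure deficit (relative)

# net force per unit axial length: integrate -p * n_hat around the band
fx = -(p * np.cos(theta)).sum() * dth * r
fy = -(p * np.sin(theta)).sum() * dth * r
print(np.hypot(fx, fy))                                  # imbalance magnitude, N/m
print(np.degrees(np.arctan2(fy, fx)),                    # force direction ...
      np.degrees(theta[np.argmin(h)]))                   # ... at the minimum gap
```

Consistent with the measurements described above, the computed force direction is fixed by the cavity geometry alone, independent of the flow rate, while its magnitude scales with v0 squared.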
Figure 8 shows a simple numerical simulation to illustrate the phenomenon. Here, the rotating cylinder represents the runner operating in an ovalized and slightly eccentric cavity. The result is a significant imbalance in the force acting on the cylinder, with a maximum located where the gap is minimal due to flow acceleration. Conclusions The strain and pressure measurements on three runners were compared, and numerical simulations were used to explain the different dynamic behaviour linked to recurrent fatigue damage on certain runners in the facility under study. Understanding the nature of this variability proved to be important for future refurbishment projects. Usually, only the labyrinth seal at the top of the discharge ring is refurbished in order to align the runner with the generator. Because the gap between the band and the discharge ring is generally large, this area is usually considered tolerant to eccentricities and imperfections. Strain measurements at the same location on three runners demonstrated that this assumption can be false, and that such eccentricities and imperfections can be the cause of significant strain fluctuations leading to recurrent and extensive fatigue damage. Furthermore, such differences in dynamic behaviour might also be present and overlooked in many other facilities, hence the importance of better understanding the many factors that influence the behaviour of a specific runner.
2020-01-02T21:11:29.140Z
2019-12-19T00:00:00.000
{ "year": 2019, "sha1": "f8a3a3c8422f728f43f4306e014292c21f9dedd1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/405/1/012005", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2204ad5db94a0afcdab51d98813589f937f57f4c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Engineering", "Physics" ] }
55905107
pes2o/s2orc
v3-fos-license
Accuracy of Approximation for Discrete Distributions The paper is a contribution to the problem of estimating the deviation of two discrete probability distributions in terms of the supremum distance between their generating functions over the interval [0, 1]. Deviation can be measured by the difference of the kth terms or by the total variation distance. Our new bounds have a better order of magnitude than those proved previously, and they are even sharp in certain cases. Introduction Dealing with random combinatorial structures often requires estimating the deviation of two discrete probability distributions in terms of the maximal distance between their generating functions over [0, 1]. This is often the case when, given a collection of not necessarily independent random events, one needs to estimate the number of those that occur. Among many popular methods of Poisson approximation, sieve methods with generalized Bonferroni bounds, such as the graph sieve [1], are at hand. They provide estimates not only for the probability that none of the events occur, but also for the difference between the generating function of the number of occurring events and that of the corresponding Poisson distribution over the interval [0, 1] (see [2] for more details). This raises the following problem. The difficulty lies in the constraint that the difference of the generating functions is only available over the real interval [0, 1], and not over the whole complex unit disc, which would make it possible to apply standard methods of characteristic functions. Several positive and negative results were achieved in the last three decades, beginning with [3]; see Section 2. Lower and upper estimates got closer and closer, but the final answer is still ahead. The aim of the present note is to provide new bounds that have a better order of magnitude than those proved previously. They are even sharp in certain cases. The paper is organized as follows. In Section 2 we introduce the necessary notions and notations and cite some earlier results. Section 3 is devoted to the case where |c_k| = |p_k − q_k| is estimated in terms of Δ, while in Section 4 the total variation distance is treated. For x ∈ (0, 1] write f(x) = Σ_{k=0}^∞ c_k x^k, where c_k = p_k − q_k, k = 0, 1, . . ., for discrete probability distributions p and q; let F denote the class of all such functions and define Δ(f) = sup_{0≤x≤1} |f(x)|. In [2] it was shown that, for every k = 0, 1, . . ., a two-sided estimate of |c_k| in terms of Δ (inequality (5)) holds with a suitable positive constant, if Δ is sufficiently small. In fact, the upper estimate is valid for every Δ ∈ (0, 1], but it becomes trivial for Δ ≥ exp(−32k⁴) by elementary calculus. Though in both the upper and lower bounds the multiplier of Δ is slowly varying as Δ → 0, they are not of the same order of magnitude. It is easy to see that Σ_{k=0}^∞ |c_k| cannot be estimated in a nontrivial way, because for arbitrary Δ ∈ (0, 1] there exists a function f ∈ F such that Δ(f) = Δ and Σ_{k=0}^∞ |c_k| = 2, the maximal possible value. Indeed, let n > 1/Δ and choose f ∈ F accordingly; then 0 ≤ f(x) ≤ Δ for x ∈ [0, 1] by the choice of n, provided Δ < 1. For Δ = 1 the estimation obviously holds. Hence Δ(f) = f(0) = Δ. However, if f = p̂ − q̂, the difference of the generating functions, and one of the distributions, say p, is fixed (as in the case of Poisson approximation), then the class of feasible functions f is smaller; thus the bounds for |c_k| may decrease, and even the total variation distance can be estimated nontrivially. The following results can be found in [2]. Let p be a fixed discrete probability distribution such that p_k > 0 for every k = 0, 1, .
.., and lim sup_{k→∞} p_k^(1/k) > 0; this is condition (8), meaning that p cannot decrease faster than exponential. Let k be a positive integer and c a sufficiently small positive constant. Then for every sufficiently small positive Δ there exists a discrete probability distribution q such that Δ(p̂ − q̂) = Δ and the lower estimate (9) holds. If the tail of p is lighter than exponential, the lower estimate decreases. Instead of (8), suppose that lim sup_{k→∞} p_k exp(h(k)) is positive and finite, where h is a positive, continuous, and increasing function, regularly varying at infinity, with lim_{x→∞} h(x)/x = ∞; this is condition (10). Let k be a positive integer and c a sufficiently small positive constant. Then for every sufficiently small positive Δ there exists a discrete probability distribution q such that Δ(p̂ − q̂) = Δ and the corresponding lower estimate (11) holds. Particularly, when p is the Poisson distribution with parameter λ, it follows that for every sufficiently small positive Δ there exists a discrete probability distribution q such that Δ(p̂ − q̂) = Δ and the lower estimate (12) holds. (The constant c does not depend on k and q. The parameter λ only appears in the bounds implicit in the phrase "sufficiently small.") Let us turn to the case of total variation. In [2], for every fixed p, an increasing function ψ: [0, 1] → R was constructed in such a way that lim_{Δ→0} ψ(Δ) = 0 and ‖p − q‖ ≤ ψ(Δ) for arbitrary q. However, apart from the case where the tail of p was extremely light, the function ψ proved to be slowly varying at 0, which is just a little bit better than nothing. For example, in the case of Poissonian p, inequality (13) was obtained as q varies in such a way that Δ → 0. Since ‖p − q‖ ≥ |p_k − q_k|, every lower estimate obtained for fixed k will do for the total variation. However, if p_k does not decrease faster than exponential, that is, condition (8) is fulfilled, there is a lower estimate of the form (14), a positive power of Δ, with an exponent γ ∈ (0, 1) depending on p. When the tail of p is lighter than exponential, namely, condition (10) holds, then for every sufficiently small positive Δ the lower estimate (15) is valid, with a constant depending on p. Particularly, in the case where p is Poisson, the lower bound (16) was proved. Estimation for the Difference of kth Terms The following important result can be traced back to Markoff, 1892 [4], who dealt with the extremal properties of Chebyshev polynomials over the interval [−1, 1]; see Chapter 2 of [5]. The proof can be found in [6] or in [7]. Theorem 1. Let P be a polynomial of degree less than or equal to n, and 0 ≤ k ≤ n. Then the kth coefficient of P is bounded in absolute value by the absolute value of the corresponding coefficient of the Chebyshev polynomial T_n, multiplied by the maximum of |P| over the interval. Using this result an upper bound (Theorem 2) can be proved without any restriction on the coefficients, which is of the same order as the lower bound in the left-hand side of (5). Proof. Suppose first that Δ ≤ exp(−2(k + 1)); then the right-hand side is less than Δ, and by Theorem 1 we have the bound as claimed. If exp(−2(k + 1)) < Δ ≤ exp(−1/2), then the upper bound, being greater than 1, is trivial. Indeed, in that interval the bound is decreasing in Δ; hence it attains its minimum at Δ = exp(−1/2). Its value there is equal to 1 for k = 0 and to (1/8)e^(5/2) > 1 for k = 1. Stepping further from k to k + 1, the right-hand side gets multiplied by a factor of at least 2, where we used the fact that ((k + 1)/k)^k is increasing. Finally, it is easy to see that 2^(2k+1)/(2k)! is decreasing for k ≥ 1, from which the second inequality follows. If p satisfies (8), that is, p_k cannot decrease faster than exponential, then (9) implies that the estimate of Theorem 2 is sharp in the order of magnitude. Theorem 3. Let c_k = p_k − q_k, where p = (p_k)_{k≥0} and q = (q_k)_{k≥0} are discrete probability distributions. Suppose that the tails satisfy Σ_{j≥k} (p_j + q_j) ≤ φ(k), where φ is a positive, continuous, and strictly decreasing function tending to zero. Then the stated upper bound on |c_k| holds for every k < φ^(−1)(Δ).
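The quantities introduced above are easy to explore numerically. The sketch below takes a Poisson p and an arbitrary small perturbation q (our own example, not one taken from [2]), and compares Δ(f), the largest single-coefficient deviation, and the sum of coefficient deviations; the truncation point N and the grid size are assumptions of the sketch.

```python
import math

# p is Poisson(lam) truncated at N; q moves a little mass from term 3 to 5.
lam, N = 2.0, 60
p = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(N)]
q = p[:]
q[3] -= 1e-3
q[5] += 1e-3

def gen(dist, x):
    """Generating function sum_k dist[k] * x**k."""
    return sum(pk * x**k for k, pk in enumerate(dist))

# Delta = sup over [0, 1] of |p_hat(x) - q_hat(x)|, approximated on a grid.
grid = [i / 1000 for i in range(1001)]
delta = max(abs(gen(p, x) - gen(q, x)) for x in grid)

max_term = max(abs(a - b) for a, b in zip(p, q))   # worst coefficient
tv_sum = sum(abs(a - b) for a, b in zip(p, q))     # sum of |p_k - q_k|
print(f"Delta = {delta:.2e}")        # about 1.9e-04 for this perturbation
print(f"max_k |p_k - q_k| = {max_term:.2e}, sum = {tv_sum:.2e}")
```

Here Δ is several times smaller than the largest coefficient deviation, which illustrates why bounding |p_k − q_k| and ‖p − q‖ in terms of Δ alone is a nontrivial problem.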
Remark 5. If h(k)^(−1) log(1/p_k) is bounded away from zero and from infinity as well, then both the upper estimate of Theorem 3 and the lower estimate of (11) are applicable and they are of the same order of magnitude. Use the fact that for regularly varying functions h of order ρ > 0 the inverse h^(−1) is regularly varying of order 1/ρ. Remark 6. Reference [6] proved similar bounds with different constants, but of the same order of magnitude. However, they imposed conditions similar to (10) on the sequence c_k = p_k − q_k, rather than on p, which is less useful for applications in probability. Besides, for the estimate of Theorem 2, which is true without any restriction on the coefficients, they needed exponential decay of the sequence (c_k). Remark 7. If h is linear, that is, Σ_{j≥k} p_j ≤ exp(−vk), and this holds with a sufficiently large v (v ≥ 2 will do), the upper bound of Theorem 3 is better for k > 0 than that of Theorem 2. (For k = 0 the bound |p_0 − q_0| ≤ Δ is obviously the best possible.) Particularly, let p be the Poisson distribution with mean λ. Then, for k > λ, the tail can be estimated explicitly (see Theorem A.15 in [8]); hence h(k) = k log k + O(k), and h(k)/k → ∞ as k → ∞. In addition, h^(−1)(y) > y/log y eventually. Let us plug this back into Theorem 3 to get the following estimate. Corollary 8. If p is Poisson, then, uniformly in q, the upper estimate of Theorem 3 holds for every k < log(1/Δ)/log log(1/Δ), if Δ is sufficiently small. Note that the order of this upper bound is the same as that of the lower bound (12). Estimation for the Total Variation Distance Let again c_k = p_k − q_k, where p = (p_k)_{k≥0} and q = (q_k)_{k≥0} are discrete probability distributions. As we have seen, if nothing is known about p and q, it is impossible to give a nontrivial upper bound for the total variation distance ‖p − q‖ = Σ_{k=0}^∞ |p_k − q_k|. However, when p is fixed, the situation is completely different: Theorem 9 gives an upper bound for ‖p − q‖ in terms of Δ and the tail function φ. The right-hand side tends to 0 as Δ → 0 only if the tail of p is not too heavy; namely, φ(x) = o(x^(−1)). The method of proof will be applied a couple of times in the sequel with different parameters. Therefore we formulate its essence in a separate lemma as a master inequality. Lemma 10. Let a, b, and m be positive real numbers with a ≤ b, satisfying the corresponding summability conditions. Proof. Let n = ⌈m⌉; then n − 1 < m ≤ n. Clearly, the sum can be split at n, and by the supposition each part can be bounded; hence the claim follows. Proof of Theorem 9. Starting from Theorem 3, let us apply Lemma 10 with a = 3Δ, b = (2k + 1)^(1/2) φ^(−1)(Δ), and m = φ^(−1)(Δ). We get the stated bound. Particularly, let p be the Poisson distribution with mean λ. As we have seen in (35), h(k) = k log k + O(k). Let us plug this into Theorem 9. Writing 4λ in place of λ in the exponent we can get rid of the term 1 + o(1), and even the multiplier 8 gets eventually absorbed. Thus we obtain the following estimate. Corollary 11. Let p be Poisson; then, uniformly in q, a bound of the stated form holds if Δ is sufficiently small. This is already similar to the lower bound (16), and is much better than (13). If the tail of p is subexponential, that is, Σ_{j≥k} p_j = exp(−h(k)) with h(k) = o(k), then the estimate of Theorem 3 is useless: it tends to infinity as Δ → 0. However, with suitably chosen parameters in the master inequality, a reasonable upper bound can be obtained: not really sharp, but at least not trivial. Theorem 12. Suppose φ(x) = exp(−h(x)) with h as above. Since (4e²)^(1/4) = 2.331 ⋅ ⋅ ⋅ < e, the first term in the right-hand side can be estimated by a positive power of Δ, while the second term decreases slower than any positive power of Δ as Δ → 0; thus it will eventually dominate. Finally, let us deal with the case of exponentially decaying tails. Let φ(x) = exp(−vx), with v ≥ 0. Then the total variation ‖p − q‖ can be estimated by a positive power of Δ. This follows from Theorem 9, but only for sufficiently large v. The following theorem is valid for all positive v. Theorem 14. Suppose φ(x) ≤ exp(−vx), with v > 0. Then ‖p − q‖ ≤ 8Δ^(v/(v+7)).
As the bound is not sharp, the concrete form of the exponent holds no interest; what really matters is that it is positive and tends to 1 as v → ∞. Note that if p_k tends to 0 at an exponential rate, (14) provides a lower bound which is also a positive power of Δ (from Theorem 2.4 of [2] it follows that γ = (v + 1/4)/(v + 2) can be used for the exponent).
2018-12-13T18:30:53.120Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "24aeac431013cada382332cbba92242f37294b3d", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jps/2016/6212567.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "24aeac431013cada382332cbba92242f37294b3d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
51708650
pes2o/s2orc
v3-fos-license
Hallmark of success: top 50 classics in oral and maxillofacial cone-beam computed tomography Purpose The aim of this study was to identify the top 50 cited articles on the use of cone-beam computed tomography (CBCT) for oral and maxillofacial applications and to summarise the characteristics of the most impactful research articles in this domain. Material and methods A database was generated by combining the search results from Thomson Reuters Web of Science and Elsevier's Scopus to ensure that all top-cited publications were captured. We used three search fields to generate the database: 1) CBCT, 2) oral and maxillofacial pathologies, and 3) oral and maxillofacial anatomical structures. Publications were then ranked by citation counts and reviewed by two independent reviewers. Results A total of 50 top publications were included in the study. Their citation count ranged from 43 to 170, with a median of 55.5. Five publications were cited more than 100 times. All except for one paper were published after 2000. The journal with the most publications was the American Journal of Orthodontics and Dentofacial Orthopedics (n = 12), and the United States of America (n = 15) was the most productive country in the field. The majority of the studies (n = 27) discussed the imaging of primary tooth pathologies, but there were also a significant number of articles that discussed imaging of bone grafts or dental implants (n = 7), upper airways (n = 5), the skull (n = 4), and other maxillofacial structures (n = 7). Conclusions Our study identifies 50 research articles with the highest number of citations in oral and maxillofacial CBCT, discusses the characteristics and commonalities between these articles, and predicts future trends in the field. Introduction Cone-beam computed tomography (CBCT) is a relatively recent technology, which can provide three-dimensional volumetric information about oral and maxillofacial structures. It involves having an X-ray source and detector simultaneously rotate 360 degrees around the patient's head, which is fixed and stabilised. This is then used to generate a multi-planar three-dimensional volume data set to provide diagnostic images. Compared to conventional CT, CBCT has numerous benefits including reduced radiation dose, improved image accuracy, and superior image resolution [1]. For these reasons, it has been used for various applications including assessment of tooth root morphology [2,3], pathology and resorption of temporomandibular joint (TMJ) disorders [4], implant correlation to anatomical structures [5], periapical lesions [6], and endodontics [7]. Because of the varied clinical applications of oral and maxillofacial CBCT, it is important to survey the existing literature and study the trends of this progressive field. Bibliometrics utilises statistical analysis to examine the effectiveness and efficiency of peer-reviewed research [8]. Citation analysis, a frequently used method, analyses publications by their citation counts and reflects the impact and quality of the top articles [9]. In spite of some limitations, citation analysis remains an important method to assess publications that have made a significant impact on a particular field [10]. The two most widely used bibliometric databases are Thomson Reuters Web of Science, which provides coverage of 12,000 journals, and Scopus, covering over 22,000 journals and proceedings volumes [11].
Material and methods A database of the most influential publications on the topic of oral and maxillofacial CBCT was generated using Thomson Reuters Web of Science and Elsevier's Scopus. All journals were included regardless of their field of specialty, language, country of origin, or electronic availability of the abstract. The terms were combined in the following format: A total of 4953 publications resulted from these search terms, with publication dates ranging from 1975 to 2016. The publications were arranged by their total citation counts, in descending order. One board-certified radiologist and one board-certified dentist screened the 321 top-cited manuscripts for inclusion of the publications that discussed the clinical applications of oral and maxillofacial CBCT. Publications that used reconstruction models, extracted teeth, phantom models, or human cadavers were excluded. Additionally, those that were not related to oral and maxillofacial CBCT, explored basic science research, or did not include human subjects were excluded. Meta-analyses, reviews, letters, editorials, and communication and case reports were also excluded. From the 321 most cited articles, 50 publications were chosen based on the above inclusion and exclusion criteria and compiled into the database. This is a sufficient sample size, which allowed us to identify the common characteristics of the most cited articles. Citation counts from Scopus, Web of Science, and the Web of Science Core Collection were collected and cross-checked. Final citation counts reported in our manuscript were taken from Thomson Reuters Web of Science. Using the method described by Lim et al. [21], we collected the following data: article title, WOS all-database citations, WOS Core Collection citations, Scopus citations, year, journal of publication, authors, number of authors, number of institutions, country of primary institution, study design, sample size, and imaged structures. We reported continuous variables using the mean, median, and range. Categorical variables were analysed by frequency and percentage. SPSS 20 was used to summarise the data. Results The list of the top 50 articles, their total citation counts, and citations per year were taken from Thomson Reuters Web of Science. This was cross-matched with the list generated by the same search terms using Elsevier's Scopus. The top articles are listed in Table 1. Citations (total and citations per year) The top 50 publications were cited between 43 and 170 times, with a median of 55.5 citations. On an annual basis, they were cited between 2.6 and 17 times, with a median of 7.1 citations per year. The top three publications by total citations were: 1) "Limited cone-beam CT and intraoral radiography for the diagnosis of periapical pathology", with 170 citations, 2) "Accuracy of cone beam computed tomography and panoramic and periapical radiography for detection of apical periodontitis", with 144 citations, and 3) "A clinical study of changes in the volume of bone grafts in the atrophic maxilla", with 120 citations. After adjustments were made for the number of citations per year, the top two articles remained in the same order, with 17.0 and 16.0 citations per year, respectively. However, "Comparison of airway space with conventional lateral head films and 3-dimensional reconstruction from cone-beam computed tomography", with 12.6 citations per year, became the third most-cited article on an annual basis.
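The ranking step itself is simple enough to reproduce. The sketch below recomputes total and per-year citation ranks for the four articles named above, using the citation counts reported in the text; the census year and the publication years of the first two articles (inferred from the reported citations-per-year figures) are assumptions of the sketch.

```python
# (title, total citations, publication year) for four of the top articles;
# the years of the first two are inferred from the reported per-year values.
articles = [
    ("Limited cone-beam CT and intraoral radiography ...",    170, 2007),
    ("Accuracy of cone beam computed tomography ...",         144, 2008),
    ("A clinical study of changes in the volume of bone ...", 120, 2001),
    ("Comparison of airway space with conventional ...",      101, 2009),
]
CENSUS_YEAR = 2017   # assumed year at which citations were counted

for title, cites, year in sorted(articles, key=lambda a: a[1], reverse=True):
    per_year = cites / (CENSUS_YEAR - year)          # citations per year
    print(f"{cites:4d} total  {per_year:5.1f}/yr  {title[:40]}")
```

Running this reproduces the reshuffle described above: the 2001 bone-graft study ranks third by total citations but drops behind the 2009 airway study once the counts are normalized per year.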
Year of publication The year of publication of the top-cited publications ranged from 1989 to 2011. All except for one paper were published after the year 2000. Figure 1 shows a graphical distribution of the top-cited publications. Number of authors The articles had an average of five authors. The most prolific authors are Cevidanes, Lucia Helena S. and Miller, Arthur J., who each had five publications. Cevidanes, Lucia Helena S. had two first authorships and Miller, Arthur J. had three last authorships. The top authors in the field are summarised in Table 2. Country of origin The United States of America contributed 15 of the highly cited publications to the field of oral and maxillofacial CBCT. This was followed by Brazil and Japan, which contributed six and five publications, respectively. The countries that contributed two or more publications are summarised in Table 3. Journals of publication Table 4 shows the journals with two or more publications and their impact factors. Additional descriptors We also analysed the manuscripts based on their study designs (prospective or retrospective), number of affiliated institutions, sample sizes, and primary imaged structures. This information is summarised in Table 5. Discussion We identified the top 50 cited articles in the field of oral and maxillofacial CBCT. Out of the 50 articles, the only article published before 2000 was "Craniosynostosis: diagnostic value of three-dimensional CT reconstruction". This article, by Vannier et al., was published in Radiology in 1989. This shows that CBCT is a rapidly expanding field of imaging that has only recently been utilised in clinical practice. The top three articles identified by citations per year compared the utility of oral and maxillofacial CBCT to radiographs in the diagnosis of periapical lesions, apical periodontitis, and nasopharyngeal airway restriction, respectively [22][23][24]. The third most cited article by total citation counts, "A clinical study of changes in the volume of bone grafts in the atrophic maxilla", by Johansson et al., published in Dentomaxillofacial Radiology in 2001, examined volumetric changes associated with bone grafts. The articles were affiliated with an average of 5.24 authors and 3.7 institutions, and most of the articles (n = 30) had three or more affiliated institutions. This shows that there is much collaboration in the field. Most of the articles (n = 38) had a patient sample size of less than 100, indicating that a larger sample size is not necessarily correlated with success. Most of the studies (n = 42) were prospective. Although the majority of the studies (n = 27) imaged primary tooth pathologies, there were also a significant number of articles that discussed imaging of bone grafts or dental implants (n = 7), upper airways (n = 5), the skull (n = 4), and other maxillofacial structures (n = 7). Thus, it can be seen that oral and maxillofacial CBCT has a wide array of clinical applications. The top three journals were the American Journal of Orthodontics and Dentofacial Orthopedics (n = 12), the Journal of Endodontics (n = 8), and Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontics (n = 8). There is healthy competition amongst the different countries in this research field. The United States of America (n = 15) was the most productive country, followed by Brazil (n = 6) and Japan (n = 5). Importantly, Sweden (n = 2) contributed two of the three most cited articles.
Identification of top articles by their citation counts is a relatively good measure of academic success, but this approach also has its limitations. The first is the "obliteration by incorporation" phenomenon, which occurs when information becomes common knowledge, so that the landmark articles are rarely cited [25]. Because CBCT is a relatively new technology, we suppose that this phenomenon did not significantly affect our study. The second limitation is that publications from earlier years were more likely to appear in our database because they have had more time since publication and were thus more likely to be cited. For example, the third most cited article in our database, "A clinical study of changes in the volume of bone grafts in the atrophic maxilla", published in 2001, had a total citation count of 120 (7.5 citations per year), while the fifth most cited article, "Comparison of airway space with conventional lateral head films and 3-dimensional reconstruction from cone-beam computed tomography", published in 2009, had 12.6 citations per year with a total citation count of 101. In addition, our database did not include articles published after 2011 because these articles did not have enough time to accumulate an adequate number of citations. Because the research field is still in its infancy, it would be valuable to re-examine the literature after a few years to identify the top-cited publications. Self-citation is another limitation of citation analysis. Previous research indicates that self-citations do not significantly influence research [26]. However, it may play a role in our bibliometric analysis because each article had five authors on average. Furthermore, open-access articles may be able to acquire more citations because they are available without subscription [27]. However, in our bibliometric analysis, open-access articles did not play a major role because only five of the top 50 articles were available via open access. This is probably because most academic researchers would likely have access to institution-based journal subscription services and be able to access the information found in these articles. Additionally, we only considered articles that were published in peer-reviewed academic research journals in our study. Thus, we did not include "grey literature" such as opinion or positional papers, government documents, or conference proceedings. Furthermore, Scopus, Web of Science, and Google Scholar generated different citation counts. Web of Science was used for this study because it was the most consistent with our search criteria. Lastly, we found many irrelevant publications in our initial search; these were filtered out after discussion with a team of three independent reviewers.
2018-08-06T13:23:14.105Z
2018-01-20T00:00:00.000
{ "year": 2018, "sha1": "f331cc2ff24e50d5a40e7731914ce0f28fd47265", "oa_license": "CCBYNCND", "oa_url": "https://www.termedia.pl/Journal/-126/pdf-32235-10?filename=Hallmark%20of%20success.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "dd384791169297ab40bfe145978c3f968f9b66da", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55848512
pes2o/s2orc
v3-fos-license
Chitinozoan dynamics and biostratigraphy in the Väo Formation (Darriwilian) of the Uuga Cliff, Pakri Peninsula, NW Estonia The distribution of chitinozoans in the Väo Formation (Lasnamägi and Uhaku regional stages, Darriwilian) of the Uuga Cliff, Pakri Peninsula, NW Estonia, was investigated from 62 samples. Chitinozoans are very common and the assemblages are diverse, with a total of 36 species and up to 170 specimens per gram of rock. The assemblage is dominated by Belonechitina, Desmochitina, Cyathochitina, and Euconochitina. The relative and absolute frequency of particular taxa displays regular and possibly cyclic patterns, which are not directly reflected in lithology or geochemistry. Abrupt changes in chitinozoan abundance, coinciding with some of the discontinuity surfaces, suggest the presence of stratigraphical gaps. The diversity of chitinozoans is highest in the lower part of the Väo Formation (Rebala Member), where up to 20 species were identified in one sample and the standing species diversity exceeded 25. Within the studied interval at least 11 biostratigraphical horizons can be distinguished, including the subzonal boundaries within the Laufeldochitina striata Zone. The disappearance of abundant Belonechitina pellifera can be used for tracing the lower boundary of the Uhaku Stage. INTRODUCTION The Middle Ordovician Darriwilian Stage includes the uppermost Volkhov, and the Kunda, Aseri, Lasnamägi, and Uhaku regional stages in the Baltic region. This stage falls within a crucial interval of geological time characterized by rapid diversification of several groups of organisms (see, e.g., Servais et al. 2009 and references therein). The chitinozoans, most likely representing a reproductive stage of yet unknown marine metazoans (Paris & Nõlvak 1999), originated in the Early Ordovician. However, it was in the Darriwilian that they became a diverse group and common microfossils. Next to graptolites and conodonts, chitinozoans are some of the most useful index fossils for the entire Ordovician (Paris et al. 2004; Webby et al. 2004). Their huge biostratigraphical potential was first suggested by Männil (1969, 1971, 1972), who successfully correlated Ordovician sequences between the East Baltic and Scandinavia. Since then many studies on Ordovician and Silurian chitinozoans of Estonia and neighbouring areas have been published. The Ordovician chitinozoan biozonation of Baltoscandia was formally proposed by Nõlvak & Grahn (1993), and later updated by Nõlvak (1999) and Nõlvak et al. (2006). This biozonal scheme is nowadays widely used for regional and wider correlations (e.g. Webby et al. 2004). Although the largest chitinozoan collections in the world come from the Baltic area (Paris et al. 2004), our knowledge of their taxonomy and distribution in the Baltic Darriwilian is still unsatisfactory. A number of Darriwilian species are yet to be described and their biostratigraphical potential remains to be fully utilized.
This study concentrates on the upper Darriwilian Väo Formation and its under- and overlying strata, which represent an interval of rapid increase in chitinozoan abundance and diversity in the Baltic area. Better documentation of this diversification episode and improved biostratigraphical resolution have been the main objectives of this study. With respect to biostratigraphy, finding new criteria for the lower boundary of the Uhaku Stage and correlation between the chitinozoan and conodont biozones have been of particular interest. We also aimed at obtaining abundance and relative frequency data on East Baltic Ordovician chitinozoans. Hitherto such data are practically missing except for the report by Grahn (1984) from Tallinn, northern Estonia. Yet, quantitative microfossil data may contribute to a better understanding of palaeoenvironments and depositional regime, and aid regional biostratigraphy. The Väo Formation is also of economic interest as it contains the so-called Lasnamägi Building Limestone that has been actively quarried and used in constructions for more than 600 years. Precise dating of these beds may help, for instance, archaeologists and restoration builders in their work. The studied locality is an easily accessible coastal cliff on the Pakri Peninsula, representing the best Lower Cambrian to Middle Ordovician succession in northwestern Estonia. In spite of the fact that several palaeontological studies have been conducted on the Pakri succession in recent decades and the sections are frequently visited by geologists, the Darriwilian biostratigraphy has remained virtually unknown in this locality. The present study aims to fill this gap. GEOLOGICAL SETTING AND STRATIGRAPHY During the Darriwilian the Baltica palaeocontinent, with the epicontinental Baltic palaeobasin, was located between 60° and 30° southern latitudes, drifting northwards (Cocks & Torsvik 2005). At that time the Baltic palaeobasin was characterized by carbonate sedimentation in the areas of present-day Sweden, Estonia, Latvia, Lithuania, and NW Russia. The Uuga coastal cliff (59°21′41″N, 24°2′22″E; outcrop 4a of Orviku 1940) is located on the Pakri Peninsula, NW Estonia, some 50 km west of Tallinn. This area represents shallow-water settings of the Ordovician palaeobasin, traditionally called the North Estonian Confacies or Estonian Shelf (Fig. 1). Fennoscandian land was located north of it, whilst deeper shelf environments, known as the Central Baltoscandian Confacies or Livonian Basin, were present to the south and to the west (in South Estonia, Latvia, and Sweden). The configuration and development of the Baltic palaeobasin are discussed in detail by, e.g., Männil (1966) and Nestor & Einasto (1997). The studied 6-m succession is composed of limestones (wacke- to packstones, occasional grainstone interbeds) and secondary dolostones. Above the dolomitic Pae Member, limestones are slightly dolomitized (dolomite reaching ca 12%). The content of siliciclastic material varies generally between 5% and 15% (average ca 8%) in the Väo Formation, reaching ca 25% in the Aseri Formation (Fig. 2). Siliciclastic material is mostly represented by clay, except in the Aseri Formation, where phosphatic or goethitic ooliths and an admixture of quartz sand are recorded. Numerous impregnated and commonly bioturbated discontinuity surfaces occur in the studied interval (Fig.
2, see also Orviku 1940). The δ13C stable carbon isotope curve shows only small variations, mostly between −1.0‰ and 0.0‰, with an increasing trend in the Aseri and lower Lasnamägi stages and slightly decreasing values in the Uhaku Stage (Fig. 2). The Väo Formation on the Pakri Peninsula is rich in shelly fauna and various groups of acid-resistant microfossils such as acritarchs, chitinozoans, scolecodonts, and conodonts. The Väo Formation spans the Lasnamägi and Uhaku regional stages and is underlain by the Aseri Formation (Aseri Stage) and overlain by the Kõrgekallas Formation (Uhaku Stage). It is further subdivided into the Rebala, Pae, and Kostivere members. The Pae and Kostivere members and the lower part of the Kõrgekallas Formation make up the so-called Lasnamägi Building Limestone. Local quarrymen have named over 50 individual beds within this unit. Many of these units can be recognized all over northern Estonia (e.g. Einasto & Hints 2004), denoting a very flat sea-floor and uniform depositional conditions, as suggested also by, e.g., Jaanusson (1976). The succession of the Building Limestone on the Pakri Peninsula is discussed and illustrated by Einasto & Rähni (2005; based on the Paldiski 5 drill core). The boundary between the Lasnamägi and Uhaku stages falls within the Väo Formation and is approximated by the first appearance datums (FADs) of the graptolite Gymnograptus linnarssoni and the conodont Eoplacognathus robustus (e.g. Männil 1986). In the type section in Tallinn this boundary is drawn at the level of a marked discontinuity surface within the bed named 'Raudsüda', above which G. linnarssoni appears. In the Uuga section the same discontinuity surface is recognized at 2.45 m above the base of the Väo Formation. According to V. Viira (pers. comm. 2009), E. robustus appears in slightly younger strata (ca 3.2 m from the base of the Väo Formation), which probably indicates an unfavourable environment rather than the true FAD of this species. MATERIAL AND METHODS The study is based on 62 bed-by-bed samples collected in 2006 for integrated geochemical and micropalaeontological analysis from the Aseri, Lasnamägi, and Uhaku stages of the Uuga Cliff. The sampling depths are measured from the base of the Väo Formation. The samples are numbered consecutively from OM6-1 (lowermost) to OM6-62 (uppermost); for brevity they are referred to as samples 1 to 62 in this paper. All bulk rock samples were split into three parts before processing. In order to obtain abundance and relative frequency data on chitinozoans, 1-40 g (on average 10 g) of each initial sample was digested using diluted hydrochloric acid. From these 'portions', referred to as quantitative samples below, all chitinozoans were picked and counted. In order to reveal large and rare species, assess the variability of chitinozoans, and study conodonts and scolecodonts, another 250-1100 g (on average 700 g) of each bulk sample was treated with diluted acetic acid. From these 'portions', referred to as qualitative samples, only a selection of chitinozoans were picked. The remaining parts of the bulk samples were used for geochemical and lithological analyses, and kept for future reference. Two of the bulk rock samples (24 and 25) were further split into nine stratigraphically successive 'spot samples', 1-15 g each, for detailed abundance analysis. The acid preparation residues were sieved through a 20 or 45 µm mesh; chitinozoans were hand-picked and stored in glycerin.
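As a minimal illustration of how the counts from the quantitative samples translate into the measures defined just below (abundance, relative frequency, and the Shannon diversity index), consider the following sketch; the species counts and the 10 g sample mass are hypothetical, and the index is computed with natural logarithms, as is conventional.

```python
import math

# Hypothetical picked counts for one quantitative sample and its mass.
counts = {"Belonechitina micracantha": 48, "Desmochitina minor": 30,
          "Cyathochitina calix": 12, "Euconochitina primitiva": 10}
sample_mass_g = 10.0

total = sum(counts.values())
abundance = total / sample_mass_g           # specimens per gram of rock
rel_freq = {sp: 100 * n / total for sp, n in counts.items()}   # per cent

# Shannon index H = -sum(p_i * ln(p_i)) over the species proportions.
H = -sum((n / total) * math.log(n / total) for n in counts.values())

print(f"abundance = {abundance:.1f} specimens/g, Shannon H = {H:.2f}")
for sp, f in sorted(rel_freq.items(), key=lambda x: -x[1]):
    print(f"  {sp}: {f:.0f}%")
```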
The term abundance used in this paper means the number of chitinozoan specimens per gram of rock; relative frequency denotes the percentage of a taxon in a sample. Diversity indices were calculated and some statistical analyses performed with the PAST software (Hammer et al. 2001). Chitinozoan classification in this study follows Paris et al. (1999). Background information on rock composition was obtained from polished and thin sections, and from XRF chemical analysis of 11 major elements (Mg, Al, Si, P, S, K, Ca, Ti, Mn, Fe, Ba). Additionally, whole-rock δ13C was measured from all samples. The research was conducted at the Institute of Geology at Tallinn University of Technology (GIT), where also the samples, residues, and all recovered specimens are deposited. For figured specimens the collection number 590 is allocated. More information on individual samples and specimens can be obtained from the on-line catalogue at http://sarv.gi.ee. The chitinozoan fauna Chitinozoans are very common in all samples studied. Approximately 10 600 specimens were picked, identified, and counted from the quantitative samples. In the set of large (qualitative) samples the total yield was roughly 100 times larger, but not all specimens were picked and none were counted. Altogether 36 different species have been identified, including a few morphologically distinct and stratigraphically constrained species which are referred to under open nomenclature. The preservation of organic-walled microfossils is mostly good to excellent in the sampled section, but poor in the dolostone of the Pae Member and partly in dolomitized and/or weathered limestones of the Kostivere Member and Kõrgekallas Formation. Selected characteristic chitinozoans are illustrated in Fig. 3. Abundance The abundance of chitinozoans in the studied interval extends from ca 1.5 to 170 specimens per gram of rock, with an average of 22 specimens per gram (Figs 2 and 4). The maximum abundance values were recorded in the basal beds of the Rebala Member and in the lowermost part of the Kostivere Member. Minimum abundance was observed in sample 25 close to the Lasnamägi-Uhaku boundary. Grahn (1984) studied chitinozoans of the same interval from Tallinn, some 50 km to the east of the Uuga Cliff (Fig.
1A). Both localities belong to the shallow-water Estonian shelf and the environmental gradients between the two localities were supposedly subtle. Grahn (1984) reported somewhat smaller abundance values in the Aseri (2-6 specimens per gram) and Uhaku stages (5-10, occasionally 21 specimens per gram). The difference is more pronounced in the Lasnamägi Stage, where Grahn (1984) found generally less than 2 specimens per gram in the lower part of the stage, and 7-15 specimens per gram in the upper part of the stage, with maxima of 7 and 48 specimens, respectively. This discrepancy may partly be a result of different sediment accumulation rates that have 'diluted' microfossils in the matrix in the Tallinn area, where the entire Väo Formation is 1.5-2 times thicker than on the Pakri Peninsula. However, this alone is not sufficient to explain 10-fold differences in chitinozoan abundance. As a methodological offset cannot be ruled out, the question awaits resampling of the Tallinn section in order to test if such a difference truly exists and is statistically significant. Only then can the possible factors responsible for notable areal as well as stratigraphical variations in abundance be discussed and perhaps explained. A similar, likely methodological difference between the data of different chitinozoan workers has previously been reported by Vandenbroucke (2004) from the Fågelsång section, Sweden. The question about possible causes of abundance variations arises also in the case of a single section. Studying lower Silurian microfossils, Hints et al. (2006) noted that chitinozoans often display highly variable abundance patterns not matching the lithological change. Grahn (1982) also reported that fluctuations in chitinozoan abundance could not be correlated with lithology. The same is true of the Uuga section, where notable changes in chitinozoan abundance occur in lithologically rather monotonous intervals. For instance, the interval at ca 3-4.5 m displays two gentle peaks in the abundance curve, which are traced on the basis of nearly 20 samples (Fig. 2) and are thus unlikely to be occasional. Similar trends in rock composition could not be detected. A considerable drop in abundance occurs between samples 24 and 25, where the number of vesicles decreases from ca 90 to 2 per gram of rock. Both samples contain several discontinuity surfaces (DS), but otherwise the lithology is very similar and no clear indications of, e.g., a transgressive event or a major change in the deposition rate can be detected (Fig. 2). In order to obtain more information about this particular change, a separate set of centimetre-scale samples was studied. The results illustrated in Fig. 4 show that the number of vesicles first decreases sharply from ca 170 to 70 within sample 24, and then drops further to ca 14 vesicles per gram at the level of a pyritized DS. Possibly the actual sharp change coincides with the DS and the strata 2-3 cm below it contain a time-averaged assemblage. Hence the DS most likely marks a stratigraphical gap, which is also supported by the appearance of the zonal conodont Eoplacognathus robustus in sample 25. Above this DS a smooth decrease in abundance continues, reaching the minimum value of 1.4 vesicles per gram in the upper part of sample 25. In that sample several phosphatic DSs occur (Fig. 4), but they seem to lack a clear relationship with chitinozoan abundance. In contrast to chitinozoans, the abundance of polychaete jaws shows only slight variations in the same interval (ca 1-3 maxillae per gram of rock; Fig.
4). Therefore, the decreasing chitinozoan abundance cannot be explained by general depositional processes that would affect all microfossils with a similar size and composition. Grahn (1982) suggested that several physical factors such as temperature, salinity, and nutrient input may have been responsible for variations in chitinozoan abundance, leaving no obvious traces in lithology. The same explanation may perhaps apply to the Väo Formation of the Uuga Cliff section. However, other localities need to be studied in order to exclude the possible effects of, e.g., local currents and hydrodynamic concentration. Genus- and species-level frequency patterns The most common genera that dominate the chitinozoan faunas in the Uuga section are Belonechitina and Desmochitina. The maximum relative frequency of these genera is 95% and 75%, respectively. Throughout the section species of Cyathochitina display several stratigraphically restricted pulses of higher frequency, accounting occasionally for nearly 70% of recovered vesicles (Fig. 2). In the Rebala Member also Euconochitina often predominates in the assemblage, with a maximum of 65% of specimens. All other genera, including Conochitina, Pterochitina, Rhabdochitina, and Laufeldochitina, may be common, but their relative frequency typically remains well below that of the aforementioned taxa. The dominant species make up 20-90% and on average 40% of the assemblage. The most common chitinozoan in the Uuga Cliff is Belonechitina micracantha s.l., whose relative frequency peaks in the basal part of the Rebala Member and the basal part of the Kostivere Member (note the frequency charts in Fig. 2). Other species that may account for more than half of the specimens in a sample are Euconochitina primitiva, Cyathochitina calix, Cyathochitina campanulaeformis, and Desmochitina minor s.l. Relative frequency, expressed as a percentage, is dependent on the other taxa found in a sample. Thus the absolute frequency (i.e. abundance, specimens per gram of rock) of a particular taxon may be rather different from its relative frequency curve. To illustrate this feature, absolute frequency curves of Desmochitina and Cyathochitina are shown in Fig. 2 (a selection was made since all other genera could not be fitted into the same scale). One can see that in the case of Cyathochitina the two curves are relatively similar, but notably different for Desmochitina. For instance, a continuous increase was detected in the relative frequency of Desmochitina in samples 26-31 but not in the absolute frequency. Samples 24 and 25 were rather similar in relative frequency but very different with respect to absolute frequency. The frequency curves of particular genera or species seem not to be occasional but represent a rather regular pattern. This is best illustrated by the fluctuating frequency of Desmochitina (Fig.
2; note that Desmochitina accounts largely also for the above-discussed abundance fluctuations). Grahn (1982) was able to distinguish three groups of Middle Ordovician chitinozoan taxa, each with different environmental preferences: (1) high water energy (shallow-water) taxa, (2) low water energy taxa, and (3) 'facies-independent' taxa. For instance, Desmochitina was reported to reach its maximum frequency in skeletal sand bottoms with relatively high water energy in Sweden (Grahn 1981, 1982). When applying the same concept to interpret frequency patterns in the Uuga section, one might expect that Desmochitina-dominated intervals (for instance sample 37) represent shallower-water conditions than intervals with a smaller proportion of the same genus (sample 40). However, only slight variations in skeletal sand content and composition, and bulk-rock geochemistry were detected between these samples. The frequency curves of other taxa show no better correlation with lithological or geochemical data. This supports the opinion of Vandenbroucke et al. (2009) that chitinozoans (and most likely their parent organisms) were epi-planktonic and water-mass-specific rather than facies-specific. On a larger scale, however, there seems to be a certain correspondence to Grahn's data. In the Gullhögen Formation (lower Uhaku Stage in Västergötland, Sweden), which represents more offshore environments than in northern Estonia, the average proportion of Desmochitina is much lower than in the Uuga Cliff, remaining mostly below 10% and only occasionally reaching 20% (Grahn 1981). At the same time, Laufeldochitina striata, a 'low water energy species', reached well over 50% in the Gullhögen Formation (Grahn 1981). In the Uuga section, the latter species accounts for only a few per cent of the assemblage. Diversity The diversity of an assemblage can be estimated by species richness or various diversity indices. The former may be expressed by the number of species actually recorded in a sample. When studying a succession of samples (or time slices), diversity curves can take into account also range-through taxa. In the Uuga Cliff 8-20 (on average 14) species were recorded in the qualitative (large; see methods above) samples, the highest values occurring in the Rebala Member. In the quantitative (small) samples only 3-13, and on average 7.5 species were identified (Fig. 2). The total diversity curves including range-through species also show a notable difference between small and large samples, suggesting that less than 50 g samples are clearly inadequate for documenting taxonomic diversity in the studied section and similar limestone successions. The total diversity based on large samples reaches 28 in the Rebala Member, 25 up to the ca 3.7 m level, and displays a decreasing trend upwards. According to Hints et al. (2009) and Paris et al. (2004), the total diversity of chitinozoans in the Aseri-Lasnamägi-Uhaku interval of the entire Baltoscandian area is some 45 species, whilst the estimated mean standing diversity is only about 30. This is very close to what is observed in the Uuga section. It may be argued that this single section is taxonomically very diverse and complete, but more likely the hitherto provided assessments of the mean standing diversity underestimate the actual taxonomic richness of late Darriwilian chitinozoan faunas of Baltica. The diversity indices, such as the widely used Shannon index (see Hammer et al.
2001), take into account both the number of taxa as well as the number of individuals. The more taxa are present and the more evenly they are represented, the higher is the index. Generally higher values indicate smaller environmental stress. In the Uuga section the index is highly variable, fluctuating between 0.5 and 2. It is somewhat higher in the Rebala Member, roughly showing an increasing trend. Low values are encountered within the Pae Member, but these likely result from the poor preservation of chitinozoans. In the Kostivere and Koljala members the Shannon diversity stays mostly between 1.0 and 1.5, with a probably random peak in sample 32. No clear trend can be seen in this interval. It should be noted that the Shannon index correlates rather well with the observed species richness. On the other hand, there is no obvious relationship between diversity and abundance, except that the highest abundances are recorded in relatively low diversity samples (Fig. 2). Both the species richness and diversity index calculations show that generally the Rebala Member is characterized by a somewhat higher diversity of chitinozoans than the overlying Pae, Kostivere, and Koljala members (but note that the data from the Pae Member may be biased due to poor preservation). Additionally, certain small-scale diversity fluctuations can be observed, but not fully interpreted as of now. Compared to Grahn's (1984) data from Tallinn, northern Estonia, chitinozoan diversity is somewhat higher in the Uuga Cliff, but this may well be attributed to methodological differences and recent advancements in taxonomy. Biostratigraphy The general (qualitative) picture of chitinozoan distribution in the Väo Formation is relatively invariable, which probably indicates rather uniform sedimentological conditions as well as a short period of geological time. Many species, particularly of Desmochitina and Belonechitina, range throughout the sampled interval, being of limited use for biostratigraphy (Fig. 5). Others, however, can be useful for dating and subdividing the rocks of the Väo Formation. The Väo Formation corresponds to the Laufeldochitina striata chitinozoan total range zone (Nõlvak & Grahn 1993). In the Uuga section L. striata was recovered in most of the qualitative samples (Fig. 5). In the smaller quantitative samples, however, only rare specimens were found, indicating that at least ca 300 g samples are needed to establish accurate biostratigraphy. The L. striata Zone is further subdivided into three subzones (Cyathochitina sebyensis, Conochitina clavaherculi, and Lower Conochitina tuberculata) that can also be recognized in the Uuga Cliff (Fig. 5). A new species of Baltochitina was found in the lower part of the Aseri Formation. The same form has previously been recorded in the Kunda Aru quarry, NE Estonia, in the lower or middle part of the Aseri Formation. Desmochitina grandicolla is also typical of the Aseri Formation, with only a single specimen recorded in the basal Väo Formation. The Aseri-Väo boundary is marked by the appearance of Belonechitina pellifera and Conochitina sp. n. 1. Cyathochitina sebyensis, Belonechitina crinita, Pterochitina retracta, and Tanuchitina tallinnensis also appear approximately at the same level. The basal part of the Väo Formation corresponds to the C. sebyensis Subzone. Tanuchitina tallinnensis and Conochitina sp. n. 1 disappear and the first specimens of Conochitina clavaherculi appear some 0.4 m above the base of the formation, marking already the base of the Con. clavaherculi Subzone.
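The zonation logic applied here can be expressed as a simple rule: each subzone base is pinned to an appearance or disappearance level of an index species, and a sampling level is assigned to the deepest boundary it lies above. The toy sketch below encodes the boundaries used in this section (the Lower Con. tuberculata boundary is discussed in the following paragraph); the depths follow the text, while the function itself is only an illustration.

```python
# Subzone bases, in metres above the base of the Vao Formation, pinned to
# FAD/LAD events of index species as described in the text.
boundaries = [
    (0.0, "Cyathochitina sebyensis Subzone"),        # base of Vao Formation
    (0.4, "Conochitina clavaherculi Subzone"),       # FAD of Con. clavaherculi
    (3.7, "Lower Conochitina tuberculata Subzone"),  # LAD of Con. clavaherculi
]

def subzone(level_m):
    """Return the subzone for a sampling level (m above the formation base)."""
    label = None
    for base, name in boundaries:
        if level_m >= base:
            label = name
    return label

for level in (0.2, 1.5, 4.1):
    print(f"{level:.1f} m: {subzone(level)}")
```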
The interval between 0.8 and 1.7 m contains conochitinids with a peculiar light-brownish soft covering (Fig. 3T). Although these forms may include different species, the temporal distribution of this structure seems to be restricted and may be of regional stratigraphical utility. The level of ca 1.2 m corresponds to the last appearance datum (LAD) of Lagenochitina tumida and the end of the continuous and abundant occurrence of B. crinita. Only two specimens of the latter species are found higher up in the section, in sample 43 (4.1 m from the base of the formation). A similar distribution pattern was observed also in the case of B. pellifera, whose continuous range ends at ca 2.4 m, at the lower boundary of the Uhaku Stage. Rare specimens are found ca 1.5 m above that level in sample 40. For regional biostratigraphy continuous ranges are indeed more useful. An almost continuous and abundant occurrence of Belonechitina cactacea begins at 3.4 m, coinciding with a lithological marker horizon known as the 'double discontinuity surface'. Conochitina aff. dolosa appears and Con. clavaherculi disappears at 3.6-3.7 m, marking the base of the Lower Con. tuberculata Subzone. Note that Nõlvak et al. (2006) proposed defining the base of the Lower Con. tuberculata Subzone as the appearance of the nominal species. It seems, however, to be more practical to follow the original definition for the base of the Con. tuberculata Subzone provided by Nõlvak & Grahn (1993). Thus, in this paper the base of the Lower Con. tuberculata Subzone is drawn at the top of the last occurrence of Con. clavaherculi. The actual appearance of the Conochitina tuberculata group, containing also Con. aff. tuberculata (Con. subtuberculata nom. nud. of Männil 1986, fig. 2.1.1), begins at 4.7 m in the uppermost Väo Formation. The first specimens of Con. tuberculata are recorded just below the lower boundary of the Kõrgekallas Formation at a depth of 4.9 m. The above-described pattern conforms generally well with earlier data (e.g. Grahn 1984; Männil 1986, fig. 2.1.1; Nõlvak 1999). In a few cases, however, the ranges recorded in the Uuga section are slightly different. For instance, L. tumida seems to disappear earlier in the Uuga Cliff than in the Lasnamägi section discussed by Männil (1986). Biostratigraphically significant Cyathochitina regnelli and Baltochitina nolvaki ('Sagenachitina' in Männil 1986) were not recovered from the Uuga section. Männil & Rubel (1969) and Nõlvak (1972) applied recurrent abundance zones of Cyathochitina campanulaeformis, C. kuckersiana, and C. calix, and barren intervals in between them, to subdivide and correlate the early Late Ordovician (Uhaku to Haljala stages) in northeastern Estonia. Männil (1986, fig. 2.1.1), using the term 'zonule' for this kind of unit, extended the concept also to the Lasnamägi Stage of the Tallinn area. In the Uuga section species of Cyathochitina are common, but the recurrent 'zonules' of Männil (1986) cannot be unambiguously followed. Further data from between Pakri and Tallinn are needed to resolve this question. The range of C. sebyensis fits, however, well with previous data. The abundant occurrence of C. calix, which begins at the level of the Lasnamägi-Uhaku boundary in the Lasnamägi section (Männil 1986), is observed in slightly younger strata in the Uuga Cliff (sample 32, ca 0.5 m above the boundary). Apart from chitinozoan distribution, it is interesting to note that graptoloids were very rare in the Uuga section. Only a single specimen of Gymnograptus linnarssoni was found in sample 34 and G.
cf. retioloides occurred in sample 54. CONCLUDING REMARKS Chitinozoans are common and diverse microfossils in the Väo Formation and its under- and overlying strata, with at least 36 species recorded, and up to 170 specimens per gram of rock. Their abundance and relative frequency show rather regular and possibly cyclic fluctuations, which are not correlated with lithological or geochemical data. Whilst some chitinozoan species seem to display environmental preferences on a larger scale, some other explanation is needed for the successive frequency fluctuations within the Uuga section. Possibly these are related to changes in, e.g., temperature, currents, or nutrient supply that are not directly reflected in lithology. Numerous discontinuity surfaces that are common in the Väo Formation represent different events or processes. Only some of them coincide with notable changes in chitinozoan abundance and relative frequency, probably marking stratigraphical gaps. Hence quantitative data on chitinozoans may turn out to be useful for interpreting sedimentary successions and possibly for stratigraphy. Chitinozoan biodiversity, which shows a marked peak in the late Darriwilian of Baltica, is complemented by the data from the Uuga Cliff. The highest chitinozoan diversities within the studied interval were recorded in the Rebala Member of the Väo Formation, where the standing species diversity exceeds 25. From the methodological point of view, it turned out that 5-20 g samples yielding many hundreds of vesicles are still too small for adequate diversity estimations as well as for detailed biostratigraphy. At least 11 biostratigraphical horizons, including the subzonal boundaries of the Laufeldochitina striata Zone, can be distinguished within the studied interval. The disappearance of abundant Belonechitina pellifera seems to serve as a good proxy for tracing the lower boundary of the Uhaku Stage, which is otherwise identified by graptolites and conodonts. These levels could be useful for regional refinement of the biostratigraphical subdivision of the Väo Formation and for dating parts of the Lasnamägi Building Limestone. In order to test whether the observed chitinozoan ranges and frequency patterns are primarily time-controlled, environmentally caused, or simply represent changes in the hydrodynamic regime and local currents, data from other sections are needed. Fig. 1. (A) General map showing the distribution of shallow-water (Estonian Shelf) and deeper-water settings (Livonian Basin) during the Darriwilian. (B) Location of the Uuga Cliff on the Pakri Peninsula. Fig. 4. Detailed abundance curves of chitinozoans and scolecodonts in samples OM6-24 and OM6-25, based on nine successive 'subsamples' to achieve higher temporal resolution. For the general abundance curve see Fig. 2. The top of sample OM6-25 was on a separate small rock slab that could not be polished and scanned. The ca 1.5 cm gap between samples OM6-24 and OM6-25 is due to difficulties in taking monolithic samples from a vertical cliff wall. Note that the graph is drawn in logarithmic scale. Fig. 5. Distribution of chitinozoans and biostratigraphy of the Uuga Cliff. Taxa of biostratigraphical importance are underlined. For the lithological legend and abbreviations see Fig. 2.
An Empirical Model For Validity And Verification Of AI Behavior: Overcoming AI Hazards In Neural Networks

Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. This paper discusses hazards in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems, with a particular focus on artificial neural networks (ANNs). The paper provides a review of previous work in these areas and suggests research directions relevant to cutting-edge AI systems, with a focus on neural networks. Finally, the paper considers the high-level question of how to think most productively about the safety of forward-looking applications of AI.

zigzag-shaped function [2]. The question now is how we arrange neurons in an ANN to make it easier for a learning algorithm to find those biases and weights. For clarity and simplicity, the paper divides the most common ANN architectures based on three criteria: (1) number of layers, (2) flow of information, and (3) neuron connectivity.

Learning Algorithm

Designing network architectures is a difficult task, but training and teaching these networks is surely more difficult. To understand how an ANN is trained, it is best to start with a very simple one-neuron example [26]. The principles used to teach a single neuron are also used to teach a whole network; however, the network level adds extra complexity, which requires an additional step. Suppose you have a very simple neuron with one input and one output. You want to teach this neuron to do a certain task (for example, to memorize the multiplication table for the number 5). To teach this neuron, ANN researchers usually give it a so-called training set. A training set contains a number of different input values (1, 2, 3, 4, 5, 6, ...) paired with the correct outputs (5, 10, 15, 20, 25, 30, ...).

One notable aspect of the ANN model of learning is how researchers set the value of the learning rate. The learning rate is, in fact, one of many parameters that are left to humans and lie outside the ANN's control. For example, (1) the number of layers, (2) the number of neurons in each layer, (3) the size of the training set, (4) the activation function type, (5) the regularization parameter, and (6) the learning rate are some of these free parameters, which are called hyper-parameters [24,18]. Choosing the right values of the hyper-parameters is left to the person who manages the ANN.

Bandura [18] criticizes those views of human learning which concentrate merely on neural patterns to interpret learning and argues that such views strip humans of agentic capabilities and a self-identity. On the contrary, Bandura [18] conceives consciousness as an emergent property of brain activity that is not reducible solely to the properties of neuronal activity. In other words, consciousness is a higher-level force that results from lower-level neural activities, but its properties are not limited to them. As clarified in this study, ANN design shows the need for a conscious force to manage and regulate ANN learning, but this force does not occur as an emergent property of neural activity, as Bandura proposes. Rather, it is a completely distinct entity which uses, guides, and manages the neural activity and does not result from it. Therefore, overcoming hazards in the field of AI becomes crucial to maximize the societal benefit of AI, given its significant expansion.
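A minimal sketch of the one-neuron example above, assuming a linear neuron trained by gradient descent on squared error; the learning rate appears explicitly as one of the hyper-parameters left to the human operator. This is an illustration of the idea, not code from the paper.

```python
# One neuron with a single weight w and bias b; gradient descent adjusts
# them so that output is approximately 5 * input.
inputs = [1, 2, 3, 4, 5, 6]          # training-set inputs
targets = [5, 10, 15, 20, 25, 30]    # paired correct outputs

w, b = 0.0, 0.0          # weight and bias, initialized arbitrarily
learning_rate = 0.01     # hyper-parameter chosen by the human, not the ANN

for epoch in range(1000):
    for x, y in zip(inputs, targets):
        prediction = w * x + b       # linear neuron (identity activation)
        error = prediction - y       # signed prediction error
        # Gradient of squared error 0.5 * error**2 with respect to w and b:
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w = {w:.3f}, b = {b:.3f}")   # expect w near 5, b near 0
print("check: 7 ->", w * 7 + b)              # generalizes beyond the training set
```

With a learning rate that is too large, the same loop diverges instead of converging, which is exactly why this value cannot be left to the network itself.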
RESEARCH METHOD

The research was developed and constructed based on a review of various books, focusing in particular on Russell and Norvig. A central guiding question was: how can one enable meaningful human control over an AI system after it begins to operate? ("OK, I built the system wrong; can I fix it?")

RESULTS

User-Centered Design (UCD) is a systematic approach that can be used to overcome AI hazards during software development. UCD is typically divided into five main phases and focuses on software development using a top-down, holistic approach. The main goal of UCD is to start by building a user navigation model, using multiple UCD tools and techniques in alignment with the ANN development goal. The second major step is to use this navigation model to define and design the ANN application structure using multiple prototyping techniques (Figure 1).

Figure 1: Traditional SE (software engineering) vs. UCD

As can be seen in Figure 1, given the main departure from traditional software development, which starts with the application structure, the main focus of UCD is on the following points:
1) Who are the users of the ANN?
2) What do the users currently do?
3) What do the users want from the new ANN-based software?

The first step, identifying the users, starts by meeting with different stakeholders affiliated with the training domain. Two main aspects need to be considered:
i. Different types of users are researched and identified as "User Roles".
ii. Users can have different backgrounds, different experience, and different contexts to teach in. These differences are examined using "User Personas", a common UCD tool that is growing in popularity.

The second part, "what the users currently do", is investigated via different UCD tools. Several users can be met during brainstorming sessions or design workshops to get their input on their daily work practices. A series of interviews and questionnaires can also supplement this work, resulting in potential categories for data analysis.

The third part, "what the users want", can be addressed with deeper analysis of the findings extracted in the previous part. This is where navigation modeling is used as an innovative technique to envision the user needs in terms of a navigation structure that can be translated into an actual application structure. After successful completion of the User Research phase, the high-level and detailed design phases can start. The following sections provide an overview of the various phases of UCD.

Phase 1 - User Research

During the User Research phase, focus groups with engineering and computing systems staff can be organized to understand the course design process used by the participants. Participants can also be asked to fill in an electronic questionnaire about the software design tools that they currently use to create and manage their software. Data collected from the focus groups about the software design process can be categorized as inputs, processing and decision-making, and output artifacts.

Phase 2 - High-level Design

Once the user research has provided a relatively clear idea and understanding of domain and user needs, this initial design phase provides a high-level design with concept identification, conceptual modeling, and early prototyping for the ANN. The main goal of high-level design is to put schematic ideas and steps down into visual graphs and models: an early blueprint of the ANN. This can be done by investigating different options and providing design alternatives to ensure a broad view before settling on a good design.
Doing this early on, at a high level, sketchy, paper-based only, and without going into details can help provide several solution alternatives at very low cost. The high-level design sketches can be discussed with the users to make sure that what they said in unstructured dialogs, and their vague ideas and imaginings, are now concretely captured in design artifacts for further validation and clarification. At this stage, two tools are most suitable for the development stage (a small illustration of the first follows this list):

a) The Navigation Model is one of the essential methods of design. A significant challenge in complex software is not the content of each screen, but how the user mentally builds a view of how all screens are connected (like a city road map) and how to navigate between hundreds of screens to accomplish a task. In this regard, an effective technique, elastic prototyping, can be used as an implementation of participatory design to help designers and users build a navigation model together, greatly reducing the time and effort needed.

b) Prototyping (PT) is extensively used in UCD to visualize and validate otherwise vague ideas and unclear expectations at low cost and high effectiveness. There are three main categories of prototyping: paper (low-level) PT, low-fidelity electronic (medium-level) PT, and high-fidelity, detailed PT. Paper prototypes are very inexpensive and help capture and validate several initial ideas and concepts. After explaining their needs, users often change their minds when they see them on paper. Therefore, multiple paper PT sessions give a head start in validating what users actually mean and need. Once initial concepts, design ideas, and directions have been identified, a medium-fidelity prototyping stage can start, in which sketchy visualizations of key screens without contents are provided, to be gradually validated and filled with initial contents.

Phase 3 - Detailed Design

At this stage, the focus is on the main high-level solution, including details from different perspectives such as main application features, auxiliary features, concrete navigation models, menu options, visual and interaction consistency across all screens, exceptions and error messages and recovery, reliability assurances, and help. This phase can proceed in parallel with the development phase as more details are uncovered and technical problems arise. User interface mockups can be created with details of the various user inputs that will be solicited through the course design process.

Phase 4 - Development and Development Support

As implementation of essential features starts, close collaboration between designers and software engineers (software architects and developers) is essential to ensure the consistency of the design and to prevent any deviations. Several technical problems require careful reconsideration of detailed design and even of high-level design options. Iteration is a fundamental design approach that is extensively used across the UCD process. Therefore, UCD is highly iterative, and most of its phases overlap heavily to ensure that design and development decisions are aligned at all times with the actual user needs.
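As a concrete illustration of the navigation-model idea, the sketch below encodes a handful of screens as a directed graph and mechanically checks that every screen is reachable from the entry point. The screen names echo those mentioned in the next phase (login, registration, index, module creation), but the graph itself is hypothetical.

```python
from collections import deque

# Hypothetical navigation model: each screen maps to the screens a user
# can navigate to from it. "orphan_screen" is deliberately unreachable,
# to show the kind of defect this check surfaces before detailed design.
navigation = {
    "login":         ["index"],
    "index":         ["register", "create_module", "login"],
    "register":      ["index"],
    "create_module": ["index"],
    "orphan_screen": [],
}

def reachable(graph, start):
    """Breadth-first traversal: returns all screens reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreached = set(navigation) - reachable(navigation, "login")
print("screens no user can ever navigate to:", unreached)  # {'orphan_screen'}
```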
This phase of the project includes identifying appropriate technologies to be used for the development of the ANN application, design of the back-end database schema, installation and configuration of the server-side and client-side technologies, and development of the user interface screens for login, registration, index, and creation of an instructional module, as well as the connectivity of these web pages with the back-end database.

Analysis of Technologies

The purpose of analyzing various technologies during this phase of the project is to ensure rapid development with the latest technologies in the field of software development and to use open-source technologies wherever feasible. Towards this end, an analysis of web application frameworks, version control systems, server-side technologies, and client-side technologies can be performed.

System Architecture

A Model-View-Controller (MVC) architecture is suggested as the underlying web application framework. MVC is a software architecture pattern which separates the representation of information from the user's interaction with it. The recommended architecture can be described as follows:
• The foundation is the Java Virtual Machine (JVM).
• There is a separation between the Java language and the JVM.
• The final layer of the architecture is the application layer. This layer follows the Model-View-Controller (MVC) pattern.
• A controller handles requests and creates or prepares the response. A controller can generate the response directly or delegate to a view.
• A controller can have multiple public action methods, each of which maps to a URI.

DISCUSSION

There are many ways of responding to information hazards. In many cases, the best response is no response, i.e., to proceed as though no such hazard existed. The benefits of information may so far outweigh its costs that, even when information hazards are fully accounted for, we still under-invest in the gathering and dissemination of information. Moreover, ignorance carries its own dangers, which are oftentimes greater than those of knowledge. Information risks might simply be tolerated. When mitigation is called for, it need not take the form of an active attempt to suppress information through measures such as bans, censorship, disinformation campaigns, encryption, or secrecy. One response option is simply to invest less in discovering and disseminating certain kinds of information. Somebody who is worried about the spoiler hazard of learning the ending of a movie can simply refrain from reading reviews and plot summaries. At the same time, however, we should recognize that knowledge and information frequently have downsides. Future scientific and technological advances, in particular, may create information which, misused, would cause tremendous harm, including, potentially, existential catastrophe. It can also be hoped that new information technologies will bring about a vastly more transparent society, in which everybody (the watchmen included) is under constant surveillance, and that this universal transparency will prevent the worst potential misuses of the new technological powers that humanity will develop.
CONCLUSION

Even if our best policy is to form an unyielding commitment to unlimited freedom of thought, virtually limitless freedom of speech, and an extremely wide freedom of inquiry, we should realize not only that this policy has costs but that perhaps the strongest reason for adopting such an uncompromising stance would itself be based on an information hazard, namely, norm hazard: the risk that precious yet fragile norms of truth-seeking and truthful reporting would be jeopardized if we permitted convenient exceptions in our own adherence to them, or if their violation were in general too readily excused. It is said that a little knowledge is a dangerous thing. It is an open question whether more knowledge is safer. Even if our best bet is that more knowledge is on average good, we should recognize that there are numerous cases in which more knowledge makes things worse.
Myeloperoxidase and CD15 With Glycophorin C Double Staining in the Evaluation of Skin Wound Vitality in Forensic Practice

Background: The determination of skin wound vitality based on tissue sections is a challenge for the forensic pathologist. Histology is still the gold standard, despite its low sensitivity. Immunohistochemistry could allow a higher sensitivity to be obtained. Among the candidate markers, CD15 and myeloperoxidase (MPO) may allow early detection of polymorphonuclear neutrophils (PMN). The aim of this study was to evaluate the sensitivity and the specificity of CD15 and MPO, with glycophorin C co-staining, compared to standard histology, in a series of medicolegal autopsies and in a human model of recent wounds.

Methods: Twenty-four deceased individuals with at least one recent open skin wound were included. For each corpse, a post-mortem wound was made in an uninjured skin area. At autopsy, a skin sample from the margins of each wound and skin controls were collected (n = 72). Additionally, the cutaneous surgical margins of abdominoplasty specimens were sampled as a model of early intravital stab wound injury (scalpel blade), associated with post-devascularization wounds (n = 39). MPO/glycophorin C and CD15/glycophorin C immunohistochemical double staining was performed. The number of MPO- and CD15-positive cells per 10 high-power fields (HPF) was evaluated, excluding glycophorin C-positive areas.

Results: With a threshold of at least 4 PMN/10 high-power fields, the sensitivity and specificity of the PMN count for the diagnosis of vitality were 16 and 100%, respectively. With MPO/glycophorin C as well as CD15/glycophorin C IHC, the number of positive cells was significantly higher in vital than in non-vital wounds (p < 0.001). With a threshold of at least 4 positive cells/10 HPF, the sensitivity and specificity of CD15 immunohistochemistry were 53 and 100%, respectively; with the same threshold, MPO sensitivity and specificity were 28 and 95%.

Conclusion: We showed that combined MPO or CD15/glycophorin C double staining is an interesting and original method to detect early vital reaction. CD15 allowed a higher, albeit still limited, sensitivity to be obtained, with a high specificity. Confirmation studies in independent and larger cohorts are still needed to confirm its accuracy in forensic pathology.

INTRODUCTION

The determination of skin wound vitality is a challenge for the forensic pathologist. The detection of an inflammatory infiltration based on histology is to date the gold standard, being highly specific but showing a very low sensitivity in recent wounds. Most notably, in the first minutes or hours after the infliction of a wound, standard histological examination may not determine whether the wound was inflicted in the pre- or post-mortem period. Indeed, the delay before the detection of the first polymorphonuclear neutrophil (PMN) infiltration may vary from 10 min to 6 h (1, 2). Immunohistochemistry (IHC) is a cost-effective and easy-to-use method that could allow a higher sensitivity to be obtained (3). Among the candidate markers, CD15 and myeloperoxidase (MPO) may allow early detection of PMN (4)(5)(6)(7).
However, one potential pitfall on IHC slides is the difficulty of differentiating passive extravasation of PMN in hemorrhagic infiltration from true active diapedesis. The association with a red blood cell marker, such as glycophorin C, could allow specificity to be increased by avoiding the counting of inflammatory cells in hemorrhagic areas. The aim of this study was to evaluate the sensitivity and the specificity of CD15 and MPO, with glycophorin C co-staining, in comparison with standard histology, in a series of recent medicolegal wounds and post-mortem controls, and in a prospective human experimental surgical model.

Medicolegal Wounds

Twenty-four individuals (20 men, 4 women, mean age = 51.0 ± 24.3 years) with at least one recent open skin wound were included at the mortuary of the University Hospital of Montpellier. Skin wounds consisted of 20 lacerations from polytrauma cases (traffic accidents, falls from height) and 4 gunshot wounds. They were mostly located on the lower limbs and the torso. The time interval between trauma and death (survival time) was determined from medical records and police reports, including testimony from witnesses. It varied from a few seconds to 180 min (mean = 47 min). Bodies displaying putrefactive changes were excluded from the study, as were individuals with severe malnutrition, known immunodeficiencies, or immunotherapy. For each corpse, a 2-cm post-mortem incision was made with a scalpel in an uninjured skin area contralateral to the ante-mortem wound, shortly after arrival at the mortuary and before refrigeration. The elapsed time between death and the infliction of the post-mortem wound was between 0 and 180 min (median: 40 min). In patients with multiple ante-mortem wounds, the wound of interest was selected based on its location (no skin sample was collected from the head or hands, for ethical reasons) and on its size (large wounds were preferred to small ones). At autopsy, a skin sample from the margins of each wound (ante- and post-mortem) and from an uninjured skin area located on the midline incision line (control samples) were collected from each corpse and immediately placed for fixation in 10% buffered formalin solution. A total of 72 skin samples were removed, including 24 samples from each of the conditions: ante-mortem wounds, post-mortem wounds, and healthy skin (control samples). The average post-mortem interval at the time of sampling was 66.3 ± 28.3 h (24-117 h).

Surgical Wounds

As a model of recent vital stab wound injury, the cutaneous surgical margins of abdominoplasty specimens were prospectively collected at the Department of Maxillofacial and Plastic Surgery of the University Hospital of Nancy, France. The precise time interval between incision and devascularization was recorded for each margin, ranging from 0 to 61 min (median: 24 min). As a model of early post-mortem wounding, a wound was inflicted with a sterile scalpel in the center of the specimens, 5 min after devascularization. In the Pathology Department, tissue sampling was performed perpendicularly to the skin margins, on fresh tissue, before fixation in buffered formalin solution. Thirty-nine samples (26 pre-devascularization and 13 post-devascularization) were obtained from 13 patients.
Standard Histology

After formalin fixation and paraffin embedding, 5-µm sections were stained with hematoxylin, eosin, and saffron (HES), and a blind histological examination of the vital and post-mortem wounds was performed, taking into account the presence or absence of hemorrhagic infiltration and counting the number of PMN in 10 consecutive high-power fields (HPF) (×400 magnification; 0.237 mm² per field). A quantitative evaluation of staining for CD15 and MPO was performed by counting positive cells in 10 consecutive HPF (0.237 mm²/field) on one representative slide per case, in the immediate vicinity of the wound margin, from the superficial dermis to the deep subcutaneous adipose tissue, taking into account all interstitial leucocytes showing a stained cytoplasm and excluding intravascular cells and those within hemorrhagic areas, the latter being underlined by the anti-glycophorin C antibody.

Statistical Analyses

The presence or absence of interstitial hemorrhage was considered a qualitative variable, and the numbers of PMN, MPO-positive cells, and CD15-positive cells as quantitative variables. Statistical analysis was performed with IBM SPSS Statistics version 27.0 software. To compare the different groups (ante-mortem vs. post-mortem; pre-devascularization vs. post-devascularization), the Fisher exact test was used for qualitative variables and the Mann-Whitney-Wilcoxon test for quantitative variables. The correlation between quantitative variables was evaluated with Spearman's correlation coefficient. A p-value of less than 0.05 was considered statistically significant. Ante-mortem and pre-devascularization wounds were considered vital wounds, whereas post-mortem wounds, control samples, and post-devascularization wounds were defined as non-vital wounds. For the evaluation of sensitivity and specificity, true positivity was defined by a cell count equal to or greater than the defined threshold in vital skin wounds; false positivity by a count equal to or greater than the defined threshold in non-vital samples; true negativity by a count less than the threshold in non-vital samples; and false negativity by a count less than the threshold in vital wounds. Receiver operating characteristic (ROC) curves were produced for each marker, in order to screen for the optimal threshold.

Evaluation of Inflammation With Standard Histology

In the medicolegal wounds, no significant inflammatory reaction was seen on standard histology slides (Figure 1A). In the surgical wounds, a significant inflammatory reaction was found in 2 cases, showing a PMN infiltration (7 and 31 PMN/10 HPF). PMN evaluation showed a median number of 1 PMN/10 HPF (min.-max.: 0-31) in vital wounds and 1 PMN/10 HPF (min.-max.: 0-3) in non-vital wounds (Table 1). No significant difference was found between vital and non-vital wounds, in either autopsy cases or surgical wounds (p = 0.557 and p = 0.294, respectively). A significant correlation between the number of PMN/10 HPF and the survival/pre-devascularization time was found in surgical wounds (rho = 0.424; p = 0.031), but not in autopsy wounds (rho = 0.135; p = 0.511). With a threshold of at least 4 PMN/10 HPF, the sensitivity and specificity of the PMN count for the diagnosis of vitality were 16 and 100%, respectively.

Interstitial Hemorrhage

With standard histology, a significant interstitial hemorrhage was noticed in 88% of ante-mortem wounds (Figure 1A) vs. 44% of post-mortem wounds and 17% of control skin samples.
Using the anti-glycophorin C antibody, an interstitial hemorrhage was noticed in 96% of ante-mortem wounds vs. 56% of post-mortem wounds and 25% of control skin samples. In the surgical model, standard histology and the anti-glycophorin C antibody showed an interstitial hemorrhage in, respectively, 88 and 96% of pre-devascularization wounds vs. 54 and 84% of post-devascularization wounds. For the diagnosis of vitality, the sensitivity and specificity of the identification of interstitial hemorrhage on HES slides were 88 and 66%, respectively. With the anti-glycophorin C antibody, sensitivity rose to 96%, with a specificity of 50%.

The ROC curve for the diagnosis of vitality showed that the area under the curve was higher for CD15 (0.78) than for MPO (0.69) and the standard PMN count (0.58) (Figure 2). With a threshold of at least 4 positive cells/10 HPF, the sensitivity and specificity of CD15 immunohistochemistry were 53 and 100%, respectively; with the same threshold, MPO sensitivity and specificity were 28 and 95%. With a threshold of at least 2 positive cells/10 HPF, the sensitivity of CD15 reached 65%, but with a lower specificity (81%). For MPO, sensitivity and specificity were 51 and 81%. The numbers of MPO- and CD15-positive cells were significantly correlated with the standard histological PMN count (rho = 0.339, p < 0.001; and rho = 0.333, p < 0.001, respectively). In the surgical model, the numbers of MPO- and CD15-positive cells were significantly correlated with survival/pre-devascularization time (rho = 0.400, p = 0.043; and rho = 0.559, p = 0.003, respectively), but not in autopsy wounds (p > 0.50).

DISCUSSION

In the first minutes or hours, standard histological examination may not be able to determine whether a wound was inflicted in the pre- or post-mortem period. While hemorrhagic infiltration was classically considered a sign of vital reaction, several studies have shown that the extravasation of blood cells can also occur after death and does not represent a reliable marker for wound vitality diagnosis (8)(9)(10). Various methods can be used to detect markers of vitality, such as the study of mRNAs or microRNAs (RT-PCR, in situ hybridization) and proteins (ELISA, Western blot, immunofluorescence, immunohistochemistry), focusing on the different phases of inflammation and wound healing (3). Most notably, in skin lesions, several studies of cell adhesion molecules (ICAM-1, VCAM-1, P-selectin, E-selectin, fibronectin, etc.) have been published, reporting a good sensitivity in recent wounds for some markers, but limited by a significant risk of post-mortem false positivity (3,(11)(12)(13)(14)(15)(16). More recently, the use of microRNAs in forensic science has been proposed in various applications, including wound vitality, showing interesting but still preliminary results for a few microRNAs (17)(18)(19).

In this study, we propose an original method for the detection of early inflammation, based on an immunohistochemical double staining of leucocytes and red blood cells. We tested two markers of PMN: MPO and CD15. CD15 showed a higher sensitivity than MPO, which may be explained by its ability to also detect activated monocytes (20), in addition to PMN. Staining for CD15 was previously reported as a marker of early vital reaction, with most studies focusing on brain trauma or other organs (7,21,22).
However, as the timing and the intensity of inflammation may be influenced by the type of trauma and the nature of the lesioned tissue (1,23), these studies are not directly applicable to skin injuries. In skin, we previously studied this marker in surgical and medicolegal wounds (5), and it was also reported to be an interesting marker in ligature marks (6,24), for the assessment of vitality in corpse dismemberment (25), and in decomposed bodies (26). Compared with other methods, IHC staining of inflammatory cells allows the pathologist to have a morphological control of the signal, i.e., the recognition of leucocyte shape and the precise localization within the sample.

In addition to anti-CD15 or anti-MPO antibodies, we performed a double staining with the anti-glycophorin C antibody. Glycophorin C, like glycophorins A and D, is a sialylated glycoprotein of human erythrocyte membranes. Anti-glycophorin immunohistochemistry has been proposed in various studies for the identification of hemorrhagic infiltration, most notably in decomposed bodies or in specific conditions, such as Amussat's sign or retinal hemorrhage (27)(28)(29)(30)(31). In our study, the association with the anti-glycophorin C antibody limits the risk of counting leukocytes originating from passive extravasation of PMN in hemorrhagic infiltration, because red blood cells are more difficult to detect on IHC slides. It may also allow the wound margins to be detected more easily.

The limit of this method is a relatively low sensitivity in very recent wounds, albeit higher than that of standard histology. The sensitivity is closely related to the type of wound or experimental model. We included in the present study only recent wounds, with a survival or pre-devascularization time of a few seconds or minutes in a significant number of cases, which may explain the low sensitivity, whatever the method. In the same series of medicolegal wounds, we previously found a sensitivity of 21% for the evaluation of IL8 staining, which reached 46% when using IL8 in a multiplex immunoassay normalized on healthy skin levels (32,33). Hence, we can conclude that IHC is less sensitive than immunoassay, but the latter has the disadvantages of needing fresh frozen tissue and of being a method that is not as widely established as IHC, requiring training sets and data normalization, without morphological control.

Measures of test accuracy such as sensitivity and specificity depend crucially on the selected threshold, and the optimal value of this threshold is a key question for forensic practice. Given the fact that a 100% sensitivity is probably unreachable in very recent wounds, we aimed to obtain a theoretical 100% specificity, to strictly avoid false positivity and obtain a high positive predictive value for vitality assessment. A threshold of 4 CD15-positive cells per 10 HPF allowed a 100% specificity to be obtained and would probably be more relevant than a lower threshold, which risks false positivity, albeit with better sensitivity. In a previous study in which we compared CD15, FVIIIra, and tryptase in medicolegal stab wounds showing inflammation and in surgical specimens (breast reductions), we found similar results for CD15, with the same threshold of 4 cells per 10 HPF (sensitivity: 47%; specificity: 100%) (5). CD15 also had the advantage of showing a very good inter-observer reproducibility (0.90) (5).
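The threshold trade-off discussed above can be made explicit with a short computation. The sketch below uses invented cell counts, not the study data, to show how raising the decision threshold exchanges sensitivity for specificity.

```python
# Each entry is (positive cells per 10 HPF, wound is vital?); the values
# are fabricated placeholders chosen only to illustrate the trade-off.
samples = [(0, False), (1, False), (3, False), (0, True),
           (2, True), (4, True), (7, True), (31, True)]

def sens_spec(samples, threshold):
    """Sensitivity and specificity for 'call vital if count >= threshold'."""
    tp = sum(1 for n, vital in samples if vital and n >= threshold)
    fn = sum(1 for n, vital in samples if vital and n < threshold)
    tn = sum(1 for n, vital in samples if not vital and n < threshold)
    fp = sum(1 for n, vital in samples if not vital and n >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

for t in (2, 4):
    se, sp = sens_spec(samples, t)
    print(f"threshold >= {t}/10 HPF: sensitivity {se:.0%}, specificity {sp:.0%}")
```

Sweeping the threshold over all observed counts and plotting sensitivity against (1 - specificity) is exactly how the ROC curves reported above are constructed.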
CONCLUSION

In conclusion, based on a series of recent medicolegal wounds associated with post-mortem controls and an experimental human model of surgical wounds, we showed that combined CD15/glycophorin C double IHC staining is an interesting and original method to detect early vital reaction. In comparison with standard histology and MPO staining, CD15 allowed a significantly higher, albeit still limited, sensitivity to be obtained, with a high specificity. Confirmation studies in independent and larger cohorts are still needed in order to confirm its accuracy in forensic pathology.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The post-mortem study protocol was approved by the French Agency of Biomedicine, Nr. PFS15-003. The surgical model protocol was approved by the review board of the Direction of Research and Innovation, CHRU of Nancy, France (CPRC2013, DRCI, CHRU Nancy), and written consent was obtained from patients for the use of surgical specimens (study promoter: CHRU Nancy, CPRC2012; sample collection DC2008-459). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

GG and P-AP designed the study, collected the samples, analyzed the data, and wrote the manuscript. AB collected the samples and the data and analyzed the histological preparations. SC, MB, and ES collected the samples. LM, EL, and PC participated in the elaboration of the study design. All authors agree to be accountable for the content of the work.

FUNDING

This work was supported by the CHRU Nancy, INSERM U1256 NGERE, and the CHRU Montpellier.
Immune Phenotypes of Nasopharyngeal Cancer

Simple Summary

As for many solid cancers, nasopharyngeal cancer (NPC) interacts with the immune system. In this retrospective study, immune features of NPC were explored and assessed against Epstein-Barr virus status, clinical stage, and survival. Specific immune phenotypes were identified based on the presence and distribution of CD8+ T-cells, i.e., "inflamed", "excluded", and "deserted" NPC, which carried important prognostic information. Presence and distribution of CD207+ cells, likely representing antigen-presenting dendritic cells, were demonstrated, suggesting a potential for immune cell targeting. Gene expression revealed differences in immune profiles between NPC and control tissue as well as between subgroups of NPC based on CD8 expression (high vs. low). Taken together, the observations may be of relevance to prognostication of NPC as well as for explorations into the field of immunotherapy.

Abstract

Nasopharyngeal cancer (NPC) features intralesional immune cells, but data are lacking on the presence/distribution of T-cells and dendritic cells (DCs). Based on the intralesional distribution of lymphocytes, a series of NPC biopsies (n = 48) was classified into "inflamed", "excluded", and "deserted" phenotypes. In addition, CD8+ T-cells and CD207+ DCs were quantified. The data were analyzed in relation to Epstein-Barr virus-encoded small RNA (EBER), Epstein-Barr virus (EBV) DNA, and survival. Separately, data on gene expression from a public database were analyzed. 61.7% of NPC lesions were "inflamed", 29.8% were "excluded", and 8.5% were "deserted". While CD8+ cells were present in cancer cell areas and in the surrounding stroma, CD207+ cells were observed largely in cancer cell areas. High CD8+ T-cell presence was associated with EBV+ disease, but no such pattern was observed for CD207+ DCs. There was a difference in disease-free survival in favor of "inflamed" over "excluded" NPC. Gene expression analysis revealed differences between NPC and control tissue (e.g., with regard to interferon activity) as well as between subgroups of NPC based on CD8 expression (high vs. low). In conclusion, NPC lesions are heterogeneous with regard to the distribution of CD8+ T-cells and CD207+ DCs. NPC can be classified into immune phenotypes that carry prognostic information. CD207+ DCs may represent a target for immunotherapy, with potential to facilitate the antigen cross-presentation necessary to execute cytotoxic T-lymphocyte responses.

Introduction

There is a need for new prognostic options and treatment principles for nasopharyngeal cancer (NPC), a malignancy frequently associated with Epstein-Barr virus (EBV), and measures targeting the immune system may offer such possibilities. However, this requires detailed knowledge about the cancer and its local microenvironment, particularly on the presence and distribution of intralesional immune cells and their targets and functions. In a recent study by Wang et al., the importance of the immune status of NPC was underscored: focusing on the immune checkpoints PD-L1 and B7-H4 on tumor cells and PD-L1, B7-H3, B7-H4, IDO-1, VISTA, ICOS, and OX40 on intralesional immune cells, specific signatures were demonstrated to predict survival [1].
Interestingly, the association was particularly strong for patients with high levels of EBV DNA in plasma, suggesting the importance of EBV-antigen presence to the cancer-immune system interaction and indicating a need to monitor EBV status or even antigen levels in studies focusing on aspects of the immune system in NPC. Furthermore, in an even more recent study, Chen et al. demonstrated that specific gene signatures of macrophages, plasmacytoid dendritic cells (DCs), CLEC9A+ DCs, natural killer cells, and plasma cells were associated with improved progression-free survival [2].

With regard to interactions between the immune system and NPC, tissue-infiltrating lymphocytes (TILs) are of importance. The density/distribution of TILs has been investigated and identified as an independent positive prognostic factor [3]. Arguably, this lymphocyte population includes T-cells that, if appropriately instructed by DCs (e.g., via adjuvant targets such as C-lectin receptors), can facilitate antigen cross-presentation and can produce cytotoxic T-cell (CTL) responses [4,5]. Furthermore, the presence and distribution of lymphocytes, notably T-cells, as demonstrated for other cancers and indicated to predict, e.g., response to immunotherapy [6], may allow for classification of NPC into specific immune phenotypes. In addition to T-cell observations, following morphological findings suggesting a presence of DCs in NPC lesions [7][8][9][10][11][12], we recently reported on intralesional DC subsets and particularly highlighted CD1c+ myeloid cells expressing the C-lectin receptor CD207 [13]. However, taken together, data on the presence of T-cells, in particular CD8+ T-cells, and CD207+ DCs in NPC are scarce. Furthermore, it is unknown whether immune phenotypes exist in NPC and, if so, whether they influence survival.

In this study, biopsy material from patients with NPC, on which we have previously reported data on intralesional levels of EBV DNA and survival [14], was revisited. In order to assess aspects of tumor heterogeneity, overall histology was examined focusing on cancer cells, cytokeratin, and Epstein-Barr virus-encoded small RNAs (EBERs). Based on the presence and distribution of lymphocytes (notably CD8+ cells), the material was classified into immune phenotypes. In addition, using a digital image technique, CD8+ and CD207+ cells were quantitatively assessed. The impact on survival of the immune phenotypes as well as of CD8+ T-cells and CD207+ DCs was determined. The data were also analyzed in relation to intralesional EBV DNA. Finally, data on gene expression in NPC and healthy control tissue were retrieved from a public database [15,16] and analyzed in silico, focusing on CD8+ low and CD8+ high T-cell subgroups of NPC.

Material Availability and Success Rate with Regard to Analyses

For all 48 patients, EBER slides were retrieved, and new slides were produced for visualization of CD8+ and CD207+ cells, respectively. One individual was excluded from the analysis due to insufficient material for immunohistochemistry (i.e., the obtained tissues were deemed too small and of poor quality); this patient was incidentally also the only one lacking follow-up. For the remaining 47 patients, it was possible to perform all but three analyses. Two of these were related to a case of a heavily EBER-positive tumor with dense infiltrates of lymphocytes, making a distinction of non-tumor-infiltrated surrounding stroma (CD8 and CD207 analyses) too unreliable.
The third was an analysis of CD207 where areas of surrounding stroma were lacking.

Overall Histology and Cytokeratin/EBER Immunohistochemistry

In Figure 1, selected NPC lesions are presented, focusing on the presence and distribution of cancer cells, cytokeratin, and EBERs. With regard to cancer cells, a marked inter- and intraindividual heterogeneity was observed. Accordingly, some NPC lesions (or parts of lesions) featured multiple but aggregated areas of cancer cells with pushing borders, whereas others (or other parts) were characterized by cancer cells infiltrating the surrounding stroma. A heterogeneity was seen also for the presence and distribution of cytokeratin (an epithelial cell marker), in this context a marker of cancer cells.

Figure 1. Overall histological view for three selected patients: Patient 1 with EBER-positive NPC (a-c), Patient 2 with EBER-negative NPC (d-f), and Patient 3 with EBER-positive NPC (g-j). A marked heterogeneity was seen between patients (all panels) and within a single sample (g-j), reflected as varying expression of both CK and EBER. The stainings used were Mayer's hematoxylin in combination with CK (green) and CD207 (brown) (a,c,d,f,g,i) and Red Counterstain II with EBER (black) (b,e,h,j). Colored horizontal bars indicate size: red = 3 mm, green = 100 µm, orange = 2 mm, and blue = 900 µm. Arrowheads (g,i) denote cancer cells expressing CK and EBER (gray), cancer cells with loss of CK (black), and normal CK staining of epithelium as comparison (white). Abbreviations: EBER = Epstein-Barr virus-encoded small RNAs, NPC = nasopharyngeal cancer, and CK = cytokeratin.

2.3. Immune Phenotypes: "Inflamed", "Excluded", and "Deserted"

Immune phenotypes of the NPC lesions were estimated based on qualitative assessment of the presence and distribution of lymphocytes on slides stained for hematoxylin, cytokeratin, and CD8. Most lesions (61.7%) were of an immune "inflamed" phenotype (lymphocytes infiltrating cancer cell areas). Immune "excluded" (lymphocytes in the surrounding stroma but not infiltrating cancer cell areas) and "deserted" phenotypes (no lymphocytes in either cancer cell areas or the surrounding stroma) were less frequent: 29.8% and 8.5%, respectively. In Figure 2, selected NPC lesions are presented, demonstrating these three immune phenotypes.
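For illustration only, the qualitative phenotype assignment described above can be caricatured as a simple decision rule; the density thresholds below are invented and do not come from the study, which relied on visual assessment of stained slides.

```python
def immune_phenotype(lymph_in_tumour: float, lymph_in_stroma: float) -> str:
    """Classify a lesion from lymphocyte densities (cells/HPF, hypothetical)."""
    if lymph_in_tumour >= 5:     # lymphocytes infiltrate cancer cell areas
        return "inflamed"
    if lymph_in_stroma >= 5:     # lymphocytes confined to surrounding stroma
        return "excluded"
    return "deserted"            # no lymphocytes in either compartment

print(immune_phenotype(20, 30))  # inflamed
print(immune_phenotype(1, 25))   # excluded
print(immune_phenotype(0, 1))    # deserted
```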
CD8 and CD207: Quantification in Whole Biopsies

Following digital image dissection of whole biopsies (excluding artefacts, normal epithelium, germinal centers, and gland structures), the areas eligible for analysis of CD8 and CD207, respectively, were 14.8 (6.5-28.7) mm² and 12.3 (5.5-22.4) mm². The frequencies of pixels representing CD8 and CD207, respectively, were 2.17% (0.73-3.18) and 0.15% (0.040-0.58). There was no correlation between the CD8 and CD207 ratios. The morphological distribution patterns of CD8 and CD207 are depicted in Figure 3, and the pixel frequencies are indicated in Figure 4a. Based on the ratios of pixels representing CD8, there were differences between the immune phenotypes: "inflamed" and "excluded" (p = 0.034, higher frequency for "inflamed"), "inflamed" and "deserted" (p = 0.0020, higher frequency for "inflamed"), as well as "excluded" and "deserted" (p = 0.022, higher frequency for "excluded"). No such differences were observed for the ratios of pixels representing CD207 (Figure 5a).

Figure 3. Staining patterns of CD8 and CD207 (brown). Left panels indicate the distribution of CD8+ cells, and right panels indicate CD207+ cells on the following section. The patterns shown are (a,b) low in CD8 and high in CD207, (c,d) high in CD8 and low in CD207, (e,f) high in both CD8 and CD207, and (g,h) low in both CD8 and CD207. CD8 expression was in general higher than CD207 expression. Green horizontal size bars indicate 100 µm. Abbreviations: NPC = nasopharyngeal cancer and CK = cytokeratin.

Figure 5. CD8 and CD207 frequency (%) of whole biopsies in relation to immune phenotypes, presented as boxplots (median and IQR, with whiskers denoting 1.5 IQR and outliers as circles). (a) CD8 ratios differed between immune phenotypes: "inflamed" and "excluded" (p = 0.034, higher frequency for "inflamed"), "inflamed" and "deserted" (p = 0.0020, higher frequency for "inflamed"), and "excluded" and "deserted" (p = 0.022, higher frequency for "excluded"). There was a difference in CD207 ratios between areas of cancer cells and the surrounding stroma (p < 0.0001). In (a), an extreme outlier (frequency: 24%), deemed an accurate observation, is indicated with a red circle and an upwards-pointing red arrow. Abbreviation: IQR = interquartile range.

CD8 and CD207: Quantification in Areas of Cancer Cells and Surrounding Stroma

For quantification of CD8+ cells, defined areas were selected representing the surrounding stroma, and the frequency of pixels was 2.27% (0.91-4.94) (Figure 4b). For quantification of CD207+ cells, defined areas were selected representing cancer cells and the surrounding stroma. The frequencies of pixels representing CD207 were 0.25% (0.040-1.39) for areas of cancer cells and 0.030% (0-0.065) for the surrounding stroma, representing an 8-fold difference (p < 0.0001) (Figure 4c). There was no statistically significant correlation between the ratios of CD8 or CD207 in defined areas (grouped by median values) and immune phenotypes. There was no correlation between the CD8 and CD207 ratios in the surrounding stroma.

Clinical Performance Based on Immune Phenotype and Presence of CD8 and CD207

The "deserted" phenotype comprised four NPC lesions that were all EBER-negative. Out of these, three featured spread disease at diagnosis (stage IVC) and one featured an advanced local lesion (stage IVA).
There was no difference in disease-specific survival (DSS) between immune phenotypes, but there was a statistically significant difference in disease-free survival (DFS) between the "inflamed" and "excluded" phenotypes (p = 0.0090) (Figure 6), the latter presenting the poorest prognosis. Since three out of four cases of the immune "deserted" phenotype presented as spread disease, this subset was not included in the DFS analysis. There was no difference in DSS or DFS between high and low quantities of CD8 and CD207 (grouped by median levels).

Figure 6. Kaplan-Meier estimates of (a) DSS and (b) DFS for NPC based on immune phenotypes indicated a better prognosis in terms of DFS for the "inflamed" subtype compared to the "excluded" (p = 0.0090). Since three out of four patients with the immune "deserted" phenotype presented with spread disease, this subset was not included in the DFS analysis. No differences were observed in the DSS analysis. Abbreviations: DSS = disease-specific survival and DFS = disease-free survival.
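As a schematic of how such a DFS comparison is computed, the sketch below uses the lifelines library with fabricated follow-up times and event indicators (not the study data) to fit Kaplan-Meier curves per phenotype and run a log-rank test.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Fabricated follow-up times (months) and recurrence flags (1 = event).
t_inf, e_inf = [60, 55, 48, 72, 66, 30, 80], [0, 0, 1, 0, 0, 1, 0]
t_exc, e_exc = [12, 20, 35, 8, 40, 15], [1, 1, 0, 1, 1, 1]

km_inf = KaplanMeierFitter().fit(t_inf, e_inf, label="inflamed")
km_exc = KaplanMeierFitter().fit(t_exc, e_exc, label="excluded")
ax = km_inf.plot_survival_function()   # overlay the two survival curves
km_exc.plot_survival_function(ax=ax)

res = logrank_test(t_inf, t_exc, event_observed_A=e_inf, event_observed_B=e_exc)
print(f"log-rank p-value: {res.p_value:.4f}")
```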
Cancer Stage in Relation to Immune Phenotypes and to CD207 and CD8

In line with the findings that cases with a "deserted" phenotype were all stage IV disease at diagnosis (n = 4) and that cases of stage I disease (n = 4) were all of the "inflamed" phenotype, there was a difference between stage I-III and stage IV disease with regard to phenotypes (p = 0.042). No statistically significant associations were observed for other selected stage combinations or T-stage differences. There were no statistically significant differences between stage or stage combinations and CD8+ or CD207+, either for whole biopsies or for selected regions.

Figure 8. Differences in intralesional EBV DNA load (copies/cell) for immune phenotypes are shown (medians and IQR, with whiskers denoting 1.5 IQR and outliers as circles). A marked difference in DNA load was present between the "inflamed" and "deserted" phenotypes (p = 0.00034). The differences between "inflamed" and "excluded" and between "excluded" and "deserted" were not significant, though a trend was seen (p = 0.055 and p = 0.079, respectively). Higher-load outliers (range 1237-94617 copies/cell) are indicated with red circles and upwards-pointing red arrows. Abbreviation: EBV = Epstein-Barr virus.

Cell Type-Specific Gene Expression in NPC

Transcriptional data available from the gene expression omnibus (GEO) database, including mRNA profiles of 31 NPC samples and 10 control nasopharyngeal samples, were immunoprofiled in silico and further assessed based on CD8+ T cell-related transcripts. Utilization of the CD8+ T-cell signatures, based on cell profiling scores from Puram et al. and Newman et al. via CIBERSORTx [17,18], enabled a subdivision of the NPC samples into groups of high and low scores. The immune profiling scores displayed a significant increase in CD4+ T cells in controls cf. both CD8+ high and low NPC, a higher fibroblast score in CD8+ low NPC cf. controls, and a significant increase in macrophages for CD8+ high NPC cf. controls (Figure 9a). On conducting a similar analysis of 22 immune cell populations [17], enhanced expression of the signatures of CD4+ memory activated T cells and M1 macrophages was evident in the CD8+ low and CD8+ high groups cf. controls. Further, the relative intensity weights for the signature of natural killer (NK)-activated cells were increased in CD8+ high NPC cf. controls. In contrast, signatures for the naïve and memory B cells were significantly lower in both NPC groups cf. controls (Figure 9b). The DCs (resting and activated) could not be determined for NPC cf. control tissue.

As the CD8A gene is the most representative functional gene associated with CD8+ T cells [19], further investigation of genes correlated to CD8A was conducted. The analysis revealed 25 positively and 1 negatively correlated genes to CD8A. The expression score of these genes with the 10 immune populations [18] showed a significant association with CD4+ T cells and mast cells. In addition, LAG3
was relatively correlated to the signatures of macrophages, B cells, and malignant cells. The expression levels of the CD244 gene, a cell surface receptor expressed on NK cells, T cells, and DCs [20,21], were associated with macrophages, DCs, and malignant cells. The only negatively correlated gene, i.e., TALDO1, associated with fibroblasts as well as malignant cells and DCs (Figure 9c). Interestingly, for the 22 immune cell populations, all positively correlated genes exhibited significant association with gamma delta T cells, and TALDO1 with the M2 macrophage population (Figure 9d). Gene expression data analysis showed that 48% (15/31) of the NPC cases displayed an interferon signature [22,23], in contrast to the remaining NPC samples and control tissue (Figure 9e). Out of these, 12 belonged to the CD8+ high group.
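The subgrouping step above amounts to a median split on CD8A expression followed by group comparisons of deconvolution scores. A minimal sketch with fabricated values (not the GEO data) is given below.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Fabricated CD8A expression for 31 NPC samples, plus a fabricated
# immune-cell signature score (e.g., a CIBERSORTx output column).
cd8a = rng.lognormal(mean=1.0, sigma=0.8, size=31)
macrophage_score = 0.3 * cd8a + rng.normal(0, 0.5, 31)

high = cd8a >= np.median(cd8a)   # median split: CD8+ high vs. CD8+ low
stat, p = mannwhitneyu(macrophage_score[high], macrophage_score[~high])
print(f"CD8-high n={high.sum()}, CD8-low n={(~high).sum()}, "
      f"Mann-Whitney p={p:.3g}")
```

A rank-based test is a natural choice here, mirroring the non-parametric comparisons used elsewhere in the study for small, skewed samples.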
Based on presence and distribution of lymphocytes as established for other cancers [6], the lesions were classified into specific immune subsets, which carried prognostic information: DFS was better for the "inflamed" than for the "excluded" phenotype, while the "deserted" phenotype, arguably with the worst prognosis, was not eligible for analysis due to a majority of cases presenting with spread disease. CD8+ cells were present in areas of cancer cells and in the surrounding stroma, whereas CD207+ cells were observed largely in areas of cancer cells. The ratio of CD8+ cells were higher for EBV-positive cf. EBV-negative NPC. In contrast, no such differences were observed for CD207. Gene expression analysis revealed differences between NPC and control tissue (e.g., with regard to interferon activity) as well as between subgroups Discussion In this study, primary biopsies from patients with NPC were analyzed. A marked inter-and intraindividual variation was observed with regard to growth pattern of cancer cells as well as to presentation of cytokeratin, EBERs, and immune cells. Based on presence and distribution of lymphocytes as established for other cancers [6], the lesions were classified into specific immune subsets, which carried prognostic information: DFS was better for the "inflamed" than for the "excluded" phenotype, while the "deserted" phenotype, arguably with the worst prognosis, was not eligible for analysis due to a majority of cases presenting with spread disease. CD8+ cells were present in areas of cancer cells and in the surrounding stroma, whereas CD207+ cells were observed largely in areas of cancer cells. The ratio of CD8+ cells were higher for EBV-positive cf. EBV-negative NPC. In contrast, no such differences were observed for CD207. Gene expression analysis revealed differences between NPC and control tissue (e.g., with regard to interferon activity) as well as between subgroups of NPC based on CD8 expression (high vs. low). Taken together, the observations may be of relevance to prognostication of NPC as well as for explorations into the field of immunotherapy. The notion that head and neck cancer lesions, including NPC, are heterogeneous in nature (e.g., Wang et al. [3]), was confirmed by this study. With regard to patterns of cancer cell growth, this was evident for inter-as well as intraindividual comparisons. Inferentially, a similar heterogeneity was observed for cytokeratin (which in this context may be viewed as a cancer cell marker), which is typically expressed by NPC cells [24]. These observations, in combination with occasional findings of diffuse EBER patterns, indirectly suggest the possibility that intralesional cancer-associated antigen-levels, as previously suggested through analysis of EBV-DNA [14], and immune responses may also vary considerably. In this study, such a marked variability was observed for the presence and distribution of CD8+ T-cells and CD207+ DCs. Taken together, our observations suggest that NPC heterogeneity, including immune cell aspects, must be taken into account when, e.g., prognostic information is explored and candidate treatment targets are selected for this condition. Arguably, even individual information may be of importance to future immunological treatment possibilities. The use of digital image "microdissection" and quantitative digital analysis in this study represents efforts in that direction. 
However, a limitation of the technique is that it does not allow for quantitation of CD8+ cells in cancer nodules, due to difficulties in discriminating between areas of cancer cells infiltrated by lymphocytes and the lymphocyte-rich surrounding stroma.

In this study, as previously suggested for other cancers [6,25], the presence and distribution of lymphocytes (notably CD8+ T-cells) were utilized to classify NPC into specific immune phenotypes. The majority of NPC lesions were of an immune "inflamed" nature (61.7%), with lymphocytes infiltrating areas of cancer cells, while the minority were either immune "excluded" (29.8%) or "deserted" (8.5%), a classification that was verified by quantitative analysis of CD8+ T-cells. Of the immune phenotypes, the "deserted" subset was associated with the poorest prognosis. A high frequency of spread disease at diagnosis excluded this subset from an analysis of DFS. However, a clear difference in DFS was observed between the "inflamed" and the "excluded" phenotypes (in favor of the former). DSS showed a similar pattern, but the differences failed to reach statistical significance. We suggest that immune phenotype information should be evaluated further as a prognostic marker for NPC and in the context of treatment selection: for example, whether to combine first-line treatment (i.e., radiotherapy) with chemotherapy for the "deserted" phenotype even in low-stage disease, or to offer checkpoint inhibitor immunotherapy to patients with NPC manifesting the "inflamed" phenotype, since observations in other types of cancers suggest that this phenotype is associated with a positive treatment response to such interventions [26,27]. Our observations extend previous work in the field, for example, on the impact of TILs in NPC and their potential as prognostic markers [3]. However, a limitation of our study is its sample size, which does not allow for a comprehensive analysis of survival.

The DC is specialized in antigen presentation, and for a successful adaptive immune response to occur, whether through the immune system's own capacity or facilitated by "vaccinations", this cell type needs to be present and active. Through this study, as CD207 is considered a selective DC marker [28,29], our previous observation that NPC lesions feature CD207+ DCs was confirmed (likely reflecting CD1c+ myeloid cells) [13]. Importantly, our present observations added the information that these cells were largely distributed within the areas of cancer cells, i.e., the frequency of CD207+ cells was 8-fold greater in this compartment cf. the surrounding stroma. However, while CD207+ DCs were present in close relation to cancer cells, this was apparently not enough to induce a meaningful immunological response targeting cancer antigens that would prevent NPC from developing or eliminate established lesions. We suggest that the DC presence in NPC represents treatment possibilities and that the C-lectin receptor CD207, and potentially other pattern recognition receptors known to facilitate cross-presentation of antigen [4,5], may be adjuvant targets. However, additional DC subsets must also be considered, and any target and its function must be considered in relation to the overall milieu of the NPC lesion, including aspects that exert immunosuppressive actions that may prevent antigen presentation. Furthermore, whether CD207 has a role in EBV-specific T-cell responses remains to be shown.
The interplay between the immune system and cancer cells in NPC is not sufficient to eradicate the disease, despite the fact that antigens are present and that key immune cells such as DCs and T-cells are available. More information is needed, and the present gene expression analysis, performed on available transcriptional data from EBV-positive NPC and control tissue [17,18], highlighted some immune features associated with NPC. High expression of signatures related to interferon activity, M1 macrophages, and CD4+ memory activated T-cells was observed for NPC (cf. controls), albeit with marked heterogeneity between NPC samples. When comparing CD8 high and CD8 low NPC with controls, high fibroblast scores were revealed for CD8 low NPC. Furthermore, a high activated Natural Killer (NK) cell profile was observed for CD8 high NPC. (DCs could not be determined for NPC cf. control tissue.) Taken together, our observations suggest that there are immunological subsets of NPC and distinctions between NPC rich in CD8 (likely representing an "inflamed" phenotype) and low in CD8 ("excluded" and particularly "deserted" phenotypes). However, the gene expression data are difficult to put into a greater context given the lack of synchronous morphological assessment and clinical data, reflecting a general problem with gene expression data extracted from public databases.

In a previous study, we explored the association between intralesional EBV DNA and survival [14]. When the material was split at a level of 70 copies of EBV DNA per cell, higher levels predicted a greater DFS. In agreement, in this study, when intralesional EBV DNA load was assessed in relation to immune phenotypes, high loads were associated with the "inflamed" phenotype and low loads with the "deserted" phenotype. For reasons yet to be defined, the associations with immune phenotypes appeared stronger when focusing on EBER than on EBV DNA. Similarly, the association between CD8 ratios and "EBV status" was stronger for EBER than for EBV DNA. Taken together, our observations confirm that overall immune features of NPC, notably the presence of CD8+ T-cells, depend on the presence of EBV. We suggest that analyses of EBV should be included whenever immune features of NPC are examined, and that intralesional EBV DNA should be complemented by analysis of EBER.

Study Design and Patients

The study was of a retrospective design and involved an analysis of formalin-fixed paraffin-embedded (FFPE) primary tumor tissue from a well-defined population of 48 patients with NPC diagnosed between 2001 and 2015. Data from this material have been reported previously, focusing on intralesional EBV DNA and survival [14]. Through immunohistochemistry, and based on lymphocyte presence and distribution, the tumors were classified into immune phenotypes. Furthermore, CD8+ and CD207+ cells were assessed using a quantitative digital image technique. The data were analyzed in relation to EBER, intralesional EBV DNA, clinical stage, and survival. Approval was granted by the Ethics Committee at Lund University (ref. no. 2014/117). In accordance with the approval, informed consent was not required, but the study was advertised in printed media with a possibility to opt out. Separately, gene expression data on NPC and control tissue were retrieved from a public database and analyzed in silico.

Clinical Characteristics

Patient characteristics have been previously reported [14].
Briefly, out of the 48 available NPC patients, one was lost to follow-up, and the median follow-up of the remaining 47 patients was 6.4 years; 31% of the patients were diagnosed with T1 lesions, 85% were N-positive, and 19% featured distant disease at the time of diagnosis (UICC's TNM classification system, 7th version). The 5- and 7-year overall survival (OS) rates were 75% and 65%, respectively. At histopathological examination, 75% of the tumors were EBER-positive non-keratinizing cancers. Eighty-three percent of the patients presented EBV DNA-positive lesions. Clinical data on the material retrieved for gene expression analysis in silico were restricted to diagnosis, EBER status (all were EBER-positive), and stage (UICC's TNM classification system, 6th version). In the latter material, comprising data on 31 NPC patients, 21 featured stage I and II disease, 10 stage III disease, and none stage IV disease.

Immunohistochemistry

A modified double-staining immunohistochemistry protocol (EnVision Doublestain, Dako/Agilent, Glostrup, Denmark) was applied in order to simultaneously identify cytokeratin-positive cancer cells together with CD207+ DCs or CD8+ T cells. Briefly, 4-µm rehydrated sections were subjected to double-staining immunohistochemistry in the automated Autostainer IHC robot (Dako/Agilent) using the EnVision Doublestain System kit K5361 (Dako/Agilent). Prior to immunohistochemistry, an antigen retrieval procedure was performed in a PT-link machine for heat-induced epitope retrieval (HIER) using low-pH (pH 6) retrieval buffer. Endogenous peroxidase activity was blocked with H2O2. Sections were then incubated with an anti-CD207 (clone 12D6, Novocastra/Leica, Newcastle, UK; dilution 1:300) or anti-CD8 (clone C8/144B, Dako/Agilent, Glostrup, Denmark; dilution 1:400) primary antibody for 1 h at room temperature. After subsequent incubation with polymer/HRP-linked secondary antibodies for 30 min, the immunoreactivity was visualized using 3,3'-diaminobenzidine (DAB) HRP chromogen (resulting in a brown opaque staining). Next, sections were incubated with Dako Double Stain Blocking Reagent (Dako/Agilent) to prevent additional binding of secondary antibodies to the first primary antibody. Sections were then incubated with an anti-cytokeratin primary antibody (clone CK AE1/AE3, Novocastra/Leica, dilution 1:300). This second-round immunoreactivity was also visualized with a polymer/HRP-linked secondary antibody (Dako/Agilent, dilution 1:300), but the visualization was performed with Vina Green HRP chromogen (Biocare Medical, cat. BRR807A, Pacheco, CA, USA). Finally, sections were counterstained with Mayer's hematoxylin, air-dried, and mounted with Pertex (Histolab Products, Gothenburg, Sweden). In addition to the protocols above, previously stained slides for EBER evaluation (EBER-ISH for EBER1 and EBER2) were retrieved [14]. All slides were digitalized using the automated Scanscope XT digital slide scanner (Aperio Technologies, Vista, CA, USA). Evaluation and quantification were performed using the Aperio Imagescope software (Aperio Technologies).
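As context for the quantitative digital analysis described in the following sections, below is a minimal, hypothetical sketch of how a positive-pixel ratio of this kind can be computed from a scanned RGB field. The study itself used Aperio's Positive Pixel algorithm; the color thresholds here are invented for illustration and do not correspond to the thresholds actually applied.

import numpy as np

def positive_pixel_ratio(rgb, min_rgb=(60, 30, 10), max_rgb=(200, 140, 100),
                         background_luma=230):
    # rgb: H x W x 3 uint8 field from a scanned slide (hypothetical input).
    # Pixels brighter than background_luma are treated as glass/background
    # and excluded from the analyzed tissue area (the denominator).
    arr = rgb.astype(np.float32)
    luma = arr.mean(axis=2)
    tissue = luma < background_luma
    # "Positive" pixels fall inside an invented brown (DAB chromogen) range.
    positive = (np.all(arr >= np.array(min_rgb), axis=2)
                & np.all(arr <= np.array(max_rgb), axis=2)
                & tissue)
    return positive.sum() / max(tissue.sum(), 1)

The returned fraction corresponds to the "ratio of the total analyzed tissue area with staining-positive pixels" described in the quantification section below.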
Immunoprofiling

Based on overall lymphocyte presence, and with special regard to CD8+ T-cell distribution (i.e., the assessment was performed on hematoxylin slides stained for CD8), the lesions were classified according to immune phenotypes defined as "inflamed" (lymphocytes infiltrating cancer cell areas), "excluded" (lymphocytes in the surrounding stroma with no, or exceedingly few, infiltrating lymphocytes within cancer cell areas), and "deserted" (no or exceedingly few lymphocytes both in cancer cell areas and the surrounding stroma). In cases of highly heterogeneous lesions, the dominating pattern was chosen for the classification. This assessment was performed temporarily blinded to clinical data (J.S.N.) and verified totally blinded by a second observer (S.S.); EBER slides were not used as support in the analysis.

Quantification of CD8 and CD207 Immunoreactivity

Slides stained for CD8 and CD207 were reviewed in a standardized manner utilizing cytokeratin (all slides were stained for cytokeratin), with EBER slides as support. Through digital image dissection, nonrelevant regions were excluded. These comprised physical artefacts (e.g., occasionally folded sections), normal epithelium, germinal centers, and gland structures. The dissections were performed on consecutive slides for CD8 and CD207, enabling a consistent and representative exclusion strategy. The Positive Pixel algorithm (version 9, Aperio Technologies) was used to automatically segment out the tissue background area as well as positive staining (i.e., brown DAB chromogen) by color threshold values, and the ratio of the total analyzed tissue area with staining-positive pixels was then calculated. Accordingly, this ratio represented the quantities of CD8 and CD207 in the target area. Thresholds to define staining (chromogen) positivity were the same for CD8 and CD207. In addition, areas (minimum 0.1 mm² after exclusion of nonrelevant regions as defined above) were selected to represent areas of cancer cells and areas of surrounding stroma (in direct proximity to cancer cells). CD207 positivity was quantitated for both locations. However, CD8 positivity was quantitated for the surrounding stroma only. The latter was due to a very high lymphocyte presence, which, in combination with a variable degree of lymphocyte infiltration, rendered it impossible to quantitate CD8 within cancer cell areas in an objective and robust manner. When the presence of the marker (CD8 and CD207) differed between various areas of the tumor, the area with the more pronounced presence was chosen. The selected areas were assessed for each marker following the same digital image quantification process as described above. All quantification procedures were performed blinded to the clinical data.

Gene Expression Data Analysis

Normalized mRNA data, generated with the Affymetrix Human Genome U133 Plus 2.0 Array, were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/geo/; accession number GSE12452) [15,16]. The analyses were performed using Qlucore Omics Explorer 3.6 (Qlucore, Lund, Sweden). Data were divided into two subsets: control tissue (n = 10), comprising 4 cases of nasopharyngeal tissue adjacent to but separated from NPC lesions and 6 cases of nasopharyngeal tissue from non-NPC subjects, and NPC tissue (n = 31, all EBER-positive).
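As an aside, a public data set of this kind can be retrieved and split programmatically. The following is a minimal sketch assuming the GEOparse Python package is installed; the "normal"-in-title heuristic used to separate control from NPC samples is an assumption for illustration and must be checked against the actual GSE12452 sample annotation.

import GEOparse

gse = GEOparse.get_GEO(geo="GSE12452", destdir="./geo_cache")

npc, controls = [], []
for name, gsm in gse.gsms.items():
    title = gsm.metadata.get("title", [""])[0].lower()
    # Hypothetical heuristic: control samples carry "normal" in their title.
    (controls if "normal" in title else npc).append(name)

print(f"NPC samples: {len(npc)}, control samples: {len(controls)}")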
Utilizing CIBERSORTX [30], the gene expression data were immunoprofiled using signatures identified in two independent studies: one including the overall distribution of 10 cell populations using data from single-cell analysis of head and neck cancers [18], and the other defining 22 general tumor-associated immune cell populations [17]. The scores from both studies for CD8+ T-cell signatures were used to divide the NPC samples into CD8+ high and low groups. Also, the material was analyzed for CD8A-correlated genes and for expression levels based on an interferon signature (39 genes). Heat map clustering was performed to understand the relationship between CD8A-correlated genes and individual cell populations [17,18].

Statistics

For the principal data set subjected to digital imaging, statistical analyses were performed using SPSS version 25 (IBM, Armonk, NY). Data were presented as medians with interquartile ranges (IQR). For comparisons of CD8 and CD207 ratios, respectively, between immune phenotypes of NPC and between tumor stages, an analysis of variance (the Kruskal-Wallis test) was followed by the Mann-Whitney U-test. For comparisons of EBV DNA load between immune phenotypes, the Kruskal-Wallis test was followed by the Mann-Whitney U-test. χ²-tests were performed to explore associations between CD8 and CD207 (grouped by median values) and EBER (present or not), EBV DNA (present or not), immune phenotypes, and stage. Correlations between CD8 and CD207 ratios, and between these ratios and EBV DNA, were explored using the Spearman test. Comparisons of CD207 ratios between areas of cancer cells and surrounding stroma were explored using the Wilcoxon signed-rank test. Survival, i.e., DSS and DFS in relation to CD8 and CD207, respectively, as well as to immune phenotypes, was described using Kaplan-Meier curves, and significance levels were determined by the log-rank test. A Cox regression analysis was not performed due to the restricted sample size. The explicit p-value was provided when the p-value was <0.05 but >0.0001, whereas lower values were reported as <0.0001. CD8A-correlated gene markers and the interferon signatures were examined using the limma multigroup analysis (edgeR) [31]. A Pearson correlation of r > 0.7 and adjusted p-values less than 0.01 were considered statistically significant. For gene expression data analyzed in silico, statistical analyses were performed using GraphPad version 8.4.3 (GraphPad Software, La Jolla, CA, USA). The intensity fractions obtained were analyzed for differences in the immune cell populations using two-way ANOVA with the Tukey method.

Conclusions

In conclusion, NPC lesions are heterogeneous with regard to the presence and distribution of cancer cells as well as immune cells (CD8+ T-cells and CD207+ DCs). In addition, there are immune-related differences between subgroups of NPC based on CD8 expression (high vs. low). Arguably, this nature, in addition to a variable presence of EBV-associated antigen, must be taken into account when prognostic information is examined and candidate treatment targets are explored. With regard to the distribution of lymphocytes (notably CD8+ T-cells), NPC may be classified into "inflamed", "excluded", and "deserted" immune phenotypes, and these subsets carry prognostic information that is easily accessible and that can be linked to EBV status. CD207+ DCs, likely representing the myeloid CD1c+ subtype, are present in NPC lesions.
Intralesional DCs may represent a possibility for immunotherapy, and CD207 may be a specific target based on its ability to facilitate a cross-presentation of antigens necessary to produce antigen-specific cytotoxic T-cell responses.
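As a supplementary illustration of the statistical pipeline described in the Statistics section (Kruskal-Wallis followed by pairwise Mann-Whitney U tests, a chi-square test of association, and a log-rank comparison of survival), the sketch below runs the same sequence of tests on invented toy data using SciPy and lifelines. The study itself used SPSS, and none of the numbers below come from the material.

import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Invented CD8 ratios for the three immune phenotypes.
inflamed = rng.gamma(4.0, 2.0, 29)
excluded = rng.gamma(2.0, 2.0, 14)
deserted = rng.gamma(0.5, 2.0, 4)

# Kruskal-Wallis across phenotypes, then a pairwise Mann-Whitney U test.
h_stat, p_kw = stats.kruskal(inflamed, excluded, deserted)
u_stat, p_mw = stats.mannwhitneyu(inflamed, excluded, alternative="two-sided")

# Chi-square association between dichotomized CD8 and EBER status (toy 2x2 counts).
table = np.array([[18, 6], [5, 18]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Log-rank test for DFS between "inflamed" and "excluded" (toy survival data).
t_inf, e_inf = rng.exponential(8.0, 29), rng.integers(0, 2, 29)
t_exc, e_exc = rng.exponential(4.0, 14), rng.integers(0, 2, 14)
lr = logrank_test(t_inf, t_exc, event_observed_A=e_inf, event_observed_B=e_exc)

print(p_kw, p_mw, p_chi, lr.p_value)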
2020-11-22T14:09:29.533Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "2772a22a7e28ddba6bc80559920b99b3397d4a3f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/12/11/3428/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "28ad2530239caa4cecca46eaaf318de52911f43a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
211407392
pes2o/s2orc
v3-fos-license
Spatial Variation in Households' Defensible Response in Oke-Ogun Area of Oyo State, Nigeria The paper assessed households' use and location of security barriers and surveillance structures in Oke-Ogun area of Oyo State, with a view to examining spatial variation in households' response to crime in the area. Both primary and secondary data were employed. The total households of urban and rural settlements as identified in Oke-Ogun area of Oyo State were 44,421 and 175,568 respectively. The sample frame constituted all households in the study area, and five out of every 1000 of all the 219,989 total households, which amounts to approximately 1100 households, constituted the sample size. Random sampling was employed in the hierarchical selection of 5 per 1000 of the total households in each settlement type, and a total of 1100 copies of the questionnaire were randomly administered. Descriptive statistics such as percentages and the Chi-Square test were used to present the summary of findings. Findings revealed that fences, security gates/checkpoints and burglary guards were among the security barriers used. A higher order of security consciousness was displayed in the urban areas, with a substantial proportion of buildings surveyed (57.8%) having burglar guards on windows, compared with 54.5% in the rural settlements. Observation of actual conditions revealed that the quality of burglar guards in terms of strength and durability conformed to a regular spatial pattern, and that quality improved with increasing size (from rural to urban settlement) of settlement types. Also, surveillance structures in Oke-Ogun included the use of private security personnel, which was at its lowest intensity, with only 13.4% of all households surveyed signifying its usage. The vigilante group dominates every other security measure across the settlement types. The study concluded that response to crime in the area varied spatially.

Introduction

Crime is a global cankerworm and one of the most notable threats to rural liveability and urban development. In Nigeria, crime has long become a hydra-headed social monster pervading every dimension of human survival and stable lifestyle. One is reminded of insecurity everywhere one goes in Nigeria. It is not just the window guards, burglar-proofs, fortified gates, security ramparts, sky-rocketing defensive walls and the day-by-day news of bolder and more sophisticated crimes that indicate the rampant menace, but also the increased troop movements in the neighbourhoods and the presence, day and night, of law enforcement agents and other armed guards patrolling the streets, highways and borders. Crime, as Omisakin (1998) expressed, "is a social menace, an undeniable stigma to national image and a significant source of threat to people's safety and wellbeing". Increasingly, people depend daily on the protection of vigilante groups and engage in fencing their home, learning and business environments, and in erecting neighbourhood gates and checkpoints. Private security agents are intensively hired by organisations and individuals who can afford to pay, for a multitude of motives. These motives include deep-seated fears that crime will infringe on safe neighbourhoods, which are largely due to the perceived increasing intricacy of the society, and the reality of the finiteness and limitations of government resources that could be put at the disposal of effective neighbourhood policing. Day in, day out, safe and peaceful existence is worrisomely becoming a fiction of the past in many Nigerian neighbourhoods.
Lives are constantly being threatened at residences, on the streets, and even at places of worship (Adigun, 2012). At this juncture, there must be reconciliation with the fact that crime does not occur in a vacuum, and all its concomitant adhesions with society, likened to the two inseparable sides of a coin, are a problem worth studying. A "crime", as Peet (1975) rightly observes, "is a surface expression of discontents which lie deeply embedded in social system". Planning and development take place in geographic space. Routine Activity Theory and the Defensible Space Concept predict that the circumstances that foster or deter crime are functions of social and structural factors that enable people to translate their criminal motivation into action (Felson, 1998). Environmental design, therefore, impacts on social problems, but it is difficult to generalize a spatial dichotomy in crime occurrence and prevention between urban and rural environments. The degree of responsiveness of planning programmes often depends on how adequately or otherwise the relevant problem area has been identified. Friedmann (1960) has argued that nothing of substance can be achieved by the planner until the right set of problem regions is identified and the right set of studies is carried out. Ige et al. (2010) have also asserted that any meaningful and sustainable policies and programmes targeted at curtailing criminal activities and improving the quality of human life, while living within the carrying capacity of the supporting ecosystem in the communities, must take into consideration the physical environment within which crime occurs in various locations. Responses to crime have therefore not been comprehensively studied in Nigeria, because urban crime patterns pervade her limited crime research efforts. There is enough information to conclude that the magnitude of the problem is quite serious and that Nigeria's crime problem extends well beyond the urban environment. The severity of crime in Nigeria is evident in the daily news of bolder and more sophisticated crimes in both rural and urban places; despite age-long crime research and the long history of present crime control measures, urban environments, with an overall conception of and policy on crime and its control, have no less serious crime effects than rural environments. In the light of the foregoing, it is imperative that we understand how crime incidence is controlled, with a view to providing better information that will facilitate effective policy responses for ameliorating crime occurrence and its associated effects. The study therefore aims at analysing households' responses to crime in Oke-Ogun area of Oyo State. The objectives of the study are to examine households' use and location of security barriers, and to assess surveillance structures and households' preference for keeping the neighbourhood safe in Oke-Ogun.

Materials and Method

The Study Area

The study area is Oke-Ogun area in Oyo State. Oke-Ogun area is the north-western region of Oyo State, and is made up of rural communities and large rural centres (small towns) located in the northern and north-western parts of Oyo State, Nigeria. Geographically, Oke-Ogun approximately stretches between latitudes 07°28' and 08°38' North and longitudes 03°02' and 04°44' East (Figure 1).
Oke-Ogun shares boundaries with Kwara State in the North; with Ogun State and Ibarapa North and Ibarapa East local government areas in the South; with Atiba and Oyo West local government areas in the East; and with the Republic of Benin in the West. Oke-Ogun area is a borderland consisting of ten local government areas. The ten LGAs are districted into three zones, namely: (i) Border zone: this comprises Saki West, Atisbo and Iwajowa local government areas. The local government areas within this zone share a boundary with the Republic of Benin. (ii) Near-border zone: this comprises Saki East, Itesiwaju, Kajola and Iseyin local government areas. These local government areas share boundaries with the local government areas bordering the Republic of Benin, implying that they are indirectly connected to the border. (iii) Far-border zone: this comprises Orelope, Olorunsogo and Irepo local government areas. These local government areas share a boundary with Kwara State, which shares a boundary with the Republic of Benin; they too are indirectly connected to the border. The people of Oke-Ogun are mostly Yoruba. The regional accent of Oke-Ogun is called "Onko". Some ethnic groups like the Ibaruba, Filani and Aketepe, as well as foreigners from the Republic of Benin and Togo, are found practising agriculture in the area.

Results and Discussion

The protective measures are explained within the Defensible Space Concept and the application of the elements of Crime Prevention Through Environmental Design (CPTED). The elements considered by residents to create a secure environment include security barriers, territoriality, surveillance, lighting and landscaping.

(i) Use and Location of Security Barriers in Oke-Ogun

Fences, fencing materials, height of the fence, material on the top of the fence, burglar guards, burglar guard materials, location of burglar guards on structures, and materials used for windows and doors were among the security barriers studied.

Burglar Proof

The analysis of data collected on the use of burglar guards as a security barrier across settlement types in Oke-Ogun revealed that 13.9% of all the buildings sampled had no burglar guard, while a good proportion (86.1%) had it to various degrees (Table 1). Of the aforementioned 86.1%, 55.2% had it installed on windows only, 19.4% on windows and doors, 4.2% on doors only, and 7.4% on all openings into the buildings. The higher proportion of buildings without burglar proofing (14.5%) was found in the rural settlements, while 11.5% of buildings without burglar proofing were found in urban settlements. A higher order of security consciousness was displayed in urban areas, in line with expectation, with a substantial proportion of buildings (57.8%) having burglar guards on windows compared with 54.5% in the rural settlements (Table 1). The chi-square test performed across settlement types (p < 0.05) indicates that the difference in the location of installed burglar guards was significant. The implication of this is that there was a significant difference between the location of installed burglar guards by households and the settlement types in Oke-Ogun. The installation of burglar guards in the urban settlements was not only high, showcasing a high level of security consciousness, but also of high quality in the materials used. In line with expectation, the greatest proportion of buildings with iron/steel works (85.3%) was found in urban settlements, against 70.1% in rural settlements (Table 2).
Rural settlements took the lead, followed by urban settlements, in the use of planks and other materials (such as woven grass) for burglar guards. Also, the use of concrete mullions dominated in the rural settlements at 5.7%, compared with 1.4% in urban settlements. Wood/plank, woven grass, wire mesh and concrete mullions are considered weaker burglar guards than iron/steel. The strength of the window and door guards used by households might partly depend on the degree of security consciousness of the households. This consciousness might be based mainly on actual experience of crime or on the fear of crime. The Pearson chi-square test performed at the 0.05 level of significance indicated that there was a significant difference in the use of burglar guards among settlement types (Table 2). As observed from actual conditions, the quality of burglar guards in terms of strength and durability conformed to a regular spatial pattern, and quality improved with increasing size (from rural to urban settlement) of settlement types.

Door and Window Materials

The most commonly used material for doors was iron/steel (44.6%), followed by wood/plank (43.0%), flush doors (10.6%) and glass panes (1.8%). A very good proportion (54.3%) of iron/steel doors was found in urban settlements, compared with a below-average proportion of 44.6% in the rural settlements. The use of wood/plank dominated the door materials in rural areas, while the use of iron/steel dominated in urban settlements (Table 3). The analysis of data collected on materials used for windows in the Oke-Ogun region revealed that glass panes (42.7%) were the most commonly used material, followed by wood/plank (34.0%), iron/steel (21.2%) and louver blades (2.0%). Urban settlements do not conform to the distribution pattern of window materials obtained at the spatially aggregated level: the use of iron/steel ranked second instead of wood/plank (Table 4). From direct observation and in-depth interviews, the quality of materials used for windows tends to be a function of affluence. Also, the chi-square test performed across settlement types (p < 0.05) indicates that the difference in materials used for windows is significant. The implication of this is that there is a significant difference between the materials used for windows by households and the settlement types in Oke-Ogun. The materials used for door and window guards more often than not might not only depict affluence but also indicate the level of security consciousness of the inhabitants. Wood plank is considered to be of lower quality and a less effective security barrier when compared with iron/steel works; iron/steel is considered the stronger material for windows and doors. The protective value of windows and doors made entirely of wood plank or iron/steel is partly defeated by their opacity, which prevents occupants from seeing through them, except in the few cases where glass panes are inserted into a leaf of a wooden or steel door, or into the fanlight of wooden windows, to provide the transparency needed. However, flush doors, glass panes and louver blades are fashionable, higher-level security barriers; though not as strong as wooden and iron/steel materials, they are transparent enough to allow the detection of intrusive interference from inside the buildings without the intruders' knowledge. Every coin has two sides: the use of iron/steel works as a burglar guard material prevents easy entry into property by increasing the time of criminal operation; this raises the risk of the criminal being caught and therefore acts as a more effective security barrier.
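For readers who wish to reproduce the kind of test reported throughout this section, the following is a minimal sketch of a Pearson chi-square test on a settlement-by-material contingency table; the counts are invented for illustration and are not the survey data.

from scipy.stats import chi2_contingency

# Rows: urban, rural; columns: iron/steel, wood/plank, concrete mullion (invented counts).
counts = [[290, 45, 5],
          [540, 185, 44]]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# p < 0.05 would indicate that material use differs between settlement types.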
Fencing and Material Used

More than one-third (37.1%) of the buildings sampled in Oke-Ogun had no fence. In line with expectation, 38.8% of the buildings sampled in the rural settlements had no fence, while 30.6% of the buildings sampled in the urban settlements had no fence (Table 5; χ² = 27.32, df = 4, p < 0.05; table value = 7.78). The most commonly employed fencing material in Oke-Ogun was the concrete wall (33.4%), followed by hedges (12.8%), bamboo/wood (9.5%) and barbed wire (7.1%). In each of the settlement types, concrete walls dominated among the sampled buildings that had fences. Rural areas had the greatest proportion of hedges (20.1%), while urban areas had more barbed wire (11.7%) than rural settlements. The chi-square test performed across settlement types (p < 0.05) indicates that the difference in materials used for fences is significant. The implication of this is that there is a significant difference between the materials used for fences by households and the settlement types in Oke-Ogun. As expected, the proportion of bamboo/wood among fenced buildings was greater in rural than in urban areas (see a typical rural fence in Plate 1). Further analysis revealed that most of the buildings with bamboo/wood fences were used for residential/agricultural purposes. Further investigation also revealed that most of the buildings that used hedges as fencing material were of older types. The use of hedges as a security barrier is not adequate in this modern age, when criminals are more daring and sophisticated in their nefarious activities.

Plate 1: Fence materials in the rural areas

Height and Material on Fence

The level of security consciousness of households was revealed not only by the material used for fencing but also by the height of the fence and the materials on top of the fence. In city planning and the building industry, the commonly recommended height of an ideal concrete fence for a residential building is between 1.00 and 3.00 metres (see Plate 2). The concrete, if imperforated, should not be more than 1.00 metre. Therefore, an ideal building fence should be made up of perforated blocks or iron/steel works with inserted spaces, placed on top of at most 1.00 metre of imperforated fencing material.

Plate 2: An ideal fence

This specification is fully adopted in Abuja city planning and is strictly enforced on developers. The specification, while supporting maximum security, does not undermine healthy living or the aesthetic value of buildings. Fences that are imperforated tend to undermine ventilation and sink the visual beauty of buildings into oblivion. They also often send a signal that there are likely valuable materials needing protection in such buildings. Among the sampled buildings that had fences, 27.8% had an estimated height ranging between 2.0 and 2.5 metres, and 11.6% had heights above 3.0 metres (Table 6). The highest concentration of sky-rocketing fences in the urban settlements might be due to the fact that multi-storey buildings were more concentrated in these settlements. The proportion of buildings with no material on top of the fence was substantial in the rural settlements at 29.6%, compared with 23.0% in the urban settlements (Table 7). In line with expectation, 15.7% of buildings with materials on the fence had broken bottles on top in the urban settlements, against 12.8% in the rural settlements. From every indication, it is crystal clear that urban settlements were more security conscious than rural settlements, because they are made up of heterogeneous people with diverse behaviours and motives.
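To make the planning specification quoted above concrete, the following is a hypothetical helper that checks a surveyed fence against it (total height between 1.00 and 3.00 metres, with the imperforated portion not exceeding 1.00 metre); the function and its inputs are invented for illustration.

def fence_complies(total_height_m, imperforated_m):
    # True if a residential fence meets the recommended specification.
    return 1.00 <= total_height_m <= 3.00 and imperforated_m <= 1.00

print(fence_complies(2.4, 1.0))   # True: perforated top above a 1.0 m base
print(fence_complies(3.5, 3.5))   # False: too high and fully imperforated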
The use of burglar guards, fences and materials on fences, and transparent doors and windows as physical barriers helps in monitoring and/or restricting undue physical access. They are also efficient elements for demarcating various zones of defence: entrance into certain parts of buildings is restricted by the effective use of the aforementioned materials and the like. In Oke-Ogun, recently constructed buildings had high fences that dwarfed the bungalows, and on these fences were mounted materials to serve as a deterrent to undesirable elements that might want to scale the walls (see Plates 3 and 4). Materials on fence bricks varied from broken bottles (13.5%) to spiral barbed wire (12.4%), net barbed wire (37.0%), with the barbed wire sometimes electrified, and iron nails (9.0%).

Plate 4: A typical fence that dwarfs a building in urban areas

Street Features Present within Settlement Types

In the study area, the percentage share of each feature present was as follows: bumps (13.9%), street gates (52.5%), security checkpoints (8.8%), and warning signs or restrictions (12.2%). A substantial proportion of buildings with street gates (79.9%) was found in the urban settlements (Table 8). Warning signs or restrictions observed on the streets include "Nobody is allowed to pass through this area between 11 o'clock p.m. and 5 o'clock a.m.", "Beware of dogs", and "You are under surveillance". Bumps made on the streets were not only a means of controlling reckless driving in the neighbourhood but also a way of trapping thieves on bikes. Bumps in some neighbourhoods, especially in the urban settlements, were used as checkpoints by security men to authenticate the rightful ownership of vehicles, because cyclists and drivers had to slow down to the barest minimum speed before passing over the bumps. The general thought that bumps retard the speed of escaping criminals was debunked by others, who held that bumps were favourite spots for vehicle snatchers at night to pounce on unsuspecting vehicle owners slowing down at bumps.

(ii) Surveillance Structure in Oke-Ogun

Use of Security Measures in the Study Area

The use of private security personnel was at its lowest ebb in Oke-Ogun area, with only 8.9% of households sampled signifying its usage. Of these, a small proportion of households, 9.4% in the urban and 8.8% in the rural settlements, had private security guards manning individual buildings in the study area (Table 9). The low usage of private security personnel/guards across the settlement types in Oke-Ogun might be due to the occupational status and the low income levels of the inhabitants. It was observed that the use of private security guards was an option available to a wealthy minority. The use of vigilante groups as a surveillance measure by households, whether undertaken by residents or by persons paid by the community, was widespread in Oke-Ogun in every settlement type, and it varied considerably across settlement types. At the aggregated level, 26.6% of all households sampled in rural and urban settlements made use of vigilante groups. Of the aggregate 24.4% of sampled households in Oke-Ogun that made use of security dogs, rural households (26.2%) dominated, followed by urban households (24.4%). Home security appliances were very rare in rural areas: while 17.6% of sampled urban households made use of appliances at home, only 0.3% of the sampled rural households did. The total aggregate percentage of all buildings sampled for the use of appliances at home was 4.3%.
The total proportion of respondents surveyed who declared the use of charms as a protective measure was 5.5%, and usage was in ascending order from rural to urban areas. Households in the urban areas were more security conscious than those in the rural settlements of Oke-Ogun. Affirmation of well-functioning street lighting was very low in the study area. Only 18.1% of all respondents in Oke-Ogun gave the opinion that the functionality of street lighting was good, while 25% of the respondents said that the functionality was fair. Of all respondents, 43.4% gave the opinion that the functionality of street lighting was bad, and 13.5% gave no opinion because there was no street lighting at their residences (Table 10; χ² = 86.47, df = 3, p < 0.05; table value = 6.25). Further analysis revealed that the significant proportion of respondents across settlement types who rated the functionality of street lighting as good or fair based their opinion mainly on newly erected solar lights in some wards of the study area. Moreover, the analysis revealed that not more than 5% of all streets in the towns of Oke-Ogun had street lighting. This implies that the greater portion, if not the whole, of a town would be in darkness at night, thus creating an avenue for the devilish works of night marauders. The situation might be worse in the rural settlements, and this might be the major reason why most houses in both rural and urban settlements made use of security dogs for giving alerts on street marauders in the night.

Policy/Planning Implications and Conclusion

The observed high fences that dwarf the buildings in the study area manifest an unhealthy consciousness of crime that might threaten the community's psyche with terror, as well as the failure of regulatory bodies. This violates the principle of the right height for the right construction for the right purpose, needed to achieve spatial functionality that does not undermine ventilation. As a planning- and policy-oriented remark, it is therefore suggested that more police officers should be recruited to monitor vulnerable spaces, to augment the widespread use of vigilante groups, and that officers be strengthened with sophisticated security gadgets. These, apparently, would help partly to reduce the volume of crime occurrences in general and partly to redistribute criminal activities spatially, the argument being that as long as no area has more than its share of criminality, all things being equal, no problem area would become incurable and pervasive in its effect if attacked at the present time. It is recommended that the height of an ideal concrete fence for a residential building should not be more than 1.5 metres. The concrete, if imperforated, should not be more than 0.9 metre. Therefore, an ideal building fence should be made up of perforated blocks or iron/steel works with inserted spaces, which should be on top of 0.6
2019-10-10T09:26:56.574Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "8195bd19dbf3b68321cf29545895e25152b249e4", "oa_license": "CCBY", "oa_url": "https://www.iiste.org/Journals/index.php/JLPG/article/download/49735/51813", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0a18f02ecc8c0fefc09301ea74d983100eb5cb02", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography" ] }
247791590
pes2o/s2orc
v3-fos-license
Circulating microRNAs as predictive biomarkers of coronary artery diseases in type 2 diabetes patients Abstract Background Type 2 diabetes mellitus (T2DM) is an increasingly common metabolic disorder mostly resulting from unhealthy lifestyles. T2DM patients are prone to developing heart conditions such as coronary artery disease (CAD), which is a major cause of death in the world. Most clinical symptoms emerge at the advanced stages of CAD; therefore, establishing new biomarkers detectable in the early stages of the disease is crucial to enhance the efficiency of treatment. Recently, a significant body of evidence has shown that alterations in miRNA levels associate with the dysregulated gene expression occurring in T2DM and CAD, highlighting the significance of circulating miRNAs in the early detection of CAD arising from T2DM. Therefore, it seems crucial to establish a link between the prognostic value of miRNAs and the development of CAD in T2DM. Aim This study provides an overview of the alterations of circulatory miRNAs in T2DM and various CADs and considers the potential of miRNAs as biomarkers prognosing CADs in T2DM patients. Materials and Methods A literature search was conducted for miRNAs involved in the development of T2DM and CAD using the following key words: "miRNAs", "Biomarker", "Diabetes Mellitus Type 2 (T2DM)", "coronary artery diseases (CAD)". Only articles written in the English language were considered. Results A rise has been shown in miR-375, miR-9, miR-30a-5p, miR-150, miR-29a, miR-30d, miR-34a, miR-124a, miR-146a, miR-27a, and miR-320a in T2DM, whereas miR-126, miR-21, miR-103, miR-28-3p, miR-15a, miR-145, miR-375, and miR-223 have been shown to decrease. In addition to T2DM, some miRNAs such as miR-1, miR-122, miR-132, and miR-133 play a part in the development of subclinical aortic atherosclerosis associated with metabolic syndrome. Some miRNAs increase in both T2DM and CAD, such as miR-1, miR-132, miR-133, and miR-373-3p. More interestingly, some of these miRNAs, such as miR-92a, become elevated years before CAD emerges in T2DM. Conclusion Dysregulation of miRNAs plays an outstanding role in the development of T2DM and CAD. Also, elevation of some miRNAs, such as miR-92a, in T2DM patients can efficiently prognose the development of CAD in these patients, so these miRNAs can be used as biomarkers in this regard.

| INTRODUCTION

About half a billion people across the globe are affected by diabetes. This disease predisposes a significant number of people to life-threatening complications such as heart conditions and other diabetes-related morbidities. 1 Sedentary lifestyles, obesity, aging, and urbanization are the contributing factors for type 2 diabetes mellitus (T2DM). Additionally, hereditary backgrounds are involved in the etiology of the disease. 2 T2DM is characterized by chronic hyperglycemia and faulty metabolism of carbohydrates, lipids, and proteins, resulting from insulin resistance and insufficient insulin secretion. 3 Given the dysregulated metabolism of lipids, patients with T2DM are highly susceptible to cardiovascular diseases, particularly coronary syndromes. 4 Acute coronary syndrome (ACS) is an expression used to describe conditions caused by a sudden reduction of blood flow to the heart; in some of these conditions, such as non-ST-segment elevation myocardial infarction (NSTEMI), the event may not cause changes on an electrocardiogram (ECG). In addition, the blockage may be partial or temporary, and so the extent of the damage is relatively small. However, ST-segment elevation myocardial infarction (STEMI) arises from a prolonged blockage of the blood supply. 5
Establishing simple and valid clinical biomarkers is crucial in the detection and treatment of different coronary artery diseases (CAD) and of T2DM. microRNAs (miRNAs) can be used as such biomarkers: they are highly stable in circulation and specific to disease and organ; thus, they have the potential to be applied as reliable diagnostic biomarkers. 6 miRNAs are stable single-stranded RNAs modulating various biological processes by regulating gene expression at the post-transcriptional level via binding to the 3'-untranslated regions of target mRNAs. 7 They are primarily located within the introns of host genes and are transcribed by RNA polymerase II. 8,9 The human genome encodes roughly 1000 miRNAs, more than 100 of which have been identified in the serum of healthy subjects. Unlike intracellular mRNAs, circulating miRNAs are significantly resistant to degradation by RNases. These miRNAs may also be produced by blood cells or tissues like the heart, lung, liver, and kidney. 6 They are mostly preserved in macrovesicles such as exosomes, microparticles, and apoptotic bodies, which possibly protect them against degradation by RNases. Their stability and source specificity make miRNAs reliable candidates for detecting diseases like CAD and T2DM. 6,10

| ROLES OF miRNAs IN T2DM DEVELOPMENT

Dysregulation of miRNAs has been demonstrated widely in T2DM (see Table 1). There is a rise in miR-9, miR-30a-5p, miR-150, miR-29a, miR-30d, miR-34a, miR-124a, miR-146a, miR-27a, and miR-320a in T2DM. On the contrary, other miRNAs such as miR-126, miR-21, miR-103, miR-28-3p, miR-15a, miR-145, miR-375, and miR-223 were shown to be decreased. 11,12 Kong et al. studied the expression pattern of several miRNAs associated with the pathogenesis of T2DM and compared the results among recently diagnosed cases of T2DM, pre-T2DM subjects, and T2DM-susceptible subjects with normal glucose tolerance. 13 They showed that the levels of these miRNAs were higher in T2DM patients compared to T2DM-susceptible subjects. Additionally, the expression of some miRNAs was significantly lower in the pre-T2DM individuals in comparison with T2DM patients. This study also demonstrated the role of miR-29a, miR-9, miR-30d, miR-124a, miR-146a, miR-34a, and miR-375 in fine-tuning insulin secretion and in the pathogenesis of T2DM, while their expression levels remained steady in the pre-T2DM stage, arguing against their potential relevance as disease-specific markers. 14 The role of miR-146a in inflammation and insulin resistance was shown to be more significant in patients with T2DM than in those with normal glucose tolerance (NGT). There was a significantly lower expression level of miR-146a in T2DM subjects. 15 A further association between miRNAs and metabolic syndrome was demonstrated: miR-150, miR-192, miR-27a, miR-320a, and miR-375 were upregulated in T2DM, highlighting their part in the regulation of hyperglycemia. This finding made great strides toward introducing the clinical application of these miRNAs in the risk assessment of T2DM and metabolic syndrome. Stepien et al. analyzed the angiogenic capacity of ectosome-derived miRNAs among T2DM patients. They reported that miR-29a-5p, miR-374a-5p, miR-30c-5p, and miR-199a-3p were significantly elevated in ectosomes obtained from T2DM patients [57]. Also, they showed that the expression levels of miR-193b-3p and miR-95-3p in ectosome-enriched plasma were notably higher in T2DM, while the expression of miR-409-3p was lower in T2DM.
These findings may highlight the role of miRNAs in dysregulated angiogenesis and the development of vascular complications in patients with T2DM. 18 Table 1 shows the status of different miRNAs in T2DM.

| ROLES OF miRNAs IN CAD

MicroRNAs play a critical role in cardiac development and in pathological processes such as AMI (including NSTEMI and STEMI) and other cardiovascular diseases such as arrhythmias, hypertrophy, heart failure, and atherosclerosis. 31 More than 200 miRNAs have been identified in heart tissues. miRNAs such as miR-1, let-7, miR-133, miR-126-3p, miR-30c, and miR-26a were found to be prevalent in the cardiac muscles, and miR-145, let-7, miR-125b, miR-125a, miR-23, and miR-143 in the arterial smooth muscles. 32,33 Additionally, miR-122 and miR-1 have been introduced as myocardium-specific markers. 34 Alterations in some miRNAs can be used as diagnostic biomarkers for coronary artery diseases, such as the downregulation of miR-378, miR-196-5p, and miR-3163-3p, which can assist in distinguishing CAD patients from normal subjects. 35 The analysis of patients with both stable CAD and acute coronary syndrome showed upregulation of miR-92a-3p and miR-206, 44,45 while reduced levels were observed for miR-939, miR-181a-3p, and miR-181a-5p. 13 The prognostic and diagnostic value of miRNAs is shown in Table 2. In one follow-up study, 19 percent of the patients experienced a cardiac event after 12 months; thereby, the miR-146a level turned out to be a useful independent prognostic marker of these events. Furthermore, in circulating endothelial progenitor cells, miR-221 and miR-222 were found to be elevated in patients with CAD compared to those with no indication of CAD. 69

| microRNAs IN T2DM PREDICTING CAD

Diabetic heart disease (DHD) is defined as heart disease in diabetic individuals, including CAD, heart failure, and/or cardiomyopathy. 70 It mostly arises from obesity, physical inactivity, advanced age, and metabolic syndrome. [71][72][73][74][75] In a study performed by Ramzan et al., miR-15a-5p, miR-17-5p, miR-370-3p, and miR-375 significantly predicted metabolic syndrome. Analysis of the predictive miRNAs showed that miR-15a-5p and miR-17-5p were involved in the regulation of metabolic pathways, including insulin, Wnt, fatty acid metabolism, and AMPK signaling. 76 A study of the expression of miR-126 and miR-26a showed significant differences between patients with and without T2DM. Both miRNAs were downregulated in T2DM patients. Interestingly, patients with less miR-26a and miR-126 were more prone to develop subsequent CAD. Also, miR-24 was shown to be reduced in T2DM-CAD patients, while the mRNA of YKL-40, an inflammatory mediator involved in endothelial dysfunction, was elevated in both T2DM-CAD and CAD patients. 77 Furthermore, it was reported that reduced levels of miR-145, miR-9, miR-15a, miR-103, miR-28-3p, miR-29a, miR-223, miR-126, and miR-375 are reliable predictors of CAD in type 2 diabetes patients. miR-24 and its target chitinase 3-like 1 (Chi3l1/YKL-40) were also reduced in type 2 diabetes patients with CAD. 77,78 Additionally, hyperglycemia-induced alterations in miRNAs in patients with diabetic heart issues are likely to be irreversible even after glycemic control. 79 The levels of miR-92a in diabetes patients were shown to rise at least 2 years before the emergence of acute coronary syndrome. This study showed that these T2DM patients had a significant increase in the levels of miR-92a compared to coronary heart disease patients.
Also, this study indicated that miR-92a is associated with the coincidence of diabetes and heart disease, as well as with high blood pressure and HbA1c. 80 It is also implicated that miR-133a and miR-373 mediate the signaling of myocyte enhancer factor 2C (MEF2C) in diabetic cardiomyopathy, an essential transcription factor underlying myocardial hypertrophy and cardiac fibrosis. 81,82 Table 3 shows miRNAs involved in metabolic syndrome with similar roles in T2DM and CAD.

| CLINICAL PERSPECTIVE AND CONCLUSION

Since their identification, cardiovascular miRNAs have always been regarded as key players in regulating cardiac gene expression under normal and pathological conditions. 97 Since T2DM patients are highly susceptible to developing CAD, establishing reliable biomarkers to prognose CAD in these patients can play a significant part in mitigating the T2DM burden. Dysregulation of miRNAs occurs in T2DM and CAD and underlies pathological events in both situations (see Tables 1, 2, and 3). A body of studies has shown that miR-1, miR-132, miR-133, and miR-373-3p increase in both T2DM and CAD (see Table 3). Some of these miRNAs change long before cardiac events. For instance, elevated levels of miR-92a in diabetes patients were shown to precede acute coronary events by at least two years. 80 Accordingly, these miRNAs may assist in following up T2DM patients who are prone to develop CAD. Regarding the role of miRNAs in atherosclerosis, which is the primary cause of CAD, it is reported that microRNAs participate, either beneficially or harmfully, in almost all molecular pathways of atherosclerosis and arterial remodeling, including endothelial dysfunction, monocyte activation, arterial wall invasion, and platelet and vascular smooth muscle cell activation. 98 Atherosclerosis has been regarded as a disease of chronic inflammation. 99,100 Therefore, miRNAs may play their part in heart-related diseases through the induction of inflammation. miR-92a has been shown to contribute to cardiovascular disease development in diabetes mellitus through NF-κB and downstream inflammatory pathways. 101 Also, miR-132 was shown to inhibit the expression of SIRT1 and induce the pro-inflammatory processes of vascular endothelial inflammation through blocking the SREBP-1c metabolic pathway. 102 miR-1, miR-122, miR-132, and miR-133 are related to subclinical aortic atherosclerosis associated with metabolic syndrome. 103 Therefore, these miRNAs may strongly prognose the development of CAD in T2DM patients. Along with miRNAs, inflammatory biomarkers can be used as predictors of the severity and prognosis of CAD in T2DM patients, to stratify the risk of these patients, to select the best therapeutic approach, and to predict the results after interventions. 99 Some limitations should be considered in future studies. Most studies have evaluated miRNAs in populations of fewer than 100 subjects, so bigger sample sizes seem crucial in future work. Also, the value of miRNAs in the prognosis of diseases, such as the risk of MI, should be assessed, and it should be clarified whether miRNA levels are practical tools to evaluate the response to therapy. 104 Furthermore, the low levels of total RNA in plasma or serum often make amplification necessary to measure circulating miRNAs. U6 RNA or other miRNAs have been used as internal controls. While these miRNAs may be steady in several cases, in other pathological conditions they may fluctuate; therefore, they are not reliable as an internal control. 10
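To make the internal-control issue concrete, the following is a minimal sketch of the standard 2^(-ΔΔCt) (Livak) relative-quantification calculation that underlies such measurements; the Ct values are invented, with U6 as the (potentially unstable) reference discussed above.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # Normalize the target miRNA to the reference in each sample,
    # then compare the patient sample against the control sample.
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical miR-92a Ct values in a T2DM patient versus a healthy control:
fold = relative_expression(ct_target=26.1, ct_ref=24.0,
                           ct_target_ctrl=28.3, ct_ref_ctrl=24.2)
print(f"fold change ~ {fold:.1f}")  # ~4-fold elevation in this toy example

If the reference itself shifts between groups, as cautioned above, the computed fold change absorbs that shift, which is precisely why an unstable internal control undermines the comparison.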
ACKNOWLEDGMENTS
We appreciate all who assisted us in preparing and publishing this review.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS
Conception and design of the research and obtaining financing: Abolhasani S; supervising the team: Golnoosh Mahjoob; preparing and writing of the manuscript: Yassin Ahmadi; critical revision: Huda Fatima Rajani; participated in preparation of the manuscript: Khanbabaei N.

DATA AVAILABILITY STATEMENT
The datasets collected and analyzed during the current study are available from the corresponding author on reasonable request. Furthermore, the names of the repositories and reference numbers can be found in the online repositories.
2022-03-31T06:22:55.098Z
2022-03-29T00:00:00.000
{ "year": 2022, "sha1": "0ad5432b6fd686ab03bff1f5209312423e69df57", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "695bde68acdf46ade37ae7a9ff374049ec5658e9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
245131612
pes2o/s2orc
v3-fos-license
Silicon and Strontium abundances of very metal-poor stars determined from near-infrared spectra Silicon and Strontium are key elements to explore the nucleosynthesis and chemical evolution of the Galaxy by measurements of very metal-poor stars. There are, however, only a few useful spectral lines of these elements in the optical range that are measurable for such low-metallicity stars. Here we report on abundances of these two elements determined from near-infrared high-resolution spectra obtained with the Subaru Telescope Infrared Doppler instrument (IRD). Si abundances are determined for as many as 26 Si lines for six very and extremely metal-poor stars (-4.0<[Fe/H]<-1.5), which significantly improves the reliability of the abundance measurements. All six stars, including three carbon-enhanced objects, show over-abundances of Si ([Si/Fe]~+0.5). Two stars with [Fe/H]~-1.5 have relatively small over-abundances. The [Mg/Si] ratios agree with the solar value, except for one metal-poor star with carbon excess. Strontium abundances are determined from the triplet lines for four stars, including two for the first time. The consistency of the Sr abundances determined from near-infrared and optical spectra require further examination from additional observations. Introduction Low-mass stars with very low metallicity found in the Milky Way are believed to have formed in the very early stage of chemical evolution, reflecting the products of the first and early generations of massive stars and supernova explosions (e.g., Nomoto et al. 2013). Observational studies of the elemental abundances for very metal-poor (VMP: [Fe/H]< −2) 1 stars play unique roles to constrain the nucleosynthesis processes and the characteristics of their progenitor stars in the early universe. The most useful abundance ratios are those between the α-elements and iron, which reflect the masses of the progenitors of core-collapse supernovae (e.g., Heger & Woosley 2010;Ishigaki et al. 2018) as well as the contributions from type-Ia supernovae: these provide useful constraints on chemical-evolution models and formation scenarios of the Milky Way halo, including the accretion of dwarf galaxies. Neutron-capture elements are also important as records of explosive events such as neutron star mergers and exotic supernovae (Kajino et al. 2019;Cowan et al. 2021). Among the α-elements, Si, as well as Mg, are the most abundant (log ǫ(Si)= 7.51 and log ǫ(Mg)= 7.60; Asplund et al. 2009) and are a key for studying early chemical enrichment. Silicon is also a major source of dust grains that play crucial roles in star formation and stellar mass loss. There are, however, only a few Si spectral lines in the optical range that are useful to determine Si abundances in VMP stars, whereas Mg abundances are studied based on several lines in the optical range with a variety of strengths. In particular, for extremely metal-poor (EMP: [Fe/H]< −3) stars, most of the Si abundance results reported to date (e.g., Cayrel et al. 2004;Yong et al. 2013) rely on only two lines in the blue range (390.5 and 410.3nm), which in low-mass metal-poor stars are usually weak, and where spectrometers are less efficient. As a result, Mg and Ca are more frequently used to represent the α-elements. However, Si should be investigated as a major product during both massive star evolution and supernovae explosions, whereas Mg and Ca mostly represent the products in massive star evolution and supernova explosion, respectively. 
Standard models of nucleosynthesis and chemical evolution do not predict large scatter in the abundance ratios of [Si/Fe], and special mechanisms would be required to explain outliers. Thus, more reliable Si abundances based on larger numbers of spectral lines are required to examine the abundance scatter and to identify the presence of outliers, if any. Strontium is also a key element to constrain neutron-capture processes in the early Galaxy. Many processes and sites are proposed for Sr production: the (main) r-process, the weak r-process or Lighter Elements Primary Process (LEPP) (Wanajo & Ishimaru 2006; Travaglio et al. 2004), the main s-process in the case of objects affected by mass transfer from Asymptotic Giant Branch (AGB) stars in binary systems (CEMP-s, or CH stars), and the weak s-process in massive stars (e.g., Käppeler et al. 2011). There are two resonance lines in the blue range, which are very useful to determine Sr abundances in EMP stars. The lines are, however, too strong to determine accurate abundances in stars with relatively high metallicity ([Fe/H] ≳ -2) or with excesses of Sr. Unfortunately, there are no other useful Sr lines with moderate strengths in the optical range. As a result, Sr abundances are less certain than abundances of another key neutron-capture element, Ba, which has weaker lines in the red spectral range. These two elements, Si and Sr, both have useful spectral lines in the near-infrared range. There are many Si lines with a variety of strengths that are detectable in red giants in the Y-, J- and H-bands, even for stars with low metallicity. Si abundances are studied based on H-band spectra by APOGEE (Jönsson et al. 2020). Most of the targets are disk stars, but some metal-poor stars with [Fe/H] ≲ -2 are also covered. This demonstrates that near-infrared spectra with higher resolution are useful to study Si abundances in VMP and EMP stars. There are triplet lines of Sr in the Y-band, which are detectable in EMP red giants, but they are not as strong as the resonance lines in the blue region. We here report on abundance analyses of these lines to obtain reliable Si and Sr abundances for six metal-poor stars with [Fe/H] from -4 to -1.5. Our near-infrared observations are reported in Section 2. Section 3 provides details of the abundance analyses and error estimates. The Si abundance results and detection limits for future studies are discussed in Section 4.

Observations

The near-infrared spectra were obtained with the Subaru Telescope InfraRed Doppler instrument (IRD; Tamura et al. 2012; Kotani et al. 2018) on July 25, 2020 (UT). The spectra cover the Y-, J- and H-bands with a spectral resolution of R ~ 70,000. One pixel corresponds to about 6 pm at around 1 µm, resulting in about 2.4-pixel sampling of the resolution element. The objects studied to determine Si and Sr abundances are listed in table 1. They are metal-poor stars that have been well studied by previous work to determine elemental abundances from optical spectra. HD 221170, HD 4306, and LAMOST J 2217+2104 are metal-poor red giants with a variety of [Fe/H] values from -3.9 to -2.2. LAMOST J 2217+2104 is a carbon-enhanced star with excesses of Mg and Si (Aoki et al. 2018). BD+44°493 is an extremely metal-poor ([Fe/H] = -3.8) subgiant star with carbon excess (Ito et al. 2009). HD 201626 is a very metal-poor CH star showing large excesses of carbon and heavy neutron-capture elements.
The variation of the radial velocities of this object (McClure & Woodsworth 1990), as well as the abundance pattern, indicates that this star was affected by mass accretion from the companion in a binary system when the companion was an AGB star (Van Eck et al. 2003; Placco et al. 2015). For this star, many weak Si lines in the optical range have been measured by Placco et al. (2015), thanks to the relatively high metallicity ([Fe/H] = -1.5) and low temperature of this object. HD 25329 is a cool main-sequence star with [Fe/H] = -1.6 (Luck 2017). The lines of main-sequence stars are in general weaker than those of giants because of the larger continuous opacity of H- in cool main-sequence stars. However, the Si and Sr lines in the near-infrared range are detectable in cool main-sequence stars at this metallicity. Data reduction of the IRD spectra was conducted using the pipeline based on PyRAF, which adopts the data processes reported in Kuzuhara et al. (2018) and Kuzuhara et al. (in preparation). The procedure includes bias correction, removal of correlated read-out noise, and extraction of the spectra for stellar and calibration data by tracing spectra on 2D images using flat-fielding images. The wavelength calibrations of the extracted stellar spectra are made by comparing the Th-Ar spectra obtained in our program with the reference Th-Ar spectra. The wavelengths of the reference spectra have been carefully calibrated by the IRD team based on the Th-Ar atlas of Kerber et al. (2008) and the spectra of a laser frequency comb (Hirano et al. 2020). Telluric absorption lines are identified by comparing the spectra of bright metal-poor stars in our sample. The lines that show no wavelength shift in any of the spectra, regardless of the radial velocities, are treated as telluric lines. Stellar spectral lines that are not affected by telluric lines are selected for the abundance analysis in the present work. This treatment does not significantly reduce the number of available lines for abundance analyses. The stellar parameters required for abundance analysis based on model atmospheres are taken from the literature and listed in table 1. In most of the studies, the effective temperatures and the surface gravities are determined from colors (e.g., V-K) and related assumptions. Examples of the spectra are shown in figure 1. The signal-to-noise (S/N) ratios of the spectra at 1050 and 1600 nm, which are estimated from photon counts, are given in table 2.

Equivalent width measurements

Spectral line data for Si and the Sr triplet are taken from VALD (Kupka et al. 1999) and Grevesse et al. (2015), respectively. The source of the Si line data in VALD is Kelleher & Podobedova (2008). According to their evaluation, the accuracy of the transition probabilities of the Si lines used in our analysis is B or C+, which corresponds to errors of 0.06 dex or better in log gf values. The adopted line data comprise wavelengths, lower excitation potentials, and transition probabilities. Errors of the equivalent widths (σ_W) are estimated at wavelengths representing the Y-, J- and H-bands by the formula of Norris et al. (2001), adopting R = 70,000, n_pix = 10, and the S/N ratios given in table 2. The σ_W values range over 0.2-0.9 pm, depending on the data quality. These values are used to estimate the abundance errors due to spectral quality (see below).
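As a rough cross-check of the quoted σ_W range, the photon-noise error of an equivalent width can be estimated from the sampling and the S/N ratio. The sketch below uses the standard counting-statistics estimate (each of the n_pix pixels across the line contributes a flux error of 1/(S/N) times the pixel width); the exact Norris et al. (2001) expression may differ in constant factors.

```python
import numpy as np

def sigma_w_pm(snr, wavelength_nm=1050.0, resolving_power=70_000,
               n_pix=10, pixels_per_resel=2.4):
    """Photon-noise equivalent-width error in picometres."""
    resel_pm = wavelength_nm / resolving_power * 1e3  # resolution element in pm
    pixel_pm = resel_pm / pixels_per_resel            # ~6 pm per pixel near 1 um
    return np.sqrt(n_pix) * pixel_pm / snr

print(f"{sigma_w_pm(snr=100):.2f} pm")  # ~0.20 pm, the lower end of the quoted range
print(f"{sigma_w_pm(snr=25):.2f} pm")   # ~0.79 pm, toward the upper end
```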
Si abundances

Abundances of Si and Sr are determined by the standard LTE analysis using model atmospheres from the ATLAS/NEWODF grid (Castelli & Kurucz 2003) with enhancement of the α elements. Abundance analyses are made employing a one-dimensional LTE spectral synthesis code that is based on the same assumptions as the model atmosphere program of Tsuji (1978). The line broadening from the approximation of Unsöld (1955), enhanced by a factor of 2.2, is adopted, as done by Aoki et al. (2005). We confirm that this treatment well reproduces the line profiles calculated with the broadening parameters of Barklem et al. (2000) for the lines for which the parameters are available. Our synthetic spectra well reproduce the Si lines in the solar spectrum for lines with equivalent widths smaller than 20 pm. For stronger lines, the line core profile is not well reproduced. This would be due to the non-LTE effect, as reported by Zhang et al. (2016), who studied the effect for Si lines in the H-band. The non-LTE effect is larger for stronger lines, and is not significant for the weak lines found in metal-poor stars. The recent study by Masseron et al. (2021) for H-band lines reports that the non-LTE effect is dependent on the spectral lines, but is smaller than 0.05 dex for metal-poor stars in globular clusters. We note that they also conducted a 3D-LTE analysis and report significantly lower Si abundances. They conclude, however, that more extended self-consistent 3D-NLTE computations are required. Another source of errors in abundance measurements is the uncertainty of the spectral line data. We calculate the differences of the abundances derived from individual lines from the mean abundance (bottom line of table 4) for each star (δ_i = log ǫ_i - ⟨log ǫ⟩ for line i). Then we obtain the average of the abundance differences for each line, which is given in table 4 as ⟨δ_i⟩ for lines that are measured in more than two stars. Excluding the two lines at 1088 nm and 1199 nm, the deviations are smaller than 0.1 dex. We might correct the abundances from individual lines using these results. However, since they are based on at most five objects and are still uncertain, we do not make corrections in the present work. Excluding the line at 1199 nm, which shows the largest deviation (0.177 dex), the standard deviation of the ⟨δ_i⟩ values is 0.06 dex. This value (σ_line) is comparable to the uncertainties of the transition probabilities of the Si lines (see § 3.1). The σ value given in table 5 is mostly explained by σ_logǫ and σ_line. The random error of the abundance measurement is given as σN^(-1/2). We estimate the errors due to uncertainties of the stellar parameters from the abundance changes for changes of the stellar parameters, ΔT_eff = 100 K, Δlog g = 0.3 dex, Δ[Fe/H] = 0.3 dex, and Δv_turb = 0.5 km s^-1, for HD 221170. The quadrature sum of the changes is 0.13 dex, which is dominated by the changes of the micro-turbulent velocity and the effective temperature. This value, σ_param, and the random error obtained above are added in quadrature to derive the total errors, which are shown in the top panel of figure 3. A notable case is LAMOST J 2217+2104 (Aoki et al. 2018). This object is a CEMP-no star with a large excess of Mg. Si is also over-abundant, but not as much as Mg. An interpretation of these peculiar abundance ratios is larger-scale mixing and fallback, resulting in a smaller amount of ejecta from the Si layer and further inside (Ishigaki et al. 2018). Excluding this object, the scatter of [Mg/Si] is very small. The apparently quite low scatter requires confirmation from measurements for a larger sample of metal-poor stars. The [Sr/Fe] values of HD 25329 and HD 221170 follow the trend found in halo stars (0.0 < [Sr/Fe] < 0.5) in very metal-poor stars.
The value of HD 4306 is within the wide distribution of [Sr/Fe] found in extremely metal-poor stars. The CH star HD 201626 exhibits a clear excess of Sr, whereas it is smaller than that found for heavier neutron-capture elements, e.g., Ba (Placco et al. 2015). This is an anticipated result from the s-process models for very low metallicity, where heavier elements are more enhanced owing to the higher neutron exposures on a smaller amount of seed nuclei. The errors of the equivalent widths (σ_W) at around 1 µm are about 0.4 pm (table 2) for a spectrum with S/N ~ 100. If 3σ_W is adopted as an upper limit, the detection limit of the Si abundance is [Si/H] ~ -4.5 and -4.0 for red giants (T_eff ~ 5000 K) and subgiants (T_eff ~ 5500 K), respectively. This indicates that future measurements of Si abundances for VMP stars from high-resolution near-infrared spectra are very promising, and the abundance trends and scatter of [Si/Fe] will be well determined. The detection limit of the Sr abundance from the near-infrared triplet lines estimated under the same assumption is [Sr/H] ~ -3.6 and -2.9 for red giants and subgiants, respectively. This means that these lines are not sufficiently strong for abundance measurements of Sr in EMP stars if Sr is under-abundant. Indeed, no Sr line is detected in our spectra of BD+44°493 and LAMOST J 2217+2104. Instead, these lines are useful to determine precise abundances for stars with relatively high Sr abundances, for which the resonance lines in the blue range are too strong and/or severely affected by blending with other lines. Hence, the near-infrared triplet lines and the blue resonance lines are complementary to cover the wide ranges of Sr abundance ratios in VMP and EMP stars. The scatter of [Sr/Fe] found in metal-poor stars is as large as 3 dex (e.g., McWilliam et al. 1995). This is much larger than the measurement errors from the resonance lines. Improving the abundance measurements using the triplet lines will contribute to determining more detailed distributions of these abundance ratios, which may identify some fine structure or clustering in the abundance distributions (e.g., Roederer 2013; Aoki et al. 2020).

Summary and concluding remarks

We have determined Si and Sr abundances for six metal-poor stars from measurements of spectral lines identified in high-resolution near-infrared spectra obtained with the Subaru Telescope InfraRed Doppler instrument (IRD). The Si abundances derived from infrared spectra exhibit clear trends and over-abundances. Further measurements of the near-infrared lines will provide reliable Si abundances to determine the abundance trends and scatter, which can be used to place strong constraints on chemical-evolution models. The Sr triplet lines in the J-band are also useful for determining the abundance distribution of this element in metal-poor stars, covering objects with high Sr abundances.

Figure legend (comparison with literature data): (Cayrel et al. 2004), crosses (Yong et al. 2013), and open squares (Jacobson et al. 2015). Notes: 1 The solar abundance of log ǫ⊙(Si) = 7.51 is adopted. 2 Taken from the literature given in table 1.
2021-12-15T02:16:02.738Z
2021-12-14T00:00:00.000
{ "year": 2021, "sha1": "f27bad06fd44dee37c1b037e21da11a801860961", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f27bad06fd44dee37c1b037e21da11a801860961", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119572777
pes2o/s2orc
v3-fos-license
Derivatives of the L^p cosine transform

The $L^p$-cosine transform of an even, continuous function $f\in C_e(S^{n-1})$ is defined by: $$H(x)=\int_{S^{n-1}}|\langle x,\xi\rangle|^p f(\xi)\,d\xi,\quad x\in \mathbb{R}^n.$$ It is shown that if $p$ is not an even integer then all partial derivatives of even order of $H(x)$ up to order $p+1$ (including $p+1$ if $p$ is an odd integer) exist and are continuous everywhere in $\mathbb{R}^n\setminus\{0\}$. As a result of the corresponding differentiation formula, we show that if $f$ is a positive bounded function and $p>1$ then $H^{1/p}$ is a support function of a convex body whose boundary has everywhere positive Gauss-Kronecker curvature.

Introduction

Recent research in convex geometry has repeatedly utilized two important integral transforms of functions defined on the unit sphere S^{n-1} in ℝ^n. These are the cosine transform and the spherical Radon transform, both acting on C^∞_e(S^{n-1}), the space of infinitely differentiable even functions on S^{n-1}, by:

(T f)(x) = ∫_{S^{n-1}} |⟨x, ξ⟩| f(ξ) dξ,    (Rf)(x) = ∫_{S^{n-1} ∩ x^⊥} f(ξ) dξ,

where ⟨ , ⟩ denotes the scalar product, dξ the spherical Lebesgue measure, and x^⊥ the (n-1)-dimensional subspace orthogonal to x. It is well known that T and R are both continuous bijections of C^∞_e(S^{n-1}) onto itself (the topology on C^∞_e(S^{n-1}) taken as uniform convergence of all derivatives). This fact allows an extension of both transforms, by duality, to bi-continuous bijections of the dual space D_e(S^{n-1}) of even distributions on S^{n-1}. A pleasant consequence of this extension is that we may assign precise meanings to the symbols Rρ, R^{-1}ρ, Tρ, T^{-1}ρ, for a given even distribution ρ ∈ D_e(S^{n-1}). In particular, one talks about the cosine transform of an L^1 function, or the spherical Radon transform of a measure. These purely analytic manipulations turned out to have surprisingly far-reaching consequences. For example, the key to the ultimate solution of the Busemann-Petty problem (which was one of the most intriguing unsolved problems of convex geometry) was uncovered by Lutwak in [17], where the notion of intersection body was invented. An origin-symmetric convex body is called an intersection body if its radial function is realized as a spherical Radon transform of a positive measure on S^{n-1}. Lutwak reduced the Busemann-Petty problem to the analytic question of whether R^{-1}ρ is a positive measure whenever ρ is a radial function of a centrally symmetric convex body. The answer is yes, if and only if the dimension is at most 4. Although in general it was known that for sufficiently large n the Busemann-Petty problem has a negative answer in ℝ^n (see [3]), the curious dependence on the dimension and the precise role of convexity were not understood until they were revealed by means of sophisticated analysis in [12]. The relevance of the cosine transform to convex geometry becomes clear through the concept of zonoids, also called projection bodies. These are bodies that can be approximated to any degree of accuracy, in the Hausdorff metric sense, by finite vector sums of intervals, called zonotopes. Every zonotope has a center of symmetry (namely, the sum of the centers of the intervals). Up to translation, every zonotope Z therefore has the form Z = ∑_{i=1}^{m} [-x_i, x_i] with x_1, …, x_m ∈ ℝ^n. Then

h_Z(x) = ∑_{i=1}^{m} |⟨x, x_i⟩| = ∫_{S^{n-1}} |⟨x, ξ⟩| dµ(ξ),    (1)

where µ is a positive, discrete measure on S^{n-1}. In other words, the support function of a zonotope is a cosine transform of a positive, discrete measure.
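Relation (1) is easy to check numerically. The sketch below (with arbitrary generators in ℝ³) compares the discrete cosine-transform sum with the support function computed directly as a maximum over the vertices of the zonotope:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))   # generators x_i of the zonotope Z = sum_i [-x_i, x_i]
u = rng.normal(size=3)
u /= np.linalg.norm(u)         # an arbitrary direction

# Support function via the cosine-transform formula (1): sum_i |<u, x_i>|
h_formula = np.abs(xs @ u).sum()

# Support function directly: h_Z(u) = max over z in Z of <z, u>; the maximum
# is attained at a vertex z = sum_i eps_i x_i with eps_i in {-1, +1}.
h_direct = max(np.dot(np.array(eps) @ xs, u)
               for eps in itertools.product([-1.0, 1.0], repeat=len(xs)))

assert np.isclose(h_formula, h_direct)
print(h_formula)
```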
The measure µ in (1) is called the generating measure of Z. Generalizing this concept, Weil [21] proved that to every centrally symmetric convex body K ⊂ ℝ^n corresponds a unique generating distribution, that is, a continuous linear functional ρ_K on the space C^∞_e(S^{n-1}), whose domain can be extended so as to include the functions |⟨u, ·⟩| with u ∈ S^{n-1}, such that ρ_K(|⟨u, ·⟩|) = h_K(u) for every u ∈ S^{n-1}. Recall that positive distributions are in fact positive measures. Thus in the context of zonoids Weil's result is particularly useful: it provides a priori a functional, namely T^{-1}h_K, whose positivity is to be checked. Interestingly, the cosine and spherical Radon transforms are related by an identity of the form

(∆_n + n - 1) T f = c_n R f,    (2)

where ∆_n is the spherical Laplace operator on S^{n-1}, and c_n > 0 (see [4]). The inversion formula (2) proved a useful analytic tool in constructing examples of non-smooth zonoids whose polars are zonoids [16], and of convex bodies whose generating distributions have large degree [15]. Often one thinks of h_Z(x) in (1) as representing the norm of some space, which in this case is isometric to a subspace of L^1(S^{n-1}, µ). A natural generalization is then to look at functions of the form

H(x)^p = ∫_{S^{n-1}} |⟨x, ξ⟩|^p dµ(ξ).    (3)

If µ is positive, H is continuous, convex and 1-homogeneous, hence a support function of some convex body, and also the norm of some normed space, which is evidently isometric to a subspace of L^p(S^{n-1}, µ). The r.h.s. of (3) is called the L^p-cosine transform of the measure µ, and is denoted by T_p µ. If p is not an even integer, the measure µ in (3) is uniquely determined by the norm on the left-hand side. For p = 1, this was first proved by Alexandrov [1] and rediscovered several times since. In [19], Neyman proved that if p is not an even integer, the linear span of the functions |⟨x, ·⟩|^p, defined on S^{n-1} and indexed by x ∈ ℝ^n, is dense in the space C_e(S^{n-1}) of continuous even functions on S^{n-1}. In particular, µ in (3) is uniquely determined. If p is an even integer, the functions |⟨x, ·⟩|^p span precisely the subspace of homogeneous (even) polynomials of degree p (see [19]), so that there is no longer uniqueness in the representation (3). The inversion problem for the L^p-cosine transform of L^1 functions has been treated in [7] in several important special cases. The general case of inversion has apparently been neglected. In a recent paper [18], the cosine transform of a continuous function was shown to be a C^2 function. In the first section below, this result is generalized in two ways. First, it is proved that for a nonnegative integer k, the (2k+1)-cosine transform of a continuous function is of class C^{2k+2}. The proof below invokes Fourier transform techniques developed by Koldobsky in a series of papers ([6, 7, 8, 9, 10]). Then, we deal with the L^p-cosine transform where p > 1 is not an integer, and show that if f is a bounded function, then T_p f has continuous derivatives of the largest even order smaller than p + 1. For second-order derivatives, this was done in a more general setting in [11], using other methods. The first section is concluded with an additional result, asserting that the cosine transform carries L^1(S^{n-1}) into C^1(S^{n-1}). Our main application is expounded in section 2, where we show that if H^p = T_p f with f positive and bounded, then H is a support function of a centrally symmetric C^2_+ convex body. That is, the boundary of the body has everywhere positive Gauss-Kronecker curvature.
This should be compared to Theorem 2 of [18], which asserts that zonoids (i.e., the p = 1 case) whose generating measures are continuous functions may fail to have positive Gauss-Kronecker curvature at some boundary point only because all the principal radii of curvature evaluated at the corresponding outward unit normal are zero (whereas in general the curvature may not exist due to just one vanishing principal radius of curvature).

Theorem 2.1. Let n ≥ 2 and suppose that H(x) = ∫_{S^{n-1}} |⟨x, ξ⟩|^{2k+1} f(ξ) dξ, where k is a nonnegative integer and f ∈ C_e(S^{n-1}). Then H ∈ C^{2k+2}(ℝ^n\{0}), and for every multi-index α with |α| = 2k + 2, one has for each x ∈ ℝ^n\{0}

D^α H(x) = c_{k,α} ‖x‖₂^{-1} R(ξ₁^{α₁} ··· ξₙ^{αₙ} f)(x/‖x‖₂),    (4)

for a constant c_{k,α} depending only on k and α.

In case the differentiation order |α| is strictly smaller than p + 1 (and even), the assumptions on f can be somewhat relaxed, and the corresponding differentiation formula is different. For these reasons the result is formulated separately.

Theorem 2.2. Let n ≥ 2 and suppose that H(x) = ∫_{S^{n-1}} |⟨x, ξ⟩|^p f(ξ) dξ, where p > 1, p ≠ 2k, and f ∈ L^∞(S^{n-1}). Let α be a multi-index such that |α| is even and |α| < p + 1. Then H ∈ C^{|α|}(ℝ^n\{0}) and

D^α H(x) = p(p-1)···(p-|α|+1) ∫_{S^{n-1}} ξ₁^{α₁} ··· ξₙ^{αₙ} |⟨x, ξ⟩|^{p-|α|} f(ξ) dξ.    (5)

For the proofs, we use distribution theory and Fourier transforms. As usual, let S(ℝ^n) denote the space of rapidly decreasing infinitely differentiable functions (test functions) in ℝ^n, and S′(ℝ^n) is the space of distributions over S(ℝ^n). The Fourier transform of a distribution f ∈ S′(ℝ^n) is defined by (f̂, φ̂) = (2π)^n (f, φ), for every test function φ.

Proof of Theorem 2.1. For every test function φ(x) supported in ℝ^n\{0} consider another test function ψ(x) = x₁^{α₁} ··· xₙ^{αₙ} φ(x). Since |α| is even, so is ψ. From lemma 2.2 of [7] we obtain an expression for the pairing of Ĥ with ψ; therefore it can be rewritten as an integral over S^{n-1}. By the well-known connection between the Fourier transform and the Radon transform (see [5]), the function t ↦ (2π)^n φ(-tξ) is the Fourier transform of z ↦ ∫_{⟨x,ξ⟩=z} φ̂(x) dx. Put g(ξ) = ∏_{k=1}^{n} ξ_k^{α_k} f(ξ), and let R denote the spherical Radon transform. Since for n ≥ 2 the function ‖x‖₂^{-1} is locally integrable, we arrive at (7); self-duality of the spherical Radon transform was used here. Consequently, the pairing above equals that of ‖x‖₂^{-1} Rg(x/‖x‖₂) with φ̂. On the other hand, the well-known connection between differentiation and Fourier transforms gives (8). Recall that the double Fourier transform satisfies φ̂̂ = (2π)^n φ(-x). Therefore, for every distribution f and an even test function φ, one has (f, φ̂) = (f̂, φ). Since φ(x) is an arbitrary even test function (with 0 ∉ supp φ), (7) and (8) together imply that the Fourier transforms of the distributions in (9) are equal distributions in ℝ^n\{0}. Therefore, the distributions in (9) can differ by a polynomial only ([5], p. 119). Since both distributions are even and homogeneous of degree -1, the polynomial must be identically zero. Hence the distributions in (9) are equal. To show that H is a C^{|α|} function we must show that D^α H exists also in the classical sense and is continuous. As is well known in the theory of distributions, classical and distributional derivatives coincide if the distributional derivative in question happens to be a continuous function ([13], p. 136). Since f is continuous, so is the spherical Radon transform of ∏_{k=1}^{n} ξ_k^{α_k} f(ξ). Therefore D^α H(x) is a continuous function, and we have (4).

The proof of Theorem 2.2 uses the same technique. Instead of (6) we now have the analogous identity (11): since p - |α| > -1, and p - |α| is not an even integer, we can apply Lemma 2.1 of [7]. The connection between differentiation and the Fourier transform yields (12) in this case. Together, (11) and (12) imply that the Fourier transforms of the distributions in (13) are equal distributions in ℝ^n\{0}. As before, the distributions in (13) are therefore equal. To show that D^α H is also continuous in the case |α| - 1 < p < |α|, pick a sequence x_m ≠ 0 such that lim_{m→∞} x_m = x₀ ≠ 0.
For sufficiently large m, we have |⟨x_m, ξ⟩| ≥ |⟨x₀, ξ⟩|/2 for each ξ ∈ S^{n-1}. Therefore, the integrand in the right-hand side of (13) is almost everywhere bounded above by an integrable function, and the continuity of D^α H follows by dominated convergence. Finally, we turn to the result that the cosine transform carries L^1(S^{n-1}) into C^1(S^{n-1}): if h_Z = T f with f ∈ L^1 and f > 0, then Z is a strictly convex zonoid, so h_Z is differentiable in ℝ^n\{0}. The proof is completed by noting that support functions differentiable in ℝ^n\{0} are already continuously differentiable there.

Application to curvature and convexity

The main result in this section is the following.

Theorem 3.1. Suppose that H^p = T_p f, where p > 1, p ≠ 2k, and f is a positive element of L^∞(S^{n-1}). Then H(x) is a support function of a centrally symmetric convex body that has everywhere positive Gauss-Kronecker curvature.

The proof is largely based upon the next lemma.

Lemma 3.2. Let H, f and p be as in Theorem 3.1, and let u, v ∈ S^{n-1} be linearly independent. Then the second directional derivative D²_u H(v) is positive.

Proof. Differentiating, one finds an expression for D²_u H(v) in terms of H^p = T_p f and its first and second directional derivatives; if H^p = T_p f, then by Theorem 2.2 these exist. Moreover, differentiation under the integral sign can easily be justified. Next, applying the triangle inequality and the Cauchy-Schwarz inequality, one obtains D²_u H(v) ≥ 0. In case of equality, we have equality in the triangle inequality, and in the Cauchy-Schwarz inequality, applied to the functions |⟨u, ·⟩|^{(p-2)/2}|ξ₁| and |⟨u, ·⟩|^{p/2}. Therefore, for every ξ ∈ supp f, we have s|ξ₁| = t|⟨u, ξ⟩| for some real constants s, t not both zero. We cannot have s = 0 (resp. t = 0), for then the support of f would have to be contained in u^⊥ (resp. e₁^⊥), which contradicts the positivity of f. By the first part of the proof, applied to H ∘ U and f ∘ U in place of H and f for a suitable rotation U, we get the same strict inequality whenever v ∈ S^{n-1} and v ≠ ±e₁. Therefore D²_u H(v) > 0 whenever v ≠ ±u, as was asserted.

Proof of Theorem 3.1. By Theorem 2.2, H^p, and therefore H, are C^2 functions in ℝ^n\{0}. Since f is positive, H is a support function of some (strictly) convex body, say, K. To show that K is of class C^2_+, it suffices to show that K has everywhere positive principal radii of curvature ([20], p. 111). Let T_u denote the tangent space to S^{n-1} at u. The principal radii of curvature are eigenvalues of the reverse Weingarten map W_u : T_u → T_u, where W_u is d(∇H)_u. (Note that since the gradient ∇H(u) is the unique point on the boundary of K at which u is an outer normal vector, its gradient d(∇H)_u maps the tangent space T_u into itself.) By [20], p. 108, Lemma 2.5.1, W_u is self-adjoint. Therefore, if λ is an eigenvalue of W_u with a unit eigenvector v, then λ = ⟨W_u v, v⟩. As explained in [20], p. 110, this equals the second directional derivative D²_v H(u), which by Lemma 3.2 is positive.

Remark. Theorem 3.1 no longer holds for p = 1. In fact, we can have h = T f with f ∈ C^∞_e(S^{n-1}) and f > 0, but nonetheless h is not C^2_+. Any zonoid whose support function is C^∞ but not C^2_+ will do. A special case of Theorem 2.1, for k = 0, was proved (in an elementary way) recently in [18]. Clearly, the L^p-cosine transform of a positive measure is a convex function, if p ≥ 1. However, there are also L^p-cosine transforms of signed measures, possibly not positive, that are convex functions. A theorem by Lindquist [14] asserts that the cosine transform T f(x) defines a support function if and only if

∫_{S^{n-1} ∩ u^⊥} ⟨ξ, x⟩² f(ξ) dξ ≥ 0    (15)

for all u ∈ S^{n-1} and all x ∈ S^{n-1} ∩ u^⊥. As was observed in [18], the expression in (15) has a direct differential meaning. Put k = 0 in (4). The result is

∂²H/∂x_i∂x_j (x) = c ‖x‖₂^{-1} R(ξ_i ξ_j f)(x/‖x‖₂).

This in turn implies that for u ∈ S^{n-1}, the Hessian matrix H″ evaluated at u is given by:

H″(u) = c ( ∫_{S^{n-1} ∩ u^⊥} ξ_i ξ_j f(ξ) dξ )_{i,j=1}^{n}.

Therefore ⟨H″_u x, x⟩ becomes the integral in (15). All this was pointed out in [18]. Applying the same reasoning to (5), we get for p > 1 (p not an even integer):

⟨H″_u x, x⟩ = p(p-1) ∫_{S^{n-1}} ⟨ξ, x⟩² |⟨u, ξ⟩|^{p-2} f(ξ) dξ.

Hence we derive the following result, a p-version of Lindquist's criterion, which is an immediate consequence of the previous equation.
Corollary. Let p > 1, p ≠ 2k, and let f be as above. Then (T_p f)^{1/p} is a support function if and only if

∫_{S^{n-1}} ⟨ξ, x⟩² |⟨u, ξ⟩|^{p-2} f(ξ) dξ ≥ 0

for all u ∈ S^{n-1}, x ∈ ℝ^n.
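A quick numerical sanity check of this criterion (illustrative only: n = 2, p = 3/2, and a simple positive even density f are arbitrary choices) evaluates the integral above on a grid of directions; positivity of f makes every value positive, as the theory predicts.

```python
import numpy as np

p = 1.5
phis = (np.arange(2000) + 0.5) * np.pi / 1000          # midpoint grid over S^1
xis = np.stack([np.cos(phis), np.sin(phis)], axis=1)   # unit vectors xi
f = 1.0 + 0.5 * np.cos(2 * phis)                       # a positive, even density

def criterion(u, x):
    """Midpoint-rule value of the integral of <xi,x>^2 |<u,xi>|^(p-2) f(xi)."""
    w = (xis @ x) ** 2 * np.abs(xis @ u) ** (p - 2) * f
    return w.sum() * (np.pi / 1000)

angles = np.linspace(0.0, np.pi, 37)
vals = [criterion(np.array([np.cos(a), np.sin(a)]),
                  np.array([np.cos(b), np.sin(b)]))
        for a in angles for b in angles]
print(min(vals) > 0)   # True: f > 0 yields a support function
```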
2019-04-12T09:10:08.827Z
2001-11-27T00:00:00.000
{ "year": 2001, "sha1": "54ca70eec97277581b13791d5adec7f3bcb70fea", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8e46f9c1456377300765ecefb20c6594ae27491a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
221934626
pes2o/s2orc
v3-fos-license
The Effect of Family Life Education Training on Marital Satisfaction and Conflict Resolution Skills of Married Couples

Abstract: The purpose of this study was to determine the effectiveness of family life education training on marital satisfaction and marital conflicts in married couples. In this experimental study with a pre-test-post-test design, a sample of one hundred and twenty (120) couples was purposively selected in Maheswaram mandal, Telangana state. First, the pre-test was administered to both men and women; it consisted of a questionnaire on demographic information, marital satisfaction and marital conflicts. The intervention was then presented in ten 45-minute sessions of family life education training. At the end of the trainings, which lasted for 6 weeks, both groups filled in the questionnaires. The data were analysed using SPSS with frequencies, percentages and paired t-tests. The results showed significant differences in marital satisfaction and a reduction of the marital conflicts of the married couples. It can be concluded that the family life education program had a positive and significant effect on improving marital satisfaction and reducing couples' marital conflict and its components.

Introduction

Marriage continues to be an important social institution in India even in a time of changing social mores. Marriage is "not merely a set of social arrangements but also ideas, beliefs and values by which those arrangements are sustained" (Beteille, 1992). Traditionally, a marital union was regarded as perpetual and as an indissoluble sacrament in the cultural milieu of Indian society (Aura, 2008; Ramachandrappa, 2012). Marriages in India are not limited to the bond between the couple, but are perceived as a relationship between two families which are brought together socially (Aura, 2008). However, modernising forces have altered the sociocultural fabric of India, influencing the structure, functioning and role expectations of familial and marital relationships (Madan, 1993; Singh, 2002; Sharangpani, 2010).

Marital satisfaction is a mental state that reflects the perceived benefits and costs of marriage to a particular person. It is a multidimensional concept that includes different factors such as personality features, financial matters and child-rearing styles (Tazekand et al., 2013). Enquiry on marital satisfaction and the factors that influence marital satisfaction is vast and covers many areas relating to this topic.
Couple"s agreement on the style of relation, emotions expression, sexual relation, leisure time activities, home duties sharing, duration of being beside each other, external network and supply and incompatible explanations can affect marital satisfaction (Vangelisti and Huston, 1994;Bradbury, et al., 1996). K e y w o r d s Conflict is inevitable process in any marriage. The opposing needs and interests of the couples lie at the core of marital conflict. Conflict in general is described as the process that begins when one party perceives that the other one has frustrated some concerns of his/hers (Thomas, 1976) an interpersonal conflict exists whenever an action by one person prevents, obstructs or interferes with the actions of another person (Johnson 1990). In conflict situation, couples express or latent differences in satisfying their individual needs and interests, and they experience interference from their partners in accomplishing these goals. In the contemporary family, there is a great need to negotiate the changing role of husband and wife. Discussions about who makes the decisions and how they should be made create a great potential for marital conflict. Marital conflict in itself is not necessarily bad. In fact, less emphasis should be placed on the number of conflicts experienced by a couple than on how they are managed and resolved. More specifically, Gottman and Levenson (1988) suggest that the manner in which a couple handle negative effect in a conflict determines whether the marriage succeeds or fails. The couple's skill in conflict resolution and the subsequent impact that such resolution has on each partner hold the key to whether the marriage continues to function in a constructive way or becomes a destructive or malfunctioning system. Hence in most interpersonal conflicts it is important to find a resolution. Research reviews on people with marital conflicts and who had attended the marriage enrichment training through the PAIRS method can reduce marital conflicts. This method helps individuals improve their relationships and at the same time preserve the quality of these relationships over time. This approach is a training model to teach skills for the improvement of satisfaction and stability of couple relationship. (Mahshid Alsadat Keyhandoost, et al., (2017). (2003) and Johnson and Cohen (2005) have also shown the positive effect of communication skill training and problem-solving on marital distress, controlling conflict, and marital satisfaction. Khojasteh Mehr (2008) argued that communication skills training can influence the positive emotions toward spouse. Emotionally focused therapy could reduce the rate of depression, anxiety and stress in couples and there were no significant differences between the two groups. (Mohammad Reza Shairi 2014). Family Life Education is the practice of equipping and empowering family members to develop knowledge and skills that enhance well-being and strengthen interpersonal relationships through an educational, preventive, and strengths-based approach. It is the process of developing awareness and understanding of population situations as well as rational attitude and behavior towards those situations for the attainment of quality life for the family and the nation. Family life education is concerned with the study of attitudes and skills related to dating, marriage, parenting, family health and life of the family as a socio-cultural and economic unit in the society. 
The need for a family life education subject is central to the holistic development of learners. One of the best ways to improve harmonious relationships between married couples is to teach them basic communication and conflict resolution skills and help them to solve the conflicts in their marriages in a healthy way (Fowers, 2001).

Materials and Methods

Married couples belonging to the age group of 18-38 years were selected for the study. A purposive sampling technique was adopted. In the state of Telangana, Ramchandraguda and Dubbacherla villages of Maheswaram mandal were selected for the study. The sample for the study comprises couples, both husband and wife. Sixty (60) couples were selected purposively from each of the two villages. Thus, a total of one hundred and twenty (120) couples were selected purposively from the two villages. To assess the marital satisfaction of the married couples, the investigator used the marital satisfaction scale covering the social, emotional, interpersonal and sexual dimensions (Brinda Amritraj and Indira Jai Prakash, 1985) for the pre- and post-test to measure the effect of family life education on marital satisfaction. The data on marital satisfaction were collected in two stages, one before and one after the intervention. The collected data were coded and analyzed using frequencies, percentages and paired t-tests (Table 1).

Results and Discussion

From Table 2 it was evident that the mean pre- and post-test scores of the intervention showed significant differences in marital satisfaction. The intervention training significantly reduced marital conflicts and improved the marital adjustment of the married couples. The trainings also significantly influenced the four aspects of marital satisfaction of the married couples: social, emotional, interpersonal and sexual. The study revealed that the mean scores of marital satisfaction increased significantly after the intervention. It can be concluded that the intervention programme had positive effects on marital adjustment levels by improving communication. The results of the study are in congruence with the study conducted by Azadeh Soltani (2013) on the effectiveness of emotionally focused couple therapy on the intimacy of couples. The results revealed a significant difference between the two groups in intimacy. Further results showed that EFCT increased emotional, psychological, sexual, physical, relationship, temporal and intellectual intimacy between the two groups. Another study in line with these results was conducted by Maryam Zarnaghash (2013) on the influence of family therapy on marital conflicts. The results revealed that family therapy has a significant influence on the solving of marital problems, especially communication problems between couples; family therapy also decreased individual conflicts in women. From Table 3, it was found that there was a significant relationship between the pre- and post-test scores for the problem-solving skills of the married couples. The t-test results showed that there was a significant difference in the married couples after the intervention in all domains, such as the physical, emotional, interpersonal, family and financial domains. The high mean scores on the conflict-resolving skills components indicate that the married couples had a better ability to resolve their conflicts and improved communication patterns. It can be concluded that the intervention programme was effective in reducing marital conflicts, and that proper communication skills increase marital satisfaction.
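As an illustration of the paired t-test analysis reported in Tables 2 and 3 (the study used SPSS), the sketch below runs the same test in Python on hypothetical pre/post scores; the sample size matches the study, but the score values and the assumed mean gain are invented for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pre/post marital-satisfaction scores for 120 couples;
# the study's actual data are not reproduced here.
pre = rng.normal(loc=60.0, scale=10.0, size=120)
post = pre + rng.normal(loc=6.0, scale=5.0, size=120)  # assumed average gain

t_stat, p_value = stats.ttest_rel(post, pre)           # paired t-test, as in SPSS
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```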
It can be concluded that intervention programme had positive effective in reducing marital conflicts as well as proper communication skills increases marital satisfaction. The results were further supported by Mazhari et al., (2016) study on the effect of relationship enhancement program (REP) on reducing marital conflicts of dual-career couples. The sessions of relationship enhancement program were held in 8 sessions in groups for the experiment group, but the control group did not receive any intervention ( Fig. 1 and 2). 1.The concept of conflict in marital relationships and understanding its naturalness 2. Identifying the barriers to conflict resolution 3-correct roles and principles of conflict resolution 4-implementing the practical techniques of conflict resolution 3. Problem solving skills 1.Defining the problem-solving process 2. Teaching the steps of problem-solving process 3. Implementing the practical techniques of problem-solving. Empathy 1. Concept of empathy and understanding its naturalness. 2. Enhancing empathy active listening, verbal and non-verbal models and 5. Decision making skills 1. Teaching the importance of accepting the spouse and involving partners in decision-making and respecting 2. his/her opinions and feelings, exercising to investigate the forms of men"s resistance to be involved in decision-making 3. with their wives, measures to enhance the emotional feelings, and acceptance of surrender. The results showed that the training of relationship enhancement program (REP) has led to significant decrease in marital conflicts among the women of the experiment group in the post-test stage (F = 13.92 and P > 0.01), and it has led to significant decrease in all the components of this variable, as well; including reduction of cooperation, reduction of sexual relationship, increased emotional reactions, increase in drawing child"s support, increased personal relationship with relatives, reduction of family relationship with relatives of spouse and friends, and separating financial matters from each other. Elham Mohammadi et al., (2016) study on Effectiveness of Acceptance and Commitment Therapy (ACT) on Depression in Women with Marital Conflicts. showed a significant reduction of depression at the post-test and follow-up stage (P<0.002). In conclusion the aim of this study was to evaluate the effect of family life education on marital satisfaction and problem-solving skills of married couple. The results of the study showed that family life education training had a positive and significant effect on the marital satisfaction of married couple and this intervention helped the couple in positive emotions and the ability to reduce negative emotions when solving conflicts. The findings of the study revealed that family life education programme led to changes in marital adjustments and reduced the marital conflicts of couples.
2020-09-26T02:43:02.517Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "51aab4c9175dcf322af26b9661eb74e01801fb0e", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-8-2020/T.%20Asha%20Jyothi,%20etral.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "51aab4c9175dcf322af26b9661eb74e01801fb0e", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
228080180
pes2o/s2orc
v3-fos-license
Graphonomy: Universal Image Parsing via Graph Reasoning and Transfer Prior highly-tuned image parsing models are usually studied in a certain domain with a specific set of semantic labels and can hardly be adapted into other scenarios (e.g., sharing discrepant label granularity) without extensive re-training. Learning a single universal parsing model by unifying label annotations from different domains or at various levels of granularity is a crucial but rarely addressed topic. This poses many fundamental learning challenges, e.g., discovering underlying semantic structures among different label granularity or mining label correlation across relevant tasks. To address these challenges, we propose a graph reasoning and transfer learning framework, named"Graphonomy", which incorporates human knowledge and label taxonomy into the intermediate graph representation learning beyond local convolutions. In particular, Graphonomy learns the global and structured semantic coherency in multiple domains via semantic-aware graph reasoning and transfer, enforcing the mutual benefits of the parsing across domains (e.g., different datasets or co-related tasks). The Graphonomy includes two iterated modules: Intra-Graph Reasoning and Inter-Graph Transfer modules. The former extracts the semantic graph in each domain to improve the feature representation learning by propagating information with the graph; the latter exploits the dependencies among the graphs from different domains for bidirectional knowledge transfer. We apply Graphonomy to two relevant but different image understanding research topics: human parsing and panoptic segmentation, and show Graphonomy can handle both of them well via a standard pipeline against current state-of-the-art approaches. Moreover, some extra benefit of our framework is demonstrated, e.g., generating the human parsing at various levels of granularity by unifying annotations across different datasets. INTRODUCTION Human visual systems are capable of accomplishing holistic scene understanding at a single glance, e.g., identifying instances from the background, and recognizing object and background classes. Nevertheless, recent research efforts mainly focus on understanding images within a specific domain, e.g., semantic image region segmentation [1], [2] and detailed human part/clothes parsing [3], [4], [5], [6], [7]. The generalization capability of these models are limited since they are usually trained on a certain dataset with a specific set of semantic labels. Moreover, the underlying semantics structure and relatedness within images (e.g., "upperclothes can be interpreted as coat or shirt" and "ship often appears with background of sea or river") are rarely exploited in an explicit way. As a result, it is very hard to efficiently adapt the trained model into other relevant new scenarios. To address these problems and avoid redundant data annotation and re-training for discrepant label granularity, we propose to learn a universal image parsing model across multiple domains (e.g., relevant but different datasets or tasks). Specifically, the model is required to handle not only the detailed human parsing (i.e., segmenting human / parts at different coarse to fine-grained level across different datasets), as Fig. 1 illustrates, but also the panoptic scene understanding (i.e., segmenting each object instance and assigning class labels to each pixel). 
The most straightforward solution to the universal parsing would be posing it as a multi-task learning problem, and integrating multiple segmentation branches upon one shared backbone network [5], [6], [8], [9], [10], [11]. This category of approaches, however, basically resorts to brute-force feature-level information fusion while disregarding the underlying common semantic knowledge, such as label hierarchy, label visual similarity and linguistic/context correlations. A few recently proposed works in human parsing make attempts to incorporate human structure information by employing graphical models (e.g., Conditional Random Fields (CRFs)) [5], self-supervised losses [6] or human pose priors [12], [13], [14], whereas these models overlook the explicit relationships among the different body parts and clothing accessories, leading to suboptimal performance, especially for some infrequent fine-grained categories. In this paper, we propose to develop transfer learning and knowledge integration techniques across different domains for better handling the universal parsing, as the semantic labels are discrepant in different tasks or datasets and this discrepancy might largely hinder model unification. Specifically, a learning framework is presented for incorporating human knowledge and label taxonomy into the intermediate graph representation, which is thus named "Graphonomy" (i.e., graph taxonomy). It learns the global and structured semantic coherency in multiple domains via reasoning and transfer with the semantics-enhanced graph representation, enforcing the mutual benefits of the parsing across domains. Inspired by the effectiveness of humans utilizing semantic knowledge learned through experience, we develop our Graphonomy based on a structured graph representation that seamlessly integrates the image features and higher-level semantics. The Graphonomy includes two main modules: Intra-Graph Reasoning and Inter-Graph Transfer, which perform iteratively during the learning procedure. Notably, Graphonomy can be flexibly integrated with any modern image parsing system via the graph reasoning and transfer. All of the components of our Graphonomy are fully differentiable for end-to-end training and efficient inference.

Fig. 1: With huge differences in the granularity and quantity of semantic labels, image parsing is isolated into multiple-level tasks that hinder the model generalization capability and data annotation utilization. For example, the head region in one dataset is further annotated into several fine-grained concepts in another dataset, such as hat, hair and face. However, different semantic parts still have some intrinsic and hierarchical relations (e.g., the head includes the face; the face is next to the hair), which can be encoded as intra-graph and inter-graph connections for better information propagation. To alleviate the label discrepancy issue and take advantage of their semantic correlations, we introduce a learning framework, named "Graphonomy", which models the global semantic coherency in multiple domains via graph transfer learning to achieve multiple levels of human parsing tasks. For clarity, we only show a portion of labels and connections.

For Intra-Graph Reasoning, we first project the extracted image features into a graph, where each vertex represents a tensor aggregated from the pixels with similar features and is associated with a semantic label.
The edge connections among these graph vertices are represented by an adjacency matrix that can be derived either from fixed prior knowledge (e.g., the human part layout/configuration) or from a dynamic learning process with an attention mechanism. The graph convolution operation is then implemented along the graph structure for propagating the semantic knowledge from a global perspective and updating the features associated with the vertices. The updated features are then re-projected back to the feature map for enhancing the classification discriminability. In the module of Inter-Graph Transfer, our framework gradually distils related knowledge from the structured graph in one domain to the graph in another domain by employing the graph convolution operation, so that the different semantic labels across domains are bridged during the learning process. In this work, we separately discuss the knowledge transfer with regard to two different application scenarios. For human parsing, we aim to learn the model across datasets with discrepant label granularity and effectively utilize the annotations at multiple levels. To enhance the transfer capability, we make the first effort to exploit various graph transfer dependencies among different datasets. We encode the relationships between two semantic vertices from different graphs by computing their feature similarity as well as the semantic similarity encapsulated with linguistic knowledge. Notably, we explore different ways of building the connections between the two graphs. For panoptic scene understanding, Graphonomy jointly optimizes the two tasks (i.e., instance-level thing segmentation and pixel-wise segmentation of background stuff) and exploits their semantic relations in an explicit way. The semantic labels are not identically shared by the different tasks but are contextually co-related. Our transfer module bidirectionally propagates messages between the two graphs, and the connections are dynamically determined by the attention mechanism. That is, we can simply configure the transfer module with the same method used in the reasoning module, making the whole framework comprehensively compact. In sum, Graphonomy encodes a set of concepts in accordance with the taxonomy, and all graphs constructed from different domains (e.g., datasets) are connected following the transfer dependencies to enforce semantic feature propagation. Fig. 2 illustrates the overview of our Graphonomy framework. We conduct experiments on three large-scale human parsing benchmarks that contain diverse semantic body parts and clothes. The experimental results show that by seamlessly propagating information via Intra-Graph Reasoning and Inter-Graph Transfer, our Graphonomy is able to associate and distil the high-level semantic graph representations constructed from different datasets, which effectively improves multiple levels of human parsing tasks. Moreover, experiments are also conducted on two panoptic segmentation datasets and demonstrate the superiority of our Graphonomy in both accuracy and generality compared with recently proposed panoptic segmentation approaches [15], [16], [11], [17]. This paper makes the following main contributions.

• To the best of our knowledge, it makes the first attempt to tackle image parsing across multiple domains using a single universal model, and justifies its effectiveness on two challenging image parsing problems: detailed human parsing and panoptic scene segmentation.
• It presents a new framework of graph reasoning and transfer for seamlessly integrating semantic knowledge and deep feature learning without piling up the complexity. Various ways of graph transfer are also explored for better exploiting the underlying structure of semantics.

• It provides thorough experimental analysis on several standard large-scale benchmarks and demonstrates the advantage of our framework compared with the state-of-the-art methods.

The rest of the paper is organized as follows. We first review the past literature in Section 2. Section 3 introduces the overall architecture of our proposed framework and discusses the implementation of each main component. The applications of our framework to human parsing and panoptic segmentation are analyzed in Section 4 and Section 5, respectively, including the experimental results and comparisons. Section 6 concludes this paper with a discussion of future work.

Fig. 2: The overview of our Graphonomy, which tackles universal parsing via graph reasoning and transfer. The parsing model can be trained across domains (e.g., relevant but different tasks or datasets) with discrepant semantic labels.

RELATED WORK

Human Parsing and Panoptic Segmentation. Human parsing and panoptic segmentation are two related research topics in scene understanding, which have recently attracted a huge amount of interest with diverse applications and achieved great progress with the advance of deep convolutional neural networks and large-scale datasets. Human parsing aims to segment a human image into multiple parts with fine-grained semantics (e.g., body parts and clothing) and provides a more detailed understanding of image contents. Most of the prior works focused on developing new neural network models for improving the discriminability of the feature representation (e.g., the dilated convolution [5], [18], LSTM structures [19], [20], [21] and encoder-decoder architectures [3]) and on incorporating auxiliary information guidance such as human pose constraints [13], [14], [22]. Although these methods showed promising results on individual human parsing datasets, they basically disregarded the intrinsic semantic correlations across concepts by simply using one flat prediction layer to classify all labels, and they utilized the annotations in an inefficient way. Moreover, the trained models cannot be directly applied to another related task without heavy fine-tuning.

Aiming to unify the tasks of instance and semantic segmentation towards newly rising applications, panoptic segmentation has usually been discussed as a multi-task learning problem. Most of the recently proposed approaches [11], [16], [17], [15], [23] mainly focused on developing neural networks that contain multiple branches accounting for instance-aware segmentation and region segmentation respectively, with a backbone network shared by the two tasks. For example, Li et al. [17] showed that utilizing the feature maps learned for instance-aware segmentation is able to assist the performance of semantic segmentation. Xiao et al. [24] proposed to handle heterogeneous annotations by jointly optimizing co-related tasks. However, the modelling of inter-task dependency in these approaches is usually over-simplified by learning multi-branch feature representations, leading to suboptimal performance and limited generalization capacity.
In this work, our proposed Graphonomy framework is capable of explicitly reasoning about the contextual dependencies within the semantics-aware graph representation across domains, and of handling both human parsing and panoptic segmentation well. Specifically, we demonstrate the effectiveness of our method on human parsing by generating universal parsing across discrepant label granularities, which has never been addressed by existing human parsing approaches. For panoptic segmentation, we show that explicitly exploiting the underlying semantic configurations of the contextually co-related tasks is a key to improving not only the segmentation performance but also the interpretability of the learning process.

Knowledge Reasoning and Transfer. Many recent research efforts model domain knowledge as a graph for mining correlations among semantic labels or objects in images, which has proved effective in many scenarios of image understanding [25], [26], [27], [28], [29]. For example, Chen et al. [25] leveraged local region-based reasoning and global reasoning to facilitate object detection. Liang et al. [28] explicitly constructed a semantic neural graph network by incorporating the semantic concept hierarchy. Some sequential reasoning models for capturing contextual dependency were also proposed with LSTMs or other memory neural networks [30], [31]. Our work is also inspired by the effectiveness of transfer learning research [32], [33], [34], [35], [36], [37], [38], which aims to bridge different domains or tasks to mitigate the burden of manual labelling. For example, LSDA [33] transformed whole-image classification parameters into object detection parameters through a domain adaptation procedure. Hu et al. [34] considered transferring knowledge learned from bounding box detection to instance segmentation. Some previous works [35], [36], [38] considered adjusting the network architecture by crafting specific modules for improving model capacity transfer. Li et al. [37] proposed a new training strategy to handle a new task without forgetting the knowledge learned in the source domain. The proposed Graphonomy advances the existing models in several aspects. First, our framework is more flexible in transferring knowledge across datasets or co-related tasks. Second, our Graphonomy is capable of dynamically adjusting the connections among graph nodes rather than reasoning with a fixed graph structure. Third, external knowledge such as linguistic embeddings can also be incorporated into our reasoning framework without piling up the complexity.

GRAPHONOMY

In this section, we introduce the proposed learning framework, Graphonomy, which explicitly incorporates graph reasoning and transfer learning upon a conventional parsing network and enforces the mutual benefits of parsing across domains, as Fig. 2 illustrates. This framework involves two modules, Intra-Graph Reasoning and Inter-Graph Transfer, which perform iteratively within the semantics-aware graph representation. We start by taking human parsing as a specific scenario to discuss the two modules of Graphonomy. Specifically, our framework on human parsing can handle different levels of human parsing needs (i.e., label annotations that vary from dataset to dataset); its overview on human parsing is shown in Fig. 3. We further introduce how to extend Graphonomy for handling multi-level universal parsing across multiple datasets with a single model.
Then, we discuss the adaptation of our framework to panoptic segmentation.

Intra-Graph Reasoning

Given local feature tensors from convolution layers, we introduce Intra-Graph Reasoning to enhance local features by leveraging global graph reasoning with external structured knowledge. To construct the graph, we first summarize the extracted image features into high-level representations of graph nodes. The visual features that are correlated to a specific semantic part (e.g., face) are aggregated to depict the characteristic of its corresponding graph node.

Formally, we define an undirected graph as $G = (V, E)$, where V and E denote the vertices and edges respectively, and $N = |V|$. We take $X \in \mathbb{R}^{H \times W \times C}$ as the module input, where H, W and C represent the height, width and channel number of the feature maps. We first produce the high-level graph representation $Z \in \mathbb{R}^{N \times D}$ of all N vertices, where D is the feature dimension of each $v \in V$, and the number of nodes N is consistent with the number of target part labels of a dataset. The projection can be formulated as

$$Z = \phi(X, W), \quad (1)$$

where W is the trainable transformation matrix for converting the image features into the dimension D, and the projection function $\phi$ maps the feature representation to the graph representation $Z \in \mathbb{R}^{N \times D}$. Specifically, the projection process first learns a projection parameter $P \in \mathbb{R}^{C \times N}$ and converts the feature dimension of X according to the number of nodes as

$$X_1 = X \times P, \quad (2)$$

where $X \in \mathbb{R}^{H \times W \times C}$ is resized to $\mathbb{R}^{HW \times C}$, $\times$ denotes matrix multiplication, and we obtain $X_1 \in \mathbb{R}^{HW \times N}$. Then, we calculate an intermediate feature

$$X_2 = X_1^{\top} \times X, \quad (3)$$

where $X \in \mathbb{R}^{H \times W \times C}$ is again resized to $\mathbb{R}^{HW \times C}$. We multiply $X_2$ with a trainable weight matrix $W_1 \in \mathbb{R}^{C \times D}$ to obtain the graph representation $Z \in \mathbb{R}^{N \times D}$,

$$Z = X_2 \times W_1. \quad (4)$$

The graph projection process can thus be specified as

$$Z = \phi(X, W) = (X \times P)^{\top} \times X \times W_1. \quad (5)$$

Furthermore, we exploit the semantic constraints from human body knowledge to invoke global graph reasoning based on the high-level graph representation Z. As shown in Fig. 4, we introduce the connections between human body parts to encode the relationship between two nodes. For example, hair usually appears together with the face, so these two nodes are linked, while the hat node and the leg node are disconnected because they have no direct relation. Following the graph convolution method [26], we perform graph propagation over the representations Z of all part nodes with matrix multiplication, resulting in the enhanced features $Z_e$:

$$Z_e = \sigma(A_e Z W_e), \quad (6)$$

where $W_e \in \mathbb{R}^{D \times D}$ is a trainable weight matrix and $\sigma$ is a nonlinear function. The node adjacency weight $a_{v \to v'} \in A_e$ is defined according to the edge connections $(v, v') \in E$, and $A_e$ is a normalized symmetric adjacency matrix. We employ the graph convolution T times (e.g., T = 3 in practice).

Fig. 4: Examples of the definite connections between each two human body parts, which is the foundation for encoding the relations between two semantic nodes in the graph for reasoning. Two nodes are correlated if they are connected by a white line.

At last, the evolved global context can be used to further boost the capability of the image representation. Similar to the projection operation (Eq. 5), we use another transformation matrix to re-project the graph nodes to the image features $X_p$. As a result, the image features are updated by the weighted mappings from each graph node, which represents the different characteristics of the semantic parts.
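To make the data flow of Eqs. (2)-(6) concrete, the following is a minimal PyTorch sketch of the Intra-Graph Reasoning module. It is an illustrative reconstruction from the formulas above, not the released implementation: the initialization scale, the residual update X' = X + X_p (taken from Algorithm 1 later in the paper) and the route of the re-projection through X_1 are our assumptions.

```python
import torch
import torch.nn as nn

class IntraGraphReasoning(nn.Module):
    """Sketch of Eqs. (2)-(6): project features onto N graph nodes,
    run T graph convolutions over the prior adjacency A_e, re-project."""
    def __init__(self, c, n_nodes, d, a_e, t=3):
        super().__init__()
        self.p = nn.Parameter(0.01 * torch.randn(c, n_nodes))   # P (C x N), Eq. (2)
        self.w1 = nn.Parameter(0.01 * torch.randn(c, d))        # W1 (C x D), Eq. (4)
        self.w_e = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(d, d)) for _ in range(t)])
        self.register_buffer("a_e", a_e)                        # normalized adjacency
        self.w_r = nn.Parameter(0.01 * torch.randn(d, c))       # re-projection (assumed)

    def forward(self, x):                      # x: (H, W, C) feature map
        h, w, c = x.shape
        xf = x.reshape(h * w, c)               # resize X to (HW x C)
        x1 = xf @ self.p                       # Eq. (2): X1 (HW x N)
        z = (x1.t() @ xf) @ self.w1            # Eqs. (3)-(4): Z (N x D)
        for w_e in self.w_e:                   # Eq. (6), T propagation steps
            z = torch.relu(self.a_e @ z @ w_e)
        x_p = (x1 @ z @ self.w_r).reshape(h, w, c)  # re-project nodes to pixels
        return x + x_p                         # enhanced features X' = X + X_p

# e.g.: m = IntraGraphReasoning(c=256, n_nodes=7, d=128, a_e=torch.eye(7))
#       out = m(torch.randn(64, 64, 256))     # identity adjacency as a stand-in
```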
Inter-Graph Transfer

To distil relevant semantics from a source graph to a target graph, we introduce Inter-Graph Transfer to bridge all semantic labels from different datasets. Although different levels of human parsing tasks have diverse, distinct part labels, there are explicit hierarchical correlations among them to be exploited. For example, the torso label in one dataset includes upper-clothes and pants in another dataset, and the upper-clothes label can be composed of more fine-grained categories (e.g., coat, T-shirt and sweater) in a third dataset, as shown in Fig. 1. We make efforts to explore various graph transfer dependencies between different label sets, including feature-level similarity, a handcrafted relationship, and a learnable weight matrix. Moreover, considering that the complex relationships between different semantic labels are arduous to capture from limited training data, we employ a semantic similarity that is encapsulated with linguistic knowledge from word embeddings [39] to preserve the semantic consistency in a scene. We encode and incorporate these different types of relationships into the network to enhance the graph transfer capability.

Let $G_s = (V_s, E_s)$ denote a source graph and $G_t = (V_t, E_t)$ a target graph, where $G_s$ and $G_t$ may have different structures and characteristics. We represent a graph as a matrix $Z \in \mathbb{R}^{N \times D}$, where $N = |V|$ and D is the dimension of each vertex $v \in V$. The graph transfer can be formulated as

$$\tilde{Z}_t = Z_t + A_{tr} Z_s W_{tr}, \quad (7)$$

where $A_{tr} \in \mathbb{R}^{N_t \times N_s}$ is a transfer matrix for mapping the graph representation from $Z_s$ to $Z_t$, and $W_{tr} \in \mathbb{R}^{D_s \times D_t}$ is a trainable weight matrix. We seek a good graph transfer dependency $A_{tr} = \{a_{i,j}\}$, $i \in [1, N_t]$, $j \in [1, N_s]$, where $a_{i,j}$ denotes the transfer weight from the j-th semantic node of the source graph to the i-th semantic node of the target graph. We introduce and compare four schemes for implementing the transfer matrix; their effectiveness will be evaluated in our experiments.

Handcraft relation. Considering the inherent correlation between two semantic parts, we first define the relation matrix as a hard weight, i.e., {0, 1}. When two nodes have a subordinate relationship, the value of the edge between them is 1, and 0 otherwise. For example, hair is a part of the head, so the edge value between the hair node of the target graph and the head node of the source graph is 1.

Learnable matrix. In this scheme, we randomly initialize the transfer matrix $A_{tr}$, which is then learned during network training.

Feature similarity. The transfer matrix can also be dynamically established by computing the similarity between the source graph nodes and the target graph nodes, which encode high-level semantic information. The transfer weight $a_{i,j}$ is calculated as

$$a_{i,j} = sim(v_i^t, v_j^s), \quad (8)$$

where $sim(x, y)$ is the cosine similarity between x and y, and $v_i^t$ and $v_j^s$ represent the feature vectors of the i-th target node and the j-th source node, respectively.

Semantic similarity. Besides the visual information, we further explore linguistic knowledge to construct the transfer matrix. We use the word2vec model [39] to map the semantic word of each label to a word embedding vector. Then we compute the similarity between the nodes of the source graph $V_s$ and the nodes of the target graph $V_t$:

$$a_{i,j} = s_{i,j}, \quad (9)$$

where $s_{i,j}$ is the cosine similarity between the word embedding vectors of the i-th target node and the j-th source node.
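As a concrete reference for Eqs. (7)-(9), here is a small PyTorch sketch of the two similarity-based transfer matrices and the transfer step itself. It is a hedged illustration: it assumes the source and target node features share the same dimension D (as in our experiments, D = 128), and the function names are ours rather than from the released code.

```python
import torch
import torch.nn.functional as F

def feature_similarity(z_t, z_s):
    """Eq. (8): a_ij = cosine similarity between target node i and source
    node j; z_t: (N_t, D), z_s: (N_s, D), output: (N_t, N_s)."""
    return F.cosine_similarity(z_t.unsqueeze(1), z_s.unsqueeze(0), dim=-1)

def semantic_similarity(emb_t, emb_s):
    """Eq. (9): cosine similarity between the labels' word2vec embeddings."""
    return F.cosine_similarity(emb_t.unsqueeze(1), emb_s.unsqueeze(0), dim=-1)

def inter_graph_transfer(z_t, z_s, a_tr, w_tr):
    """Eq. (7): combine source-graph features into the target graph."""
    return z_t + a_tr @ z_s @ w_tr

# e.g.: z_s, z_t = torch.randn(20, 128), torch.randn(7, 128)
#       a_tr = feature_similarity(z_t, z_s)           # dynamic transfer matrix
#       z_t2 = inter_graph_transfer(z_t, z_s, a_tr, 0.01 * torch.randn(128, 128))
```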
According to the transfer matrix defined in Eq. (7), knowledge of the source graph can be transferred to the target graph by combining the features over the structures of the two graphs. It is worth mentioning that knowledge can be transferred bidirectionally between the source and target graphs, so that no assumption is made about the label granularity across different datasets (i.e., tasks). For example, the source dataset may be either finer or coarser than the other in terms of labels, both of which can be handled flexibly by our framework. In this way, the hierarchical information of different label sets can be associated and propagated via the cooperation of Intra-Graph Reasoning and Inter-Graph Transfer, which enables our model to generate more discriminative features for accurate fine-grained pixel-wise classification.

Universal Human Parsing

As shown in Fig. 3, in addition to improving the performance of one model by utilizing the information transferred from other graphs, our Graphonomy is capable of learning a universal human parsing model by incorporating knowledge from diverse datasets. As different datasets have large label discrepancies, previous works usually adopted fine-tuning techniques on each dataset or performed multi-task learning (e.g., crafting several independent network branches for handling different datasets). In contrast, our Graphonomy can unify label annotations across different datasets via the semantics-aware graph reasoning and transfer, enforcing the mutual benefits of parsing across different datasets. The overall sketch of training image parsing models with our Graphonomy is summarized in Algorithm 1.

Fig. 5: Illustration of applying our Graphonomy to panoptic scene segmentation. Each task (i.e., instance-level thing segmentation or pixel-wise segmentation of background stuff) is treated as one domain, and our framework exploits the semantics-aware dependencies across domains in an explicit way. The graph construction is modified based on the version used for human parsing, while the other components are basically kept. By analogy, our framework can be easily extended to other similar scene understanding problems involving multiple co-related tasks.

Algorithm 1: The sketch of parsing model training with Graphonomy.
Input: feature maps X.
Output: enhanced feature maps X'.
// Build graph
Obtain graph Z by function φ for each domain (e.g., dataset, task);
for i = 1 to T do
  // Intra-Graph Reasoning
  Evolve each graph within the same domain by Eq. (6);
  // Inter-Graph Transfer
  Transfer knowledge between the graphs of different domains by Eq. (7);
end for
// Re-project to feature maps
Obtain the re-projected feature maps X_p and the enhanced feature maps X' = X + X_p.

Panoptic Scene Segmentation

Besides universal human parsing, our Graphonomy can also handle other image understanding problems by simply modifying the graph construction in the Graphonomy pipeline. As discussed above, panoptic scene segmentation is a typical image understanding problem that involves multiple co-related tasks. We can treat each task (i.e., either the instance-aware object (thing) segmentation or the stuff segmentation) as a domain, so that the cross-domain reasoning and transfer framework can easily be adapted to the multi-task panoptic segmentation scenario. By analogy, additional co-related tasks can also be integrated with our framework towards general scene understanding.
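The loop in Algorithm 1 reduces to a few matrix operations per iteration. The following self-contained PyTorch sketch runs one enhancement pass between a 20-label source graph and a 7-label target graph; the identity adjacency matrices, the shared GCN weight and the randomly initialized transfer matrix are placeholders for the priors and learned parameters described above.

```python
import torch

torch.manual_seed(0)
n_s, n_t, d, steps = 20, 7, 128, 3            # e.g., CIHP (20) and PASCAL (7) nodes
z_s, z_t = torch.randn(n_s, d), torch.randn(n_t, d)
a_s, a_t = torch.eye(n_s), torch.eye(n_t)     # stand-in adjacency priors
w_e = 0.01 * torch.randn(d, d)                # shared GCN weight (illustrative)
a_tr = torch.softmax(torch.randn(n_t, n_s), dim=-1)  # stand-in transfer matrix
w_tr = 0.01 * torch.randn(d, d)

for _ in range(steps):                        # Algorithm 1: iterate both modules
    z_s = torch.relu(a_s @ z_s @ w_e)         # Intra-Graph Reasoning (Eq. 6), source
    z_t = torch.relu(a_t @ z_t @ w_e)         # Intra-Graph Reasoning (Eq. 6), target
    z_t = z_t + a_tr @ z_s @ w_tr             # Inter-Graph Transfer (Eq. 7)

print(z_t.shape)  # torch.Size([7, 128]): enhanced target graph representation
```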
In our implementation for handling panoptic segmentation, we adapt the graph construction in the Intra-Graph Reasoning and Inter-Graph Transfer modules. We first modify the graph representation Z, whose original nodes represent semantic labels, to $Z_{ins} \in \mathbb{R}^{N_{ins} \times D}$ over all $N_{ins}$ instances, where each node represents one identified instance and D is the desired feature dimension. The modified graph representation $Z_{ins}$ is determined by

$$Z_{ins} = \phi(X, W, \{r_i\}), \quad (10)$$

where W is the trainable transformation matrix for converting the image features X into the dimension D and $\{r_i\}$ denotes the set of proposals of detected instances. Specifically, the projection function $\phi$ for the foreground instances is implemented by projecting the feature maps of instance i to the graph representation $z_i \in Z$. The features of $z_i$ are extracted by pooling the features of the region $r_i$, which can be formulated as

$$z_i = Pooling(X, r_i) \times W, \quad (11)$$

where Pooling() is the ROI-Pooling operation used to pool the feature maps X over the detected region $r_i$ and W is the learnable weight. After processing by the graph reasoning module, we re-project the feature representations of each vertex to the proposal by concatenating the node features $z_i$ with the features of its corresponding proposal $X_p$. The enhanced proposal features X' can be obtained by

$$X'_{(i,j)} = Concat(X_{(i,j)}, z_i), \quad (12)$$

where Concat(·, ·) is the concatenation operation and $(i, j) \in r_i$ indexes the locations within the proposal $r_i$.

To adaptively represent arbitrary semantic relations and account for the underlying dependencies between nodes, we introduce an attention mechanism to obtain the dynamic adjacency and transfer matrices. Following [40], we calculate the edge connection $a_{ij} \in A$ between two nodes $z_i$, $z_j$ according to

$$a_{ij} = \frac{\exp\left(\delta\left(w^{\top}[z_i \,\|\, z_j]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\delta\left(w^{\top}[z_i \,\|\, z_k]\right)\right)}, \quad (13)$$

where $\|$ is the concatenation operation, $\mathcal{N}_i$ is the neighborhood of node i, w is a learnable weight vector and $\delta$ is the LeakyReLU nonlinear activation function. Obviously, this dynamic determination of edge connections is more general for tackling similar image parsing problems, compared with the handcrafted adjacency and transfer matrices. This implementation also reflects the flexibility of our Graphonomy, in that the graph construction can be derived either from an external hand-crafted prior (e.g., for the human parsing scenario) or from attentive data-driven learning (e.g., for the panoptic segmentation scenario). Compared with the learnable matrix in Section 3.2, Eq. (13) determines the edge weights of two given nodes directly, whereas the learnable matrix computes the edge weights via gradient back-propagation. Moreover, Eq. (13) can handle scenarios in which the total number of graph nodes is not fixed, and it is thus a more general approach.

Fig. 5 illustrates the learning process of our Graphonomy for panoptic scene segmentation. The semantic labels for the task of foreground instance segmentation are associated with the object (thing) identities, while the labels for background stuff segmentation are associated with the semantic taxonomy. Unlike in background stuff segmentation, each foreground thing needs to be assigned an identity to distinguish it from the other things sharing the same category label. The foreground instances are usually localized in compact local regions with very strong surrounding context from their spatial neighbours. Therefore, we propose to build the semantic graph in the domain of instance segmentation based on the detected regions. Specifically, we extract the features of the predicted proposals with ROI pooling [41], [2], and then represent the graph vertices by the region-based feature tensors.
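A minimal PyTorch sketch of the attention-based edge weights in Eq. (13) follows. It assumes a fully connected neighborhood (every node attends to every node), a shared linear projection before the attention vector as in [40], and illustrative tensor names of our own choosing.

```python
import torch
import torch.nn.functional as F

def attention_edges(z, w, att):
    """Eq. (13) sketch: GAT-style dynamic edge weights, following [40].
    z: (N, D) node features; w: (D, Dp) shared projection; att: (2*Dp,)
    attention vector. Every node is assumed to be in every neighborhood."""
    h = z @ w                                            # project node features
    n = h.size(0)
    pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),  # [h_i || h_j] pairs
                       h.unsqueeze(0).expand(n, n, -1)], dim=-1)
    e = F.leaky_relu(pairs @ att)                        # delta = LeakyReLU
    return torch.softmax(e, dim=-1)                      # normalize over j in N_i

# e.g.: a = attention_edges(torch.randn(5, 128), torch.randn(128, 64), torch.randn(128))
```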
In the experiments, the benefit of this graph construction will be demonstrated in comparison with the approach used in human parsing. During the training procedure, Intra-Graph Reasoning and Inter-Graph Transfer execute iteratively, with the computation of the two modules following Eq. (6) and Eq. (7), respectively. An illustrative example of generating the semantics-aware graphs from the input image is shown in Fig. 6. In each domain, the semantic structured graph is constructed to guide the feature learning via Intra-Graph Reasoning. That is, the Intra-Graph Reasoning module enables each foreground instance to assemble contextual information from the other instances, and the semantic relations among background stuff are similarly captured. The dependencies between the identified foreground instances and the scene background are then bidirectionally explored by the Inter-Graph Transfer module. In particular, the connections between the graphs extracted from the different tasks are dynamically determined by the attention mechanism according to Eq. (13), similar to the implementation in the reasoning module.

EXPERIMENTS ON HUMAN PARSING

In this section, we evaluate the effectiveness of our framework on standard human parsing benchmarks.

Implementation Details

We use the basic neural network settings of DeepLab v3+ [3], employing Xception [42] pre-trained on the COCO [43] dataset with output stride = 16. To illustrate the flexibility of our framework, we also adopt PSPNet [44] as the backbone network. Following the original implementation in [44], we pretrain the network on ImageNet and set output stride = 16. Note that the use of PSPNet will be specially indicated in our experiments; otherwise DeepLab v3+ is adopted. The number of nodes in the graph is set according to the number of categories of the dataset, i.e., N = 7 for the PASCAL-Person-Part dataset, N = 18 for the ATR dataset and N = 20 for the CIHP dataset. The feature dimension D of each semantic node is 128. The Intra-Graph Reasoning module has three graph convolution layers with the ReLU activation function. For Inter-Graph Transfer, we use the model pre-trained on the source dataset and randomly initialize the weights of the target graph. Then we perform end-to-end joint training of the whole network on the target dataset.

During training, the 512 × 512 inputs are randomly scaled by factors between 0.5 and 2, cropped and flipped from the source images. Following [3], we employ a "poly" learning rate policy (sketched below). We adopt the SGD optimizer with momentum = 0.9 and a weight decay of 5e-4. We set the initial learning rate to 0.007 for DeepLab v3+ [3]; for PSPNet, we set the initial learning rate to 0.02, following the original implementation [44]. To stabilize the predictions, we perform inference by averaging the results of left-right flipped images and multi-scale inputs with scales from 0.50 to 1.75 in increments of 0.25. Our method is implemented by extending the PyTorch framework. We reproduce DeepLab v3+ [3] and PSPNet [44] following all the settings in their papers. All networks are trained on four TITAN XP GPUs; due to GPU memory limitations, the batch size is set to 12. For each dataset, we train all models with the same settings for 100 epochs to ensure good convergence. To stabilize the inference, the resolution of every input is kept consistent with the original image. Upon acceptance, we plan to release our source code and trained models.
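For reference, the "poly" policy mentioned above can be written as a one-line schedule. This is a sketch: the decay power is not stated in the text, and power = 0.9, the value commonly used with DeepLab-style training, is an assumption here.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' learning-rate policy used in [3]: the rate decays from
    base_lr towards zero as (1 - iter/max_iter) ** power."""
    return base_lr * (1 - cur_iter / max_iter) ** power

# e.g., with the paper's initial rate for DeepLab v3+:
# poly_lr(0.007, 0, 1000)   -> 0.007
# poly_lr(0.007, 500, 1000) -> ~0.00375
```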
Fig. 7: Examples of different levels of human parsing results generated by our universal human parsing model. We can observe that our model is able to generate precise and fine-grained results for different levels of human parsing tasks by distilling a universal semantic graph representation.

Datasets

We evaluate our Graphonomy on four human parsing datasets with different label annotations: the PASCAL-Person-Part dataset [48], the ATR dataset [10], the Crowd Instance-Level Human Parsing (CIHP) dataset [45], and the Multiple Human Parsing (MHP) dataset [7]. The labels of human parts among these datasets are hierarchically correlated, and the label granularity is naturally annotated in a coarse-to-fine manner.

The PASCAL-Person-Part dataset [48] is a set of additional annotations for PASCAL-VOC-2010 [49]. It goes beyond the original PASCAL object detection task by providing pixel-wise labels for six human body parts: head, torso, upper-arms, lower-arms, upper-legs and lower-legs. There are 3,535 annotated images in the dataset, split into a training set of 1,717 images and a test set of 1,818 images.

The ATR dataset [10] aims to predict every pixel with 18 labels: face, sunglasses, hat, scarf, hair, upper-clothes, left-arm, right-arm, belt, pants, left-leg, right-leg, skirt, left-shoe, right-shoe, bag and dress. In total, 17,700 images are included in the dataset, with 16,000 for training, 1,000 for testing and 700 for validation.

The CIHP dataset [45] is a new large-scale benchmark for the human parsing task, including 38,280 images with pixel-wise annotations for 19 semantic part labels. The images are collected from real-world scenarios, containing persons appearing in challenging poses and viewpoints, with heavy occlusions, and in a wide range of resolutions. Following the benchmark, we use 28,280 images for training, 5,000 images for validation and 5,000 images for testing.

Evaluation Metrics

We use evaluation metrics including accuracy, the standard intersection over union (IoU) criterion, and the average F-1 score (a per-class IoU sketch is given below).

Comparison with the state of the art

We report the human parsing results generated by our Graphonomy and other competing approaches on the four datasets in Tables 1, 2, 3 and 6, respectively. In Table 1, "Graphonomy (CIHP)" is the method that transfers the semantic graph constructed on the CIHP dataset to enhance the graph representation on PASCAL-Person-Part. Some previous methods achieve high performance with over 68% mean IoU, thanks to wider or deeper architectures [47], [46] and multi-task learning [45]. In contrast, the superior performance of our framework is mainly attributed to explicitly incorporating human knowledge and label taxonomy jointly with global reasoning on the graph representation. In Table 2, "Graphonomy (PASCAL)" denotes the method that transfers the high-level graph representation of the PASCAL-Person-Part dataset to enrich the semantic information. The competing approaches [19], [20], [21] adopt LSTM-based architectures for enhancing the feature representation learning, and are outperformed by our graph reasoning and transfer method. In Table 3, our Graphonomy (PASCAL) improves the result to 58.58% compared with the multi-task learning method proposed by Gong et al. [45]. We report the results in terms of the standard intersection over union (IoU) on the MHP dataset in Table 6, where our model achieves an improvement of about 1%.
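Below is a minimal NumPy sketch of the per-class IoU criterion used throughout these comparisons, computed from a confusion matrix of flattened integer label maps; the function name and interface are ours, not from the released code.

```python
import numpy as np

def per_class_iou(pred, gt, n_classes):
    """Standard IoU per class from flattened integer label maps:
    IoU_c = TP_c / (TP_c + FP_c + FN_c)."""
    conf = np.bincount(n_classes * gt.ravel() + pred.ravel(),
                       minlength=n_classes ** 2).reshape(n_classes, n_classes)
    tp = np.diag(conf)                                   # true positives per class
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp     # pred + gt - overlap
    return tp / np.maximum(union, 1)

# Mean IoU, e.g. over the 7 PASCAL-Person-Part classes:
# miou = per_class_iou(pred, gt, n_classes=7).mean()
```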
Universal Parsing via Training over Multiple Datasets

Training over multiple datasets. To sufficiently utilize all human parsing resources and unify label annotations from different domains or at various levels of granularity, we train a universal human parsing model to unify all kinds of label annotations from different resources and tackle different levels of human parsing; this is denoted as "Graphonomy (Universal Human Parsing)". We combine all training samples from the three datasets and select images from the same dataset to construct each batch at each step. As reported in Tables 1, 2 and 3, our method achieves superior performance on all the datasets. We also compare our Graphonomy with a multi-task learning method that appends three parallel branches to the backbone, in which each branch predicts the labels of one dataset. Compared with traditional approaches, our Graphonomy is able to generate a universal semantic graph representation by distilling knowledge across different datasets while enforcing the mutual benefits of each task (i.e., the parsing on each individual dataset).

We also present qualitative universal human parsing results in Fig. 7. Our Graphonomy is able to generate precise and fine-grained results for different levels of human parsing tasks, which further verifies the rationale of our Graphonomy: incorporating hierarchical graph transfer learning upon deep convolutional networks can capture the critical information across the datasets to achieve good capability in universal human parsing.

From the results reported in Tables 1, 2 and 3, we can observe that our model trained in the universal way (i.e., Universal Human Parsing) is inferior to the models trained specifically for PASCAL and CIHP, respectively. These results reflect that training a single universal model by combining different levels of semantic labels is more difficult than training with a specific dataset: the latter requires only transferring the model from the source dataset to the target, without handling the discrepancy of label granularity across different datasets. Learning general image features that fit different domains (i.e., datasets) in the backbone network also increases the complexity. Specifically, as shown in the top right image in Fig. 7, the annotated labels vary across the datasets; e.g., the same upper-body region is annotated as torso, upper-arms and lower-arms in the PASCAL dataset, as upper-clothes and face in ATR, and as upper-clothes, coat and torso-skin in CIHP.

Extending model capacity via incremental training. Furthermore, an appealing merit of Graphonomy is that the model capacity can be extended in an incremental manner, i.e., incrementally updating the semantic labels by training on a new dataset. In this experiment, we first train our model using the CIHP dataset and then adapt it to the PASCAL-Person-Part dataset; the quantitative results are reported in Table 7. Specifically, we create a new branch based on the backbone network and the Inter-Graph connections to the other branches, while fixing the previously learned parameters. In this way, the knowledge obtained from the previous training is kept during the incremental training on the new dataset.

Ablation Study

We further discuss and validate the effectiveness of the main components of our Graphonomy on the PASCAL-Person-Part dataset [48].

Intra-Graph Reasoning.
As reported in Table 4, by encoding human body structure information to enhance semantic graph representation and propagation, our Intra-Graph Reasoning achieves a 0.50% improvement over the basic network (#1 vs #3). To validate the significance of the adjacency matrix $A_e$, which is defined according to the connectivity between human body parts and enables semantic message propagation, we compare our method with and without $A_e$ (#2 vs #3). The comparison shows that the human prior knowledge makes a larger contribution than the extra network parameters brought by the graph convolutions.

Inter-Graph Transfer. To utilize the annotated data from other datasets, previous human parsing methods must be pretrained on the other dataset and fine-tuned on the evaluation dataset, as shown by result #4 in Table 4. Our Graphonomy instead provides an Inter-Graph Transfer module for better cross-domain information sharing. We further compare the results of the different graph transfer dependencies introduced in Section 3.2, to find the best transfer matrix for enhancing the graph representations. Interestingly, we observe that transferring according to the handcraft relation (#6) diminishes the performance, while feature similarity (#8) is the most powerful dependency. It is reasonable that the label discrepancy between multiple levels of human parsing tasks cannot be resolved by simply defining the relations manually, and that the hierarchical relationships encoded by feature similarity and semantic similarity are more reliable for information transfer. Moreover, we compare the results of different combinations of the transfer methods, which bring a further slight improvement. In our Graphonomy, we combine feature similarity and semantic similarity for the Inter-Graph Transfer, as adding more combinations does not yield further improvement.

Different amounts of training data. By exploiting the intrinsic relations of semantic labels and incorporating hierarchical graph transfer learning upon the conventional human parsing network, our Graphonomy not only tackles multiple levels of human parsing tasks but also alleviates the need for heavily annotated training data to achieve the desired performance. We conduct extensive experiments on transferring the model pre-trained on the CIHP dataset to the PASCAL-Person-Part dataset. We randomly sample different amounts of annotated data from the training set for training and evaluate the models on the whole test set. As summarized in Table 5, simply fine-tuning the pre-trained model without our proposed Inter-Graph Transfer obtains 70.33% mean IoU with all the training data. In contrast, our complete Graphonomy architecture achieves comparable performance using only 50% of the training data. With 100% of the training data, our approach even outperforms the fine-tuning baseline by 0.81% in average IoU. This superior performance confirms the effectiveness of our Graphonomy, which seamlessly bridges all semantic labels from different datasets and attains the best utilization of data annotations.

Analysis of graph convolution. To understand the effect of using different numbers of graph convolution layers, we conduct experiments with different settings of the graph convolution and report the results in Table 8. From the results, we observe that increasing the number of Intra-Graph layers leads to better performance: using five layers improves the performance by about 0.56% compared with using only one layer, but by only about 0.02% compared with using three layers.
However, increasing the number of layers results in more parameters, more GPU memory, and more time consumption. Thus, we choose three layers as a trade-off between performance and resource cost.

Training with different source datasets. To understand the performance of Graphonomy when transferring from different sources, we conduct a batch of experiments, with the results reported in Table 9. In Table 9a, the results show that the model pretrained on the MHP dataset outperforms the one pretrained on ATR, since the labels in MHP are more fine-grained. In Tables 9a and 9c, we observe that the model pretrained on CIHP is the most superior, benefiting from the larger number of images and the fine-grained labels in that dataset. Pretraining on the ATR dataset performs better in Table 9b for a similar reason.

Fig. 9: Visualized results predicted on (a) the ATR dataset [9] and (b) the MHP dataset [7].

Using different backbone networks. We also conduct experiments to evaluate the performance with different settings of the backbone network. Table 10 shows the comparisons obtained by replacing the original backbone (i.e., DeepLab v3+ [3]) with PSPNet [44] on different datasets. These results indicate that a more powerful feature learning network (e.g., deeper or with a superior structure) brings performance gains.

Comparing with different transfer learning methods. In these experiments, we compare our framework with other existing transfer learning methods and the baseline of feature fine-tuning. We again adopt PSPNet or DeepLab v3+ as the backbone network for comparison. As reported in Table 11, our graph transfer learning achieves the leading performance under the different settings. As discussed in Section 2, most of the existing transfer learning methods mainly focus on the network architecture [36], [35] and the training strategy [37], while our framework additionally considers the transfer of explicit semantics.

Table 11 (excerpt): mean IoU on the PASCAL-Person-Part dataset [48]. The methods are pretrained on the CIHP dataset [45] and then transferred to the PASCAL-Person-Part dataset [48].
[37]: 60.88
Series Res. adapt [35]: 61.39
Parallel Res. adapt [36]: 61.56
Graphonomy (CIHP): 61.89

Qualitative Results

The qualitative results on the PASCAL-Person-Part dataset [48] and the CIHP dataset [45] are visualized in Fig. 8. As can be observed, our approach outputs more semantically meaningful and precise predictions than the other two methods, despite the existence of large appearance and position variations. Taking (b) and (e) for example, when parsing the clothes, the other methods suffer from unusual fashion styles and the big logo on the clothes, which leads to incorrect predictions for some small regions. However, thanks to the effective semantic information propagation by graph reasoning and transfer, our Graphonomy successfully segments out the large clothes regions. Moreover, with the help of the compact high-level graph representation integrated from different sources, our method generates more robust results and gets rid of the disturbance from occlusion and background, as in (c) and (d). Besides, we also present some failure cases in (g) and (h), and find that overlapped parts and very small persons cannot be predicted precisely, which indicates that more knowledge needs to be incorporated into our graph structure to tackle such challenging cases. More result comparisons can be found in the supplementary materials.
EXPERIMENTS ON PANOPTIC SEGMENTATION

In the following, we evaluate the effectiveness of our Graphonomy in handling panoptic segmentation, a more general scene understanding problem. We first introduce the experimental settings on standard benchmarks. Then we compare our method with several baselines and state-of-the-art methods to demonstrate the superiority of Graphonomy.

Experimental Settings

Datasets and Evaluation Metrics. We evaluate the performance of our Graphonomy on two panoptic segmentation datasets, COCO [51] and ADE20K [50]. COCO is one of the most challenging benchmarks, including 115k/5k/20k images for training/validation/test-dev, respectively, with 80 categories of instances and 53 categories of background stuff. ADE20K includes 20k/2k/3k images for training/validation/testing, with 100 instance (thing) and 50 stuff categories. Following [16], we use PQ (panoptic quality) as the evaluation metric; $PQ^{Th}$ and $PQ^{St}$ indicate the panoptic quality on the foreground instance segmentation and the background stuff segmentation, respectively.

Implementation Details. We use the basic neural network structure provided by Panoptic-FPN [11], and also implement a variant version [23] employing deformable convolution [52], which is denoted as "Panoptic-FPN (D)". Following the settings in [11], we employ ResNet50-FPN [53], [54] pre-trained on ImageNet [55] as the backbone and use the same data augmentation. The number of nodes in the graph corresponding to the instance-aware segmentation task is equal to the number of instances (things) in the input image, while the number of nodes in the graph corresponding to the stuff segmentation task is set to the number of semantic categories. The feature dimension D of each node is set to 128. During training, we resize the inputs to a shorter side of 800 following [11], and adopt the SGD optimizer with an initial learning rate of 0.02, momentum of 0.9 and weight decay of 5e-4. On the COCO [51] and ADE20K [50] datasets, we train all models with the same settings for 12 and 24 epochs, respectively. All networks are trained on 8 TITAN XP GPUs with a batch size of 16.

Comparison with the state of the art

We compare our proposed Graphonomy with other state-of-the-art methods on the COCO val dataset, with the quantitative results reported in Table 12. "Graphonomy (Panoptic)" represents the model trained by our framework. The competing methods are mainly developed on powerful multi-task network architectures [11], [15], [23], [56], [17] and other advanced techniques such as the panoptic head [15], [23] and a spatially attentive mechanism [17]. From the results on COCO, we observe that our Graphonomy outperforms the other competing approaches on all three metrics, even though the baseline network we adopt is not the most powerful one. The significance of exploiting semantic relations via graph reasoning and transfer is clearly demonstrated. On ADE20K we compare our method only with Panoptic-FPN, since no results are reported by the other competing methods; Table 13 shows the quantitative improvement achieved by our Graphonomy. A number of visualized results are shown in Fig. 10.

Ablation Study

We further conduct ablation experiments on the ADE20K dataset to validate the effectiveness of the main components of our Graphonomy.

Graph construction in instance segmentation.
We build the semantic graph in the domain of instance segmentation using the region detector and ROI pooling, in order to better distinguish each foreground instance from its surrounding neighbors. To illustrate the difference between ways of constructing the graph, we also implement another version that constructs the graph using the global image features, just like the approach we used in the domain of background stuff segmentation (and in human parsing). The results can be found in Table 13, denoted by "Ins Graph Construction"; the results generated by the original implementation and the alternative one are denoted by "Instances" and "Semantics", respectively. According to the experiments, we find that using region-based features for representing instances is beneficial, especially in terms of $PQ^{St}$.

Intra-Graph Reasoning and Inter-Graph Transfer. Experiments validating the effectiveness of Intra-Graph Reasoning and Inter-Graph Transfer are further conducted for a better understanding of how Graphonomy boosts model capacity. Table 13 reports the experimental results, in which "w/o Inter-Graph Transfer" represents the results without activating the Inter-Graph Transfer while the Intra-Graph Reasoning is working for both domains, and "Graphonomy (Panoptic)" represents the results generated by the complete framework. These results clearly demonstrate how the two modules contribute progressively to the performance.

CONCLUSION

In this work, we have proposed a graph reasoning and transfer framework, namely Graphonomy, targeting two crucial tasks in image semantic understanding: human parsing and panoptic scene segmentation. Our framework, in particular, is capable of resolving all levels of human parsing tasks using a universal model, alleviating the label discrepancy and utilizing the data annotations from different datasets. Graphonomy can also effectively solve panoptic scene segmentation with the same pipeline as human parsing by jointly optimizing two co-related tasks (i.e., instance-level segmentation and background stuff segmentation). The advantage of the proposed framework is extensively demonstrated by the experimental analysis and by achieving new state-of-the-art results against existing methods on a number of large-scale standard benchmarks (e.g., ATR, CIHP and MHP for human parsing, and MS-COCO and ADE20K for panoptic scene segmentation). The flexibility of Graphonomy is also reflected in the diverse ways of implementing or embedding external prior knowledge for tackling other similar tasks without piling up the complexity.

There are several directions in which this work can be extended. The first is to explore more valid contextual relations (e.g., linguistics-aware correlations, high-order spatial relations, or object dependency in 3D coordinates) in the graph representation for further improving the performance. The second is to investigate how to extend our framework to handle more challenging high-level applications beyond pixel-wise category or identity recognition. For example, understanding scenes from a cognitive, human-like perspective is a new trend in computer vision and general AI research, e.g., object function understanding and human-object interaction with intention analysis; exploring causality-aware dependency, commonsense patterns and individual value models could be very promising based on our Graphonomy. The third is to develop more powerful reasoning and transfer learning algorithms within the model training process.
Liang Lin is a Full Professor at Sun Yat-sen University and the CEO of DMAI. He served as the Executive R&D Director and Distinguished Scientist of SenseTime Group from 2016 to 2018, taking charge of transferring cutting-edge technology into products. He has authored or co-authored more than 200 papers in leading academic journals and conferences (e.g., TPAMI/IJCV, CVPR/ICCV/NIPS/ICML/AAAI). He is an associate editor of IEEE Transactions on Human-Machine Systems and IET Computer Vision. He has served as Area Chair for numerous conferences such as CVPR and ICCV. He is the recipient of numerous awards and honors.

Xiaodan Liang is currently an Associate Professor at Sun Yat-sen University and also the research head of DarkMatter AI China. She was a postdoctoral researcher in the machine learning department at Carnegie Mellon University, working with Prof. Eric Xing, from 2016 to 2018. She received her PhD degree from Sun Yat-sen University in 2016, advised by Prof. Liang Lin. She has published several cutting-edge projects on human-related analysis, including human parsing, pedestrian detection and instance segmentation, 2D/3D human pose estimation and activity recognition. She has served as Area Chair for numerous conferences such as CVPR and ICCV.
Osmotic regulation of UT-B urea transporters in the RT4 human urothelial cell line

Abstract

Facilitative UT-B urea transporters play important physiological roles in numerous tissues, including the urino-genital tract. Previous studies have shown that urothelial UT-B transporters are crucial to bladder function in a variety of mammalian species. Using the RT4 bladder urothelial cell line, this study investigated the potential osmotic regulation of human UT-B transporters. Initial end-point PCR experiments confirmed expression of both UT-B1 and UT-B2 transcripts in RT4 cells. Western blotting analysis revealed glycosylated UT-B protein to be highly abundant, and immunolocalization experiments showed it was predominantly located on the plasma membrane. Further PCR experiments suggested that a 48 hr, NaCl-induced rise in external osmolality increased expression of UT-B transcripts. Importantly, these NaCl-induced changes also significantly increased UT-B protein abundance (p < .01, n = 7, ANOVA), whereas mannitol-induced changes in external osmolality had no effect (NS, n = 4, ANOVA). Finally, similar increases in both UT-B RNA expression and protein abundance were observed with urea-induced changes to external osmolality (p < .05, n = 4, ANOVA). In conclusion, these findings strongly suggest that increases in external osmolality, via either NaCl or urea, can regulate human urothelial UT-B transporters.

Urothelial UT-B transporters have been reported in numerous mammalian species, including rats (Spector, Yang, Liu, & Wade, 2004), mice (Lucien et al., 2005), dogs (Spector, Yang, & Wade, 2007) and humans (Walpole, Farrell, McGrane, & Stewart, 2014). Crucially, it has been shown that UT-B knockout mice suffer DNA damage and apoptosis in the bladder, illustrating that UT-B plays a critical physiological role in protecting the bladder urothelium (Dong et al., 2013). In addition, it was also shown that urea levels were significantly higher in UT-B null urothelium compared to wild type, indicating that UT-B helps remove toxic intracellular urea from bladder urothelial cells (Dong et al., 2013). Importantly, although no link has been reported between UT-B and human kidney disease (Capriolli, Visentainer, & Sell, 2017), UT-B allelic variation has been shown to affect bladder cancer risk levels (Garcia-Closas et al., 2011; Rafnar et al., 2011). From the physiological aspect, it has also been reported that there is a direct association between UT-B and final voided urine concentration, as measured by urinary specific gravity (Koutros, Baris, Fischer, Tang, & Garcia-Closas, 2013). More recently, UT-B expression has been shown to be downregulated in bladder urothelial cancer (Hou, Alemozaffar, et al., 2017a; Li et al., 2014), and a mutated UT-B transporter has been reported in this disease (Hou, Alemozaffar, et al., 2017a). Understanding the physiological regulation of human urothelial UT-B transporters is therefore of potential clinical significance.

Relatively little is understood about how UT-B urea transporters are regulated, certainly in comparison to renal UT-A transporters (Stewart, 2011). However, investigations of urino-genital tract UT-B proteins have previously shown that hydration status can affect levels of transporter abundance. For example, changes in UT-B1 have been reported in mouse urino-genital tissues, where two days of dehydration significantly reduced UT-B1 abundance in both bladder and ureter (Lucien et al., 2005). In direct contrast, water restriction in rats actually increased UT-B1 abundance in rat ureter (Spector et al., 2004).
Importantly, further functional studies in rats confirmed (i) that urea was reabsorbed from the urine while it was stored in the rat bladder, and (ii) that hydration status altered these urea reabsorption rates (i.e., physiological regulation occurs) (Spector, Deng, & Stewart, 2011). However, there have been no detailed studies investigating the effect of hydration status on the abundance of UT-B protein in human urothelial cells. Our previous studies reported that glycosylated UT-B transporter protein is located in the umbrella cells of the human bladder urothelial layer (Walpole et al., 2014). The aim of this current study was therefore to use the human RT4 bladder urothelial cell line to investigate the effect of increases in external osmolality upon the expression and abundance of human UT-B urea transporters.

Tissue culture

Human epithelial bladder transitional papilloma RT4 cells (ATCC, Manassas, VA) were cultured in McCoy's 5A modified medium (306 ± 12 mOsm, N = 3), supplemented with 10% heat-inactivated fetal bovine serum. Once cells reached 90% confluency, they were sub-cultured at a 1:10 dilution. Media were removed and replaced with fresh media every 2 days. Cells were incubated at 37°C with 5% CO2. For regulatory experiments, cells were exposed to media containing additional NaCl, mannitol or urea for 48 hr prior to RNA extraction or protein isolation. Osmolalities of experimental media were confirmed using an Osmomat 030 osmometer (Gonotec, Germany).

End-point PCR

cDNA was prepared using a SensiFast cDNA kit (Bioline, London, UK), employing either human bladder total RNA (AMS Biotechnology, UK) or prepared RT4 total RNA. PCR amplification was carried out using these cDNA samples together with Go-Taq polymerase enzyme (Promega, Kilkenny, Ireland), using UT-B, AQP3, AQP7, AQP9, NaKATP or Actin primers (see Table 1). Initial denaturation at 94°C for 2 min was followed by 30 or 35 cycles of 94°C for 30 s, 55°C or 60°C for 30 s, and 72°C for 30 s. Final extension was at 72°C for 5 min.

Protein preparation

RT4 cells were grown to 85% confluency in 75 cm2 polycarbonate flasks and washed twice in homogenization buffer (300 mM mannitol, 12 mM HEPES, pH 7.6), before physical detachment using a disposable cell scraper. The RT4 cell suspension was homogenized with a Polytron PT1200 homogenizer (Kinematica, Switzerland). The lysate was centrifuged at 1,000 g for 5 min at 4°C. The supernatant was further centrifuged at 17,000 g for 25 min at 4°C. The resulting pellets were re-suspended in homogenization buffer for use as a membrane-enriched protein fraction. An aliquot of the supernatant was retained to represent a cytosol-enriched protein fraction.

Immunoblot analysis

All protein samples were mixed at a 1:1 ratio with 2X Laemmli buffer and heated at 70°C for 10 min before being loaded (~5-10 μg per lane) on an 8 to 16% TGX gel for SDS-PAGE. The separated proteins were transferred to a nitrocellulose membrane and incubated with primary antibody for 16 hr at room temperature, in either 1:5,000 hUT-Bc19, 1:1,000 GAPDH, 1:1,000 AQP3 or 1:500 NaKATP. Blots were washed and then incubated for 1 hr with 1:5,000 anti-rabbit or anti-mouse antibody conjugated to horseradish peroxidase. After further washing, the proteins were imaged using Western Lightning Plus ECL reagents (Perkin Elmer, USA) and a LAS4000 Imager (Fujifilm, Japan). For deglycosylation experiments, protein samples were incubated with and without peptide-N-glycosidase F enzyme for 2 hr at 37°C.
For peptide incubation experiments, hUT-Bc19 was pre-incubated with specific or non-specific peptide for 24 hr using a rotating mixer.

Immunofluorescent localization

RT4 cells, grown on glass cover slips for 72 hr, were fixed with 4.0% paraformaldehyde for 30 min and then permeabilized with Triton X-100 for 20 min. Cells were quenched with 30 mM glycine for 30 min before incubation for 2 hr in 1:400 hUT-Bc19 antibody. Following incubation for 1 hr in goat anti-rabbit antibody conjugated to Alexafluor 488 (1:500), the cells were incubated in Hoechst 33342 nucleic acid stain (1:2,000) for 10 min. Cover slips were mounted on glass slides using Dako mounting medium.

RESULTS

Initial end-point PCR experiments revealed that RT4 urothelial cells strongly expressed both UT-B and AQP3, but not AQP7 or AQP9 (Figure 1a). Using F1/R5 UT-B primers, it was revealed that UT-B1 was the predominant transcript, with UT-B2 expressed at a lower level (Figure 1b). Expression of UT-B2 was confirmed using the UT-B2-specific F3/R5 primer set (Figure 1b). Sequencing of PCR products (data not shown) revealed RT4 cells to possess the Jk(A) allelic variation of UT-B.

Using the previously characterized hUTBc19 antibody, pre-incubated in a non-specific peptide, western blotting analysis showed strong signals for UT-B protein at 28 and 35-70 kDa in RT4 membrane-enriched protein samples, but not in cytoplasmic-enriched samples (Figure 2a).

Table 1 (excerpt): primer sequences used in the PCR experiments.
NaKATP: forward 5'-CAGCAGAAGCTCATCATTGTGGA-3', reverse 5'-GTTCTTCATGCCCTGCTGGAAGA-3', product 758 bp.
Actin: forward 5'-GTGCTGTCTGGCGGCACCACCAT-3', reverse 5'-CCTGTAACAACGCATCTCATAT-3', product 514 bp.
Note: All sequences of both forward and reverse primers used within the PCR experiments, including precise target and exact size (in bp) of the expected product.

In contrast, pre-incubation with the same amount of the specific, immunizing peptide completely ablated all signals (Figure 2a). [NOTE: Although these findings did not conclusively prove that all signals were UT-B, they did confirm that all signals were due to the hUTBc19 antibodies and not due to any contaminant]. Deglycosylation treatment with PNGaseF enzyme shifted the 35-70 kDa smeared signal in membrane-enriched protein to a strong distinct band at 28 kDa, plus a weaker signal at 40 kDa (Figure 2b). Immunolocalization experiments then revealed that a strong UT-B signal in RT4 cells was indeed located in the plasma membrane, with only weak intracellular staining observed (Figure 2c). Interestingly, there was little UT-B detected in the outermost plasma membrane surfaces of RT4 cell clusters (Figure 2c).

To investigate the effects of changes in external osmolality on UT-B transporters, cells were incubated for 48 hr in media containing various levels of mannitol or NaCl, to raise external osmolality by 100 or 200 mOsm. Initial end-point PCR experiments suggested that mannitol-induced changes in osmolality had little or variable effect on UT-B expression (Figure 3). In contrast, more consistent increases in the expression of UT-B1 and UT-B2 were observed with NaCl-induced changes in osmolality (Figure 3). Next, western analysis was performed on mannitol-treated cells and suggested that no change in UT-B protein abundance occurred within RT4 plasma membranes (Figure 4a), but that there was an increase in AQP3 transporters (Figure 4b). This increase in AQP3 was mainly seen in the ~45 kDa (dimer) and ~100 kDa (tetramer) signals, rather than the ~25 kDa (monomer) signal.
These patterns of response were generally repeated in further experiments (Figure 4c). In contrast, 48 hr NaCl treatment increased both the 28 and 35-70 kDa UT-B signals (Figure 5a) and the 45 and 100 kDa AQP3 signals (Figure 5b); although the size of these responses varied from experiment to experiment, there were very consistent, similar-sized increases for UT-B and AQP3 (Figure 5c). Densitometry analysis confirmed significant changes for both UT-B (p < .01, N = 7, ANOVA) and AQP3 (p < .01, N = 7, ANOVA) (Figure 5d). For example, the mean (± S.D.) densitometry values for UT-B went from 58 ± 20 in control media (~300 mOsm) to 75 ± 21 in +200 mOsm NaCl-treated cells (~500 mOsm), an increase of ~30%. It should also be noted that while 48 hr exposure to NaCl increased UT-B, exposures of 6 hr or 24 hr had less effect (data not shown).

[Figure 5: Western blotting experiments showing effects of NaCl-induced increases in external osmolality on protein abundance. (a) Increases in both 28 and 35-70 kDa UT-B signals were observed with NaCl treatment, whereas there was no change in NaKATP signals. (b) Significant increases were also observed in 45 and 100 kDa AQP3 protein, with no effect observed for 37 kDa GAPDH. (c) Summary graph comparing the densitometry values in control (~300 mOsm) to +200 mOsm NaCl (~500 mOsm) treatments in all experiments for UT-B (n = 7) and AQP3 (n = 7). (d) Bar graphs illustrating mean densitometry values for GAPDH, NaKATP, UT-B and AQP3 after +100 and +200 mOsm NaCl-induced treatments. Key: * = p < .05, ANOVA; ** = p < .01, ANOVA.]

Finally, the effects of 48 hr exposure to various levels of external urea on RT4 cells were investigated. Using a range of 0 to 200 mM urea, end-point PCR experiments suggested that urea stimulated expression of UT-B, and particularly the UT-B1 transcript, but had little effect on AQP3, NaKATP, or actin (Figure 6). Importantly, further investigation confirmed that external urea exposure increased UT-B protein abundance (Figure 7a), while it actually decreased the levels of AQP3 transporters (Figure 7b). Additional experiments generally showed the same patterns (Figure 7c), and densitometry analysis confirmed that 100 mM urea treatment increased UT-B protein abundance (p < .05, N = 4, ANOVA), decreased AQP3 (p < .05, N = 4, ANOVA) and had no effect on either NaKATP or GAPDH (NS, N = 4, ANOVA) (Figure 7d). For example, the mean (± S.D.) densitometry values for UT-B went from 50 ± 14 in control media (~300 mOsm) to 66 ± 2 in +100 mM urea-treated cells (~400 mOsm), an increase of ~30%. Lastly, the effects on protein abundance of treatments with mannitol, NaCl, and urea were summarized (Table 2).

| DISCUSSION

Deglycosylation of the RT4 membrane-enriched samples revealed a predominant unglycosylated UT-B band at 28 kDa (Figure 2b). Immunolocalization studies then confirmed UT-B protein to be located on the plasma membrane (Figure 2c). These general findings closely agree with our previous report on human bladder UT-B transporters (Walpole et al., 2014). In further agreement with the literature (Rubenwolf et al., 2014), these RT4 cells also abundantly expressed AQP3 transporters and appeared to be an appropriate model for our investigations. Interestingly, we found that RT4 cells, derived from a urinary bladder transitional cell papilloma, possess the Jk(A) allelic variation of UT-B, reported to confer an increased risk for bladder cancer (Garcia-Closas et al., 2011; Rafnar et al., 2011). However, the RT4 cells did not contain the 24 nucleotide in-frame, exon 4 deletion mutant recently reported in bladder cancer cells (Hou, Alemozaffar, et al., 2017a).

The precise nature of the UT-B transporters in the mammalian bladder remains unknown. Previous studies have reported similar-sized, unglycosylated UT-B bladder proteins at 29 kDa in mice (Lucien et al., 2005), 32 kDa in rats (Spector et al., 2004) and 30 kDa in humans (Walpole et al., 2014). Similarly, in this current study, we were unable to clearly determine the exact UT-B proteins present.
As UT-B1 was the main transcript, it would be predicted that the resulting unglycosylated UT-B1 protein would be ~40 kDa in size. Although deglycosylation experiments did reveal a weak 40 kDa signal, there was a much stronger unglycosylated protein revealed at ~28 kDa (Figure 2b). We have previously suggested, because our UT-B antibodies are targeted to the C-terminal, that the 30 kDa UT-B protein found in human bladder is most likely the result of an N-terminal truncation event (Walpole et al., 2014). Alternatively, the recent study by Hou et al. overexpressed both UT-B isoforms in HEK293 cells and showed unglycosylated proteins at ~38 kDa for UT-B2 and ~30 kDa for UT-B1 (Hou, Alemozaffar, et al., 2017a). It is therefore also feasible that what we have observed in RT4 cells is UT-B2 protein at a low abundance (~40 kDa) and UT-B1 protein at high abundance (~28 kDa), which would match the relative RNA expression we detected (Figure 1b). Unfortunately, we currently do not possess reliable UT-B N-terminal antibodies to confirm which, if either, of these suggestions is correct.

Studies investigating mammalian urothelial UT-B transporter expression and abundance have generally suggested that dehydration can have significant effects. For example, dehydration increased UT-B protein abundance in both rat kidney (Lim et al., 2006) and rat ureter (Spector et al., 2004). Interestingly, while Spector et al. reported a 49% increase in ureter, there was only a marginal 14% increase in bladder, where UT-B was already highly abundant in control conditions (Spector et al., 2004). Within our current study, increasing external media osmolality with mannitol had no significant effect on UT-B abundance, but did significantly increase AQP3 transporters (Figure 4). In direct contrast, NaCl-induced changes in external media did stimulate increased UT-B protein abundance (Figure 5). As expected, NaCl exposure also increased AQP3 transporters, as has been previously reported, for example in the NHU cell line (Rubenwolf, Georgopoulos, Kirkwood, Baker, & Southgate, 2012). Lastly, treatment with 100 mM urea also significantly increased UT-B protein abundance (Figure 7). Unlike with previous treatments, urea exposure actually significantly decreased AQP3 transporter protein (Figure 7), with the lack of a urea-induced increase in this urothelial aquaglyceroporin again agreeing with a previous study (Rubenwolf et al., 2012). Overall, whilst NaCl and urea upregulated UT-B transporters, only NaCl (or mannitol) upregulated AQP3 (Table 2).

So, what is the physiological relevance of urea transporter regulation in bladder urothelial cells? The classical view that the mammalian bladder is a simple storage vessel, which along with the ureter has no transport capabilities, is no longer consistent with the research literature. For example, it is now known that the composition of urine actually changes as it passes along the human urinary tract (Cahill, Fry, & Foxall, 2003). More specifically, it has long been known that urea can pass across the human bladder urothelial layers (Lilly & Parsons, 1990).
Importantly, other studies with rat bladder have already shown that hydration status directly alters urea transport rates (Spector et al., 2011), presumably at least partly through changes in UT-B transporters. Our results suggest that the levels of human urothelial UT-B are also regulated by changes in urine osmolality. Hence, concentrated urine containing higher levels of urea would stimulate an increase in UT-B transporters in bladder umbrella cells, therefore facilitating the rapid removal of the toxic urea that would be passing into the urothelial lining in greater amounts. In this current study, the significant 30% increase in UT-B protein abundance obtained is about twice the size of the effect previously observed in dehydrated rat bladder (Spector et al., 2004). However, the high levels of UT-B in untreated RT4 bladder cells, while being similar to those reported for human bladder, may hinder further investigations into the exact regulatory mechanisms involved. Instead, we suggest that human ureter-derived cell lines may be a better model, since they are likely to show greater osmolality-induced increases in UT-B abundance due to low control levels of the transporter, perhaps similar to or greater than the ~50% increase in ureter UT-B seen in dehydrated rats (Spector et al., 2004). The initial step in these future studies would be to first confirm low expression and abundance of UT-B in human ureter tissue. Further studies, using a suitable ureter cell line, could then investigate the role of various cellular mechanisms previously shown to be involved in the osmotic regulation of UT-A urea transporters. For example, elements such as protein kinase C and calcium have been shown to be involved in the osmotic regulation of both renal UT-A1 (Klein, Martin, Kent, & Sands, 2012) and intestinal UT-A6 (McGrane & Stewart, 2016).

What is the clinical relevance of understanding the regulation of human UT-B transporters? In the last decade, studies of various cancers have reported a substantial decrease in UT-B expression in cancerous tissues, including bladder (Hou, Alemozaffar, et al., 2017a; Li et al., 2014), prostate (Vaarala, Hirvikoski, Kauppila, & Paavonen, 2012) and lung (Frullanti et al., 2012). Indeed, in their recent review Hou et al. suggested that UT-B transporters could therefore be a novel target for cancer research (Hou, Kong, Yang, Xie, & Chen, 2017b). More recently, following initial reports of UT-B knockout mice suffering depression-like behavior (Li et al., 2012), UT-B transporters have also been implicated in various human brain diseases. For example, recent reports have linked UT-B to Huntington's (Handley et al., 2017), Alzheimer's (Recabarren & Alarcon, 2017) and Parkinson's disease (Santiago, Bottero, & Potashkin, 2018). Future findings on UT-B regulation, and particularly the resulting functional activities, could therefore have potential benefits for a wide range of human diseases.

In conclusion, our data have shown that UT-B urea transporters are highly abundant in the RT4 human urothelial cell line. Chronic increases in external osmolality, through either NaCl or urea, lead to a significant increase in glycosylated UT-B protein abundance. These findings have important implications for our understanding of the physiological regulation of human bladder urothelial transport, and further work is required to detail the cellular mechanisms controlling these processes.
Approximation method of the speed-torque mechanical characteristics using the spline interpolation of various electrical motors

The design of the electric drives of on-board installations requires motors of adequate power. The dimensioning of the electric drive takes into account the mechanical loads and provides the value of the torque which loads the electrical motor shaft. The selection of the appropriate electrical motors is based on the speed-torque mechanical characteristics, n = u(M), where n is the angular speed in rpm and M is the load torque in N·m. Drawing the n = u(M) characteristic, one can notice that the function is not injective; therefore we employ its 'rotated' version, i.e. the M = f(n) function. In this way we are able to mathematically define the f function, and further on to deduce its approximation using spline functions. As a case study we use an asynchronous motor for which the calculus is based on the Freidzon method, which uses relative values. The spline approximation uses an original data processor which outputs various types of information in a wide range of text, image and computer code formats. Using the experimentally acquired data as input information in the spline-based processor, we obtain a more accurate approximation of the speed-torque mechanical characteristics. The previously mentioned methods are useful for the rapid evaluation of the mechanical characteristics of the motors, which is an important criterion in the marine engineering decision-making process.

Introduction

The importance of the automatic use of the on-board installations and mechanisms has strongly increased due to the new concept regarding autonomous ships. Accordingly, the importance of automatic electric drives has also increased. The electric drive represents the operation through which commands are given depending on the machines' work regime (mechanisms, mechanical devices, pneumatics, hydraulics) by the use of electric energy [1]. In comparison with pneumatic, hydraulic and other types of actuation, the electric drive has several strengths, such as:
- easy supply with electrical energy;
- large range of speeds without special reductors;
- fine adjustment of the speed, in a wide range of values, promptly achieved;
- starting, stopping and the reversal of a turn may be done in a simple, fast and easy way;
- relatively high efficiency;
- possibility to include them in automatic systems.
All these features make the electric drive preferred in most industrial processes, being adapted to the wide range of conditions demanded by the various technological processes. An electric drive is manufactured using systems of electric motors, made of an assembly of devices which transforms the electric energy. The principal components of an electric drive system are presented in figure 1 [2].
Because the mechanical characteristics can be the same for work mechanisms belonging to various industrial processes, the classification of work mechanisms is done independently of their destination and is rather done according to the dependence of the s M torque to the aforementioned parameters. Every electrical machine has an infinity of mechanical static characteristics from which only one is a natural mechanical static characteristic. It is defined as the locus of the steady functioning points at various loads and angular speeds, at the nominal voltage and nominal frequency without other electrical and electronic components, such as rheostats, coils, condensers, in the electrical supply circuitry. The asynchronous motor with short circuited rotor, of normal construction, has a low starting torque, being able to start only at low-value loads. Most of the electrical drives require high starting loads, therefore it is necessary to develop a motor with a squirrel cage, high electrical resistance rotor (tall bars or double squirrel cage motor solutions). These motors have parameters, including the output drive torque, which are depending on the slip. This is why there cannot be expressed the natural mechanical characteristics in an analytical form, similar to the ones of the coiled rotor motors. The squirrel cage motor natural characteristic is unique for each unit. Normally it is provided by the manufacturer or it can be experimentally deduced. If we don't have these opportunities to assess the natural characteristic, various scientists conceived methods to approximate the variation of the natural characteristic, being conceived mathematical laws of variation which accurately approximate the real characteristic. The natural mechanical characteristics of an asynchronous motor with short circuited rotor, with amendable start, used in naval electrical drives can be calculated within a reasonable margin with the so called "general equation" of electrical naval engines mechanical characteristics, [3]: x is an exponent which depends on the motor type; for an asynchronous motor with large slip The approximation of the natural mechanical characteristic, depending on the voltage and frequency variation may be computed using the following expression which uses the relative sizes: where: Original computer based method The mechanical characteristics may be regarded as a general type diagram whose variation is given by a series of points. The coordinates of the points may be either computed, or acquired using experimental studies. A general method used to approximate a diagram and to express it in an analytical form is presented in [5,6]. The method is based on spline functions, expressed along the i , 1 i interval as a third order polynomial, i.e. ( In this case the general function may be Let us consider an asynchronous motor with star connection, whose mechanical characteristic is presented as a set of points experimentally acquired, the results of the interpolation process being presented in figure 4. Discussion The calculus relations presented in section 2 were used to generate a series of theoretical mechanical characteristics, figure 5. Freidzon's calculus relations allow general approaches with a certain fair accuracy, relations which may be used for a wide range of types of motors and a wide range of motors' powers. Conclusions An important issue in marine electrical engineering is the accurate selection of the motor drives to be used for most of the naval installations. 
Conclusions

An important issue in marine electrical engineering is the accurate selection of the motor drives to be used for most of the naval installations. One of the major criteria is based on the mechanical characteristic of the electrical motors. This is why the concepts employed to rapidly assess the mechanical characteristics must use the strengths of the previously presented methods. Both methods have strengths and weak points. The spline approximation method is accurate, its precision depending on the accuracy of the experimentally acquired data. However, it offers a particular characteristic specific to the motor under testing. The main strength of the calculus based on the Freidzon method resides in its generality, being applicable to AC and DC motors of various ranges of powers. However, its accuracy is below the precision of a dedicated solution based on spline approximation. The spline approximation method, as well as the Freidzon method [3], are useful for the rapid evaluation of the speed-torque mechanical characteristics of the motors, which is an important criterion in the marine engineering decision-making process [7].
Unraveling the Dynamics of SARS-CoV-2 Mutations: Insights from Surface Plasmon Resonance Biosensor Kinetics

Surface Plasmon Resonance (SPR) technology is known to be a powerful tool for studying biomolecular interactions because it offers real-time and label-free multiparameter analysis with high sensitivity. This article summarizes the results that have been obtained from the use of SPR technology in studying the dynamics of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutations. This paper will begin by introducing the working principle of SPR and the kinetic parameters of the sensorgram, which include the association rate constant (ka), dissociation rate constant (kd), equilibrium association constant (KA), and equilibrium dissociation constant (KD). At the end of the paper, we will summarize the kinetic data on the interaction between angiotensin-converting enzyme 2 (ACE2) and SARS-CoV-2 obtained from the results of SPR signal analysis. ACE2 is a material that mediates virus entry. Therefore, understanding the kinetic changes between ACE2 and SARS-CoV-2 caused by mutation will provide beneficial information for drug discovery, vaccine development, and other therapeutic purposes.

Introduction

The global coronavirus disease 2019 (COVID-19) pandemic, caused by the SARS-CoV-2 virus, has resulted in many deaths and severe economic problems in all countries around the world [1][2][3]. To date, SARS-CoV-2 has mutated thousands of times, and these mutations occur spontaneously during replication [4]. These mutations have direct implications for the increasing number of new variants of the virus. The bad news is that new variants of SARS-CoV-2 can affect the transmission rate of the virus, the severity of the disease, and the efficacy of the vaccine [5]. The development of sensitive and accurate analytical technology is significant for the purpose of screening and handling viruses that have a high level of spread and transmission that is difficult to avoid. Standard methods used to detect COVID-19 include RNA detection by reverse transcription polymerase chain reaction (RT-PCR) from respiratory samples [6], antibody detection by the enzyme-linked immunosorbent assay (ELISA) [7], chest X-rays [8], and computed tomography [9].

Of the several methods mentioned, RT-PCR is still the gold-standard clinical diagnostic method for detecting COVID-19 [10]. Unfortunately, this method requires special equipment with trained personnel. The RT-PCR procedure also takes a long time and has complex steps [11]. Therefore, testing samples in mass quantities is difficult to realize. In this case, methods that are simple, fast, accurate, and sensitive are really needed to deal with the spread of the virus in the future. SPR biosensors have been used for more than 30 years to rapidly and accurately measure several biological and chemical species with very low detection limits, at the atto- or femtomolar order [12]. By utilizing surface plasmon waves (SPWs), the interaction between the receptor molecule and the detected analyte can be monitored in real-time by observing the sensorgram signal. This device also offers parallel analysis through the development of surface plasmon resonance imaging (SPRi) to offer more complete information [13].
Several research groups have reviewed topics related to the use of SPR technology for dealing with COVID-19. For detection, diagnosis, and early screening of SARS-CoV-2, this topic has been reviewed by Pandey's group [14] and Nor's group [15]. More specific topics, such as using SPR technology to find materials that inhibit the entry of SARS-CoV-2, have been reviewed by Mauriz's group [16]. This paper tries to review a different perspective on the use of SPR in dealing with the COVID-19 pandemic. The main points that will be presented in this paper are illustrated in Figure 1. This paper begins with an introduction to the principles of SPR technology and continues with an introduction to the kinetic parameters of SPR signals. Studies show that the SARS-CoV-2 spike protein will bind to ACE2 to mediate the entry of the virus into cells [17]. Therefore, in the last part of this paper, we will only focus on reviewing kinetic parameters in several cases of SARS-CoV-2 mutation with ACE2. ACE2 is the main natural receptor that mediates viral entry, and therefore, no modification of ACE2 is required in studying the kinetic parameters of different mutations and variants of SARS-CoV-2 [18]. Furthermore, information related to kinetic parameters will be useful in monitoring epidemiological developments, vaccine development, drug discovery, and so on [16,19].
Working Principle and Development of a Prism-Based SPR Biosensor

The SPR biosensor is one type of biosensor that utilizes surface plasmon waves in its work. Surface plasmon waves are generated when light interacts with free electrons at the interface between a metal (or other conducting material) and a dielectric (usually glass, air, or liquid) [20]. When monochromatic light with p-polarization strikes a metal surface, the light will be absorbed by electrons and cause collective oscillations of electrons called plasmons [21]. Surface plasmon waves propagate on metal and dielectric surfaces with a wave number symbolized by $k_{sp}$, and its magnitude can be determined mathematically using the following equation [22]:

$k_{sp} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}}$ (1)

where $\omega$ is the frequency of incident light, c is the speed of light, and $\varepsilon_m$ and $\varepsilon_d$ are the dielectric constants of the metal and the dielectric, respectively. Because the dielectric constant is related to the refractive index (n) through $n = \sqrt{\varepsilon_{real}}$ [23], $k_{sp}$ in Equation (1) can be modified to [24]:

$k_{sp} = \frac{\omega}{c}\sqrt{\frac{n_m^2 n_d^2}{n_m^2 + n_d^2}}$ (2)

where $n_m$ and $n_d$ indicate the refractive index of the metal and the dielectric.
If we measure the intensity of reflected light, at a certain angle of incidence we will find an angle where the light shows a very low intensity. This happens because the incident light is completely absorbed by the electrons. The angle at which the reflected light shows the smallest intensity is usually called the SPR angle or the resonance angle. The resonance condition occurs when the wave number of the photon is equal to the wave number of the surface plasmon. This can be explained using a dispersion curve, as shown in Figure 2a. Surface plasmon waves cannot be excited by direct light because the wave vector of the surface plasmon is higher than that of the incident light. The wave vector of the surface plasmon (blue curve) will never intersect with the wave vector of the photon (red curve) over the entire wave number range. To achieve resonance conditions, the dispersion curve of the surface plasmon must be reduced or the dispersion curve of the photons increased. One widely used approach is to add a prism with a high refractive index to increase the photon dispersion curve. By adding a prism, the photon wavenumber changes from $k_x = (\omega/c)\sin\theta$ to $k_x = (\omega/c)\,n_p\sin\theta$, where $n_p$ is the refractive index of the prism and $\theta$ is the angle of incidence. Since the refractive index of the prism is constant, the resonance angle will depend on the presence or absence of absorbed molecules at the metal and dielectric interfaces on the sensing surface. The presence of absorbed molecules will result in a shift in the resonance angle. Therefore, we can monitor the presence or absence of adsorbed molecules, or find out how fast the molecules are adsorbed (kinetic analysis), from the SPR sensorgram (Figure 2b). There are several parameters that we can obtain from the SPR sensorgram, which are the association rate constant ($k_a$), dissociation rate constant ($k_d$), equilibrium association constant ($K_A$), and equilibrium dissociation constant ($K_D$). All these quantities will be discussed in the next section.
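To make the resonance condition concrete, the following sketch solves $(\omega/c)\,n_p\sin\theta_{res} = \mathrm{Re}(k_{sp})$ for the resonance angle. The optical constants (gold at ~633 nm, an aqueous dielectric, a BK7 prism) are illustrative assumptions, not values from this paper:

```python
# A minimal sketch of the resonance condition k_photon = k_sp for the
# prism-coupled (Kretschmann) configuration. Optical constants are assumed.
import numpy as np

eps_m = -11.7 + 1.2j   # dielectric constant of gold near 633 nm (assumed)
n_d = 1.33             # refractive index of the aqueous dielectric
n_p = 1.515            # refractive index of the prism (BK7)

eps_d = n_d ** 2
# Effective index of the surface plasmon wave, i.e. k_sp * c / omega:
n_sp = np.sqrt(eps_m * eps_d / (eps_m + eps_d))

# Resonance: (omega/c) * n_p * sin(theta) = Re(k_sp)
theta_res = np.degrees(np.arcsin(n_sp.real / n_p))
print(f"Resonance angle ≈ {theta_res:.1f}°")   # ~72° for these values
```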
In 1982, SPR technology was used for gas detection. SPR technology has continued to be developed for wider applications such as food safety [25], environmental monitoring [26], medical diagnosis and detection [27], drug discovery [14], and others. Apart from these, another focus in the development of SPR biosensors is transducer engineering to achieve sensitive sensors, so that analytes can be detected down to the smallest possible concentration. By modifying the SPR chip using 2D materials such as graphene and MoS2, various biomarkers with very small concentrations can be detected. Chiu et al. modified the surface of a thin layer of gold on an SPR chip with graphene oxide to detect human chorionic gonadotropin (hCG) proteins [28]. The detection limit that can be achieved by this sensor system is 0.065 nM, and, compared with conventional SPR biosensors, modification with graphene oxide can increase the sensitivity of the biosensor up to 16 times. In 2021, Chiu et al. also modified the SPR chip using MoS2 to detect pregnancy-associated plasma protein-A2 (PAPP-A2) [29]. The detection limit obtained was 0.05 pg/mL, with a linear range of detection from 0.1 to 1100 pg/mL.

Principle of SPR Kinetics

An SPR biosensor is a potential tool which can be used to investigate phenomena that occur on the sensing surface. Surface phenomena such as molecular adsorption cause changes in the refractive index and change the SPR angle. The SPR angle will shift to a higher angle when a molecule is adsorbed and will shift to a smaller angle when a molecule is released from the sensing surface. The response of the SPR biosensor due to a phenomenon on the sensing surface can be plotted in real-time, and the resulting curve is called a sensorgram. Kinetic parameters that describe binding events can be obtained, such as the association, dissociation, and equilibrium constants [30].

In the simplest SPR experiment, the experiment begins with the immobilization of the active ligand to specifically recognize the molecule to be detected (Figure 3a). Ligands can be immobilized on the sensor surface through a material that is usually called a self-assembled monolayer (SAM) [31]. The target molecule to be detected in this case is called the analyte. The buffer that flows over the sensor surface is called the running buffer [32]. It is essential to condition the sensor surface with an appropriate buffer solution, and the types of buffers that are widely used in SPR experiments are HEPES, Tris, or PBS [33].
As shown in Figure 3b, the capture of the analyte by the ligand begins with the conditioning of the running buffer signal. The signal on the sensorgram must form a stable baseline. External influences that cause signal fluctuations in the sensorgram, such as temperature, must be minimized as far as possible [34]. Once a baseline is obtained, the solution containing the analyte can be injected. Ligands immobilized on the sensor surface will capture the analyte, and this is indicated by an increase in the sensorgram signal. The magnitude of the sensorgram signal increase depends on the number of active ligands. After the active ligand pairs with the analyte, the sensorgram signal will be in an equilibrium state. This phase is called the association phase. In the same phase, non-specific interactions caused by the presence of impurities in the analyte solution are also very likely to occur. Therefore, washing is carried out using a running buffer. The remaining analytes and components that are not tightly bound will be removed from the sensor surface. This phase is called the dissociation phase. After this phase has been successfully completed, the sensorgram signal will show a steady state. Finally, the regeneration solution is injected onto the sensing surface to break the bond between the analyte and the ligand. If the ligands are immobilized properly, all ligands will remain on the sensing surface, and measurements for other analyte samples can be made using the same SPR chip [30,35-37]. SPR biosensors are usually equipped with two channels, where the first channel is used to obtain a sensorgram signal from the analyte, and the other channel is used to obtain a reference sensorgram signal. The actual signal is obtained after correction by subtracting the reference signal from the measured analyte signal [38,39].

If the ligand on the sensing surface, symbolized by B, binds to the analyte, symbolized by A, the bond between them produces a complex molecule, symbolized by AB. The interaction can be written as the following equation [37]:

$A + B \underset{k_d}{\overset{k_a}{\rightleftharpoons}} AB$

The association rate constant is defined as the number of complex molecules formed per unit time at given concentrations of A and B. This quantity is usually denoted by $k_a$, and in some references it is denoted by $k_{on}$. Furthermore, the dissociation rate constant indicates the number of complex molecules that decay over time. This quantity is usually symbolized by $k_d$, and in some references it is written as $k_{off}$. Equilibrium is reached when the rates of association and dissociation are equal. The association and dissociation equilibrium constants represent the affinity of the interaction between ligand and analyte. The affinity of the molecule for association is expressed by $K_A$. The last one is $K_D$; this quantity indicates the stability of the formation of the AB complex, where a high $K_D$ value indicates low stability of the formation or interaction of the A and B molecules [40]. Table 1 below shows the definitions, units, and typical ranges for $k_a$, $k_d$, $K_A$, and $K_D$.
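The standard 1:1 (Langmuir) interaction model behind these definitions is commonly written as dR/dt = k_a·C·(R_max − R) − k_d·R, whose closed-form solutions give the familiar association and dissociation phases of a sensorgram. A minimal sketch with illustrative rate constants, not taken from any study discussed here:

```python
# A minimal sketch of a 1:1 interaction sensorgram, using the closed-form
# solutions of dR/dt = ka*C*(Rmax - R) - kd*R. Rate constants are
# illustrative, in the typical ranges for biomolecular interactions.
import numpy as np

ka, kd = 1e5, 1e-3          # 1/(M*s), 1/s  ->  KD = kd/ka = 10 nM
C, Rmax = 50e-9, 100.0      # analyte concentration (M), capacity (RU)

KD = kd / ka
Req = Rmax * C / (C + KD)   # steady-state response at this concentration

t_assoc = np.linspace(0, 300, 301)             # association phase (s)
k_obs = ka * C + kd
R_assoc = Req * (1 - np.exp(-k_obs * t_assoc))

t_diss = np.linspace(0, 300, 301)              # dissociation phase (s)
R_diss = R_assoc[-1] * np.exp(-kd * t_diss)

print(f"KD = {KD * 1e9:.1f} nM, Req = {Req:.1f} RU")
```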
To produce a good sensorgram signal, there are a few tips to consider. Some of them are related to the pH and the type of buffer used, the reactive group, and the molecular weight. The reactive group is important because it ensures covalent coupling of the ligands on the sensor surface. Ligands must contain reactive groups such as -NH2, -SH, or -COOH to capture proteins and oligonucleotides. Molecular weight will affect the signal that will be generated, since smaller molecules change the refractive index less than larger molecules. In many cases, researchers usually immobilize the molecule with the smaller molecular weight as the ligand to obtain a higher signal [41]. The relationship between the molecular weight of the analyte ($MW_{analyte}$), the ligand ($MW_{ligand}$), and the ligand response ($R_{ligand}$) and the binding capacity of the analyte is shown in the following equation [42]:

$R_{max} = \frac{MW_{analyte}}{MW_{ligand}} \times R_{ligand}$

The response of the binding capacity of the molecule is only maximal when the ligand on the sensing surface is fully active. However, in many experiments, some ligands on the sensing surface are not active, so the signal response obtained is smaller.

COVID-19 Virus and Its Mutation

The SARS-CoV-2 virus is a member of the betacoronavirus genus and has a genome similar to that of SARS-CoV (about 80%) and Middle East respiratory syndrome coronavirus (MERS) (about 50%) [43]. In simple terms, this virus has a spherical shape with a diameter of 130 nm and is surrounded by spike-like structures on its entire surface, as shown in Figure 4. This virus encodes sixteen non-structural proteins (NSPs) and four structural proteins, which are the nucleocapsid protein (NP), spike glycoprotein (SP), membrane protein (MP), and envelope protein (EP) [44]. The SP is composed of an N-terminal S1 subunit and a C-terminal S2 subunit located near the membrane [45]. The S1 subunit contains the RBD, which can bind to ACE2 as a cellular receptor during virus entry, and after that, the transmembrane domain in subunit S2 will help the virus enter the host cell [44,45]. SARS-CoV-2 continues to evolve over time, and these changes can affect the characteristics of the virus, such as the speed of its spread. Since it was first identified in December 2019, SARS-CoV-2 has produced five major variants: alpha B.1.1.7 in September 2020, beta B.1.351 in October 2020, gamma P.1 in November 2020, delta B.1.617.2 in December 2020, and, most recently, omicron B.1.1.529 in November 2021. Before the discussion continues, it is very important to know the difference between a mutation and a variant, to make the discussion clearer and to avoid misunderstanding. Mutations are defined as amino acid exchanges (nonsynonymous or missense) in spike glycoproteins, while other nucleotide changes (synonymous or non-missense) are defined as variants [46,47]. One of the most significant mutations in SARS-CoV-2 is the D614G mutation (Figure 4). In the D614G mutation type, the amino acid aspartate (D) at position 614 in the viral spike protein is replaced by the amino acid glycine (G) [48,49].
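Mutation labels such as D614G encode the wild-type residue, its position, and the substituted residue. A small, hypothetical helper for parsing such labels (not part of any cited work):

```python
# Illustrative parser for point-mutation labels like "D614G" or "N501Y".
import re

def parse_mutation(label: str) -> tuple[str, int, str]:
    """Return (wild-type residue, position, substituted residue)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", label)
    if m is None:
        raise ValueError(f"not a point-mutation label: {label!r}")
    wt, pos, sub = m.groups()
    return wt, int(pos), sub

print(parse_mutation("D614G"))   # ('D', 614, 'G'): aspartate 614 -> glycine
print(parse_mutation("N501Y"))   # ('N', 501, 'Y')
```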
Mutations in the SARS-CoV-2 virus can occur at various positions in the viral genome. The SARS-CoV-2 genome is a long RNA chain, and mutations can occur in the spike protein, the RBD, the N-Terminal Domain (NTD), and others. The complete amino acid changes that occur as a result of the mutations are shown in Figure 5, and to illustrate the differences between each variant, we compared them to the structure of the ancestral protein. In addition, the RBD is the part of the S protein that interacts directly with the ACE2 receptor in human cells. Mutations in the RBD can affect the virus' ability to bind to human cells. Therefore, we show the mutations that occur in the RBD in Table 2.

Application of SPR Technology for SARS-CoV-2 Detection and Analysis of Its Binding

Biosensors are detection devices that use ligands as molecular recognition elements to detect the target molecule specifically. In the context of detecting the COVID-19 virus, several materials can be used as ligands, such as ACE2 [52], monoclonal antibodies (Mabs) [53], aptamers [54], peptides [55], and immunoglobulin molecules (IgM/IgG) [56]. Table 3 below summarizes the various nanophotonic biosensors that have been developed, and to illustrate sensor performance, we include the limit of detection (LoD) obtained from each sensor. Of the ligands mentioned above, ACE2 is a type of ligand that is widely used to detect the SARS-CoV-2 virus. ACE2 is a natural receptor that is available in various tissues in the human body, especially in cells in the respiratory tract (highest in the olfactory bulbs) [65]. SARS-CoV-2 uses the spike protein on its surface to bind to the ACE2 receptor on human cells. By utilizing this type of ligand, the binding affinity of different variants of SARS-CoV-2 resulting from mutations can be analyzed and identified based on changes in their kinetic parameters. This information is very important not only in virus detection, but also in vaccine development and drug discovery.

Although ACE2 has been identified as the main receptor that mediates entry of the SARS-CoV-2 virus into human cells, heparan sulfate has been identified as a molecule that plays a role in the early stages of viral infection in host cells. A comparison of the affinity of heparin and ACE2 for the spike protein was investigated by Liu et al. [66]. They started the experiment by immobilizing heparin and ACE2 on the surface of a streptavidin chip. After that, the K_D value was determined for three different protein samples, namely the RBD, the spike monomer, and the spike trimer. The analysis results show that the K_D value of the interaction with ACE2 is always smaller than that with heparin. In the case of the RBD protein, the K_D values for the heparin chip and the ACE2 chip were ~1000 nM and 3.6 nM, respectively. These results confirm that the RBD domain has a much higher affinity for ACE2 compared with heparin.

Previous results showed that the affinity of ACE2 was much higher than that of heparin. Therefore, Wrapp et al.
used ACE2 and compared k_a, k_d, and K_D for two different samples, namely the S protein of the novel coronavirus (2019-nCoV) and the RBD subdomain 1 (RBD-SD1) of SARS-CoV. Serial dilution of ACE2 was carried out to obtain a 1:1 binding stoichiometry. After ACE2 injection, a sensorgram for each concentration of ACE2 was obtained, as shown in Figure 6. The black line shows the real data, while the red line shows the fitted data. If we compare the K_D values of 2019-nCoV S and SARS-CoV RBD-SD1, the K_D value of SARS-CoV RBD-SD1 shows a much higher value of 325.8 nM. As explained in the previous section, a high K_D value indicates a low stability in the formation of bonds between the two molecules. Therefore, it can be concluded that 2019-nCoV S has a higher affinity, about 20 times higher than the binding between ACE2 and SARS-CoV RBD-SD1 [67].

In the same year, Lan et al. also compared the binding affinity between SARS-CoV-2 RBD and SARS-CoV RBD. ACE2 was employed as a ligand and immobilized on the CM5 chip sensor surface. The response generated after ACE2 injection was 500 response units. Serial dilution was carried out on samples of SARS-CoV-2 RBD and SARS-CoV RBD to obtain a 1:1 binding model using Biacore Insight evaluation software (GE Healthcare, Massachusetts, United States), and the concentrations obtained were in the range of 1.95 nM to 62.5 nM. Figure 7 shows sensorgrams for this concentration range, where the K_D values of SARS-CoV-2 RBD and SARS-CoV RBD were 4.7 nM and 31 nM, respectively [43]. Walls et al. in 2020 conducted a kinetic analysis between human ACE2 (hACE2) and SARS-CoV-2 S and SARS-CoV S using a biosensor based on biolayer interferometry (BLI). The results obtained showed that the K_D values of SARS-CoV-2 S and SARS-CoV S were 1.2 nM and 5 nM, respectively [68]. If we compare some of the results above, the K_D values obtained show slightly different magnitudes, but they have one thing in common: the K_D of SARS-CoV-2 S is always smaller than that of SARS-CoV S. The conclusion from our discussion above is that the kinetic data prove that SARS-CoV-2 S has a higher binding affinity to the ACE2 receptor.
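In practice, K_D values such as those above are obtained by fitting sensorgrams to the 1:1 model. A minimal sketch of an association-phase fit on synthetic data, assuming SciPy's curve_fit; real analyses (e.g., vendor evaluation software) fit several concentrations globally:

```python
# A minimal sketch of extracting kinetic constants from association-phase
# data by nonlinear least squares. The "data" here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def assoc(t, Req, k_obs):
    # 1:1 association phase: R(t) = Req * (1 - exp(-k_obs * t))
    return Req * (1.0 - np.exp(-k_obs * t))

t = np.linspace(0, 300, 151)
rng = np.random.default_rng(0)
data = assoc(t, 80.0, 0.006) + rng.normal(0, 0.5, t.size)  # noisy curve

(Req_fit, k_obs_fit), _ = curve_fit(assoc, t, data, p0=[50.0, 0.01])

# With k_obs = ka*C + kd measured at several concentrations C, ka and kd
# follow from a linear fit, and KD = kd / ka.
print(f"Req ≈ {Req_fit:.1f} RU, k_obs ≈ {k_obs_fit:.4f} 1/s")
```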
To date, the SPR biosensor has been proven to be usable for identifying mutation phenomena in the SARS-CoV-2 protein. The effect of D614G on the kinetic parameters of SARS-CoV-2 S proteins and ACE2 has been successfully investigated by Yurkovetskiy et al. [69]. In this research, the SPR biosensor was designed to distinguish between G614 and D614. A comparison of kinetic parameters between the SARS-CoV-2 D614G S protein and the ancestral protein was also carried out. The results of this study indicate that the detected RBD mutation has a correlation with SARS-CoV-2 infectivity. The continuing mutation of SARS-CoV-2 variants makes the virus more infectious upon entering the body. Several studies have reported changes in the binding affinity of ACE2 after the virus mutates [49]. Xue et al. investigated nine different mutations and compared them with the wild type (WT). The mutants investigated were Q498W, Q498R, T500W, S477H, Y505W, T500R, N501V, Y489W, and Q493M [70]. The K_D value of WT is 21.08 nM. Of the nine mutants investigated, three had higher K_D values, namely T500W (K_D = 21.8 nM), N501V (K_D = 158.50 nM), and Y489W (K_D = 38.90 nM). Since most of the K_D values decreased after mutation (the smallest K_D was that of Q493M at 6.9 nM, roughly a threefold increase in affinity over WT), this indicates that the presence of viral mutations can strengthen the binding affinity.
The effect of mutations was also investigated by Barton et al. in 2021 [71]. They investigated the affinity and kinetics of five types of RBD mutations (K417N, K417T, N501Y, E484K, and S477N) and two ACE2 mutations (S19P and K26R). Then, they compared them with the WT RBD (in Figure 8, the affinity and kinetics of the WT RBD are shown by dashed lines). As shown in Figure 8a, binding to ACE2 increased for the single RBD mutations S477N, E484K, and N501Y. Of these three single-mutation types, N501Y showed the highest increase, which was 10 times higher than WT RBD. Not only single mutations, but also the double (E484K/N501Y) and triple mutations (K417N/E484K/N501Y and K417T/E484K/N501Y) have higher affinity than the WT RBD. The same results also occur for the ACE2 mutations: both of the two types of ACE2 mutations investigated increased the binding affinity between ACE2 and the RBD.

Not only can changes in the binding affinity parameters of the SARS-CoV-2 virus caused by mutations be measured through experimental studies, they can also be predicted through computational studies. Various approaches to this problem have been developed, including Free Energy Perturbation (FEP) [72,73], machine learning [74], statistical potentials [75], and various force-field-related scoring functions embedded in programs such as FoldX [76] and Rosetta [77]. Sergeeva et al. investigated the effect of mutations on the binding affinity of ACE2 with SARS-CoV-2 using FEP [78]. In this study, they investigated the binding affinity (K_D) of ACE2 with the wild type (WT) and 23 single mutants. In addition, the epistatic effect of the Q498R N501Y double mutant of the omicron variant could also be determined accurately. These computational results have been successfully confirmed by SPR experiments.
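FEP-style calculations report binding free-energy changes, which map onto K_D fold-changes through $\Delta\Delta G = RT \ln(K_{D,mut}/K_{D,wt})$. A small worked sketch with an illustrative $\Delta\Delta G$ value, not one from the cited study:

```python
# Converting a computed binding free-energy change into the implied
# fold-change in KD. The ddG value below is illustrative.
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

def kd_fold_change(ddG_kJ_per_mol: float) -> float:
    """Fold-change KD_mut/KD_wt implied by a binding free-energy change."""
    return math.exp(ddG_kJ_per_mol / (R * T))

# A mutation computed to stabilize binding by 2.5 kJ/mol (ddG = -2.5)
# gives KD_mut/KD_wt ≈ 0.36, i.e. roughly 2.7-fold tighter binding.
print(f"KD_mut/KD_wt ≈ {kd_fold_change(-2.5):.2f}")
```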
Makowski and his colleagues have analyzed the mutational variations of SARS-CoV-2 against ACE2 using a computational approach supported by machine learning [79]. In this study, it was found that the affinity (K_A) of the beta (RBD mutations: K417N, E484K, N501Y) and gamma (RBD mutations: K417T, E484K, N501Y) variants tended to maintain their affinity level compared to the wild type (WT), remaining at around 1 × 10^11 M^-1. However, the affinity (K_A) of the alpha (RBD mutation: N501Y), epsilon (RBD mutation: L452R), and delta (RBD mutations: L452R, T478K) variants experienced a significant increase, reaching 1.48 × 10^11 M^-1 for the alpha variant, 2.54 × 10^11 M^-1 for the epsilon variant, and 4.37 × 10^11 M^-1 for the delta variant. This study also investigated the individual impact of 15 RBD mutations on ACE2 affinity. It was found that five RBD mutations (G339D, N440K, S477N, T478K, and N501Y) increased the affinity for ACE2. In addition, most RBD mutations are also predicted to increase antibody release, with the E484A mutation shown to substantially decrease the neutralization activity of human convalescent serum.

Affinity refers to the strength of an interaction between a particular molecular target and a receptor in a human cell. From the references discussed above, changes in the structure of the virus and the mutations that occur result in changes in its binding affinity. Understanding these affinity changes is critical for controlling viral transmission, vaccination strategies, and future drug development. In the context of biosensors, machine learning is not only used to obtain affinity data from SPR biosensors. Machine learning has also been utilized in studying different nonlinear optical effects. The combination of machine learning analysis tools and multiphotonic effects has enormous potential in data interpretation. Gupta et al. used Principal Component Analysis (PCA) as a machine learning tool to improve SERS signals [80]. The use of PCA in this research was able to increase the SERS signal up to three times. Other researchers, namely Paryanti et al. [81] and Williamson et al. [82], use a different algorithm, namely neural networks. Details regarding the use of machine learning for nonlinear optical effects are discussed in the following paper [83]. Martinez et al.
Conclusions and Future Prospects
The global COVID-19 pandemic has become a major threat to public health worldwide, and thousands of mutations have been identified in the SARS-CoV-2 genome. This paper summarizes the dynamics of SARS-CoV-2 mutations as studied with SPR biosensors. The main advantage of SPR biosensors is their ability to offer real-time monitoring of molecular interactions, a property that is especially important for studying the structure and dynamics of viruses. From the information described above, we know how the interaction affinity between ACE2 and SARS-CoV-2 changes, both for the wild type and for the mutants. Deeper insight into the interaction dynamics can be gained by analyzing the association and dissociation rates.

The SPR biosensor is a label-free biosensor. The absence of labels (e.g., fluorophores) minimizes the complexity of sample preparation and the risk of contamination; the analysis process becomes faster, and the measured signal directly represents the properties of the analyte itself. Progress to date has pushed SPR detection limits down to the atto (10^-18) order. Another advantage of the SPR biosensor is that its transducer is compatible with simultaneous use alongside other biosensors, making dual-mode sensors possible. In this regard, the SPR biosensor has been successfully integrated with electrochemical sensing [84], surface-enhanced Raman scattering (SERS) [85], and the electrolyte-gated field-effect transistor (EG-FET) [86].
Despite the advantages mentioned above, currently existing SPR biosensors, especially prism-coupled ones, are still bulky. To obtain a more compact and portable device, several research groups are therefore developing fiber-optic-based SPR biosensors. To achieve better sensitivity and lower detection limits, several structures have been developed, such as single-mode optical fibers (unclad, side-polished, tapered, and U-shaped), long-period fiber gratings (LPFG), tilted fiber Bragg gratings (TFBG), and specialty fibers (plastic or polymer, microstructured, and photonic crystal fibers) [87]. Furthermore, to detect several targets or biomarkers simultaneously in one test, multiplex biosensors need to be developed; this is very important for obtaining more comprehensive data. By combining multiplexing capabilities with high detection sensitivity, multiplex biosensors enable more effective monitoring of new variants and mutations that may emerge over time.

Figure 1. Schematic diagram to illustrate the contents of this paper.
Figure 2. (a) Surface plasmon dispersion curve. (b) Schematic of the prism-coupled SPR biosensor and the resulting signal.
Figure 4. Structure of the SARS-CoV-2 virus. Reproduced with permission from [45]. Copyright (2023), Springer. SARS-CoV-2 continues to evolve over time, and these changes can affect the characteristics of the virus, such as the speed of its spread. From its first identification in December 2019 to November 2021, SARS-CoV-2 acquired five named variants: alpha (B.1.1.7, September 2020), beta (B.1.351, October 2020), gamma (P.1, November 2020), delta (B.1.617.2, December 2020), and omicron (B.1.1.529, November 2021).
Table 1. Definition, units, and typical range of k_a, k_d, and K_D.
Table 2. Variants of concern, their lineages, and the locations of their mutations on the RBD [51].
Predictors of Depressive Mood in Patients With Isolated Cerebellar Stroke: A Retrospective Study

Objective To identify predictive factors of depressive mood in patients with isolated cerebellar stroke. Methods A retrospective chart review was performed in patients who had experienced their first isolated cerebellar stroke during 2002-2014. The patients were classified into two groups by the Geriatric Depression Scale (GDS) (non-depressive group, 0≤GDS≤16; depressive group, 17≤GDS≤30). Data on demographic and socioeconomic factors, comorbidities, functional level, cognitive and linguistic function, and stroke characteristics were collected. Significant variables in univariate analysis were analyzed using logistic regression. Results Fifty-two patients were enrolled, of whom 55.8% had depressive mood; these patients were older (p=0.021) and had higher rates of hypertension (p=0.014). Cognitive and linguistic functions did not differ between the two groups. The depressive group had higher rates of ischemic stroke (p=0.035) and more frequently had lesions of the right posterior cerebellar hemisphere (p=0.028), a factor that was independently associated with depressive mood in the multiple logistic regression analysis (odds ratio, 5.081; 95% confidence interval, 1.261-20.479). Conclusion The risk of depressive mood after cerebellar stroke was increased in patients of older age and with a history of hypertension, ischemic stroke, and lesion of the right posterior cerebellar hemisphere. The most significant determining factor was a stroke lesion of the right posterior cerebellar hemisphere. Early detection of risk factors is important to prevent and manage depressive mood after cerebellar stroke.

INTRODUCTION
Because post-stroke depressive mood (PSDM) affects participation in social activities and quality of life [2], numerous studies have been conducted to evaluate and manage PSDM in patients in the early phase of stroke [3-8]. Although the studies showed inconsistent results, factors such as sex, stroke severity, functional impairments, and family and social support were the most frequently mentioned risk factors [9]. However, most of these studies included patients with supratentorial stroke. A growing body of evidence has revealed that the cerebellum may play an important role in the regulation of emotion [10,11]. The study by Shah et al. [12] showed that patients with major depressive disorder have reduced cerebellar volume. Leroi et al. [13] found a high prevalence of depressive mood among patients with cerebellar degeneration. In cerebellar stroke patients, Frank et al. [14] observed significantly elevated depression scores. However, there are few studies on PSDM after isolated cerebellar stroke. Therefore, the aim of this study was to analyze the association between the clinical data of isolated cerebellar stroke patients and the occurrence of depressive mood, in order to identify predictive factors of depressive mood after isolated cerebellar stroke and to help its early detection.

Subjects
Data were collected retrospectively from the medical records of patients who were diagnosed with cerebellar stroke and hospitalized in the Department of Rehabilitation Medicine at Severance Hospital from January 2002 to December 2014. The inclusion criteria were (1) patients with isolated cerebellar stroke confirmed by a neuroimaging study (brain computed tomography [CT] or magnetic resonance imaging [MRI]) and neurological examination, (2) patients who experienced their first stroke, and (3) patients older than 18 years.
The exclusion criteria were (1) patients with disorders of consciousness (vegetative state or minimally conscious state), (2) patients with severe cognitive deficits, defined as Korean Mini-Mental State Examination (K-MMSE) scores <10 points, and (3) patients with a history of psychiatric or neurological disorders diagnosed before their stroke.

Demographic and socioeconomic characteristics
The clinical data of the subjects, including demographic characteristics such as age and sex, and socioeconomic characteristics such as employment status, family status, religion, urbanicity, and education level, were collected from their medical records.

Comorbidities
Data on medical histories and comorbidities such as hypertension, diabetes mellitus, smoking and alcohol consumption, cardiovascular disease, and orthopedic disease were collected from the medical records, as were data on other medical comorbidities such as pulmonary and kidney diseases.

Linguistic function, cognitive function, and functional status
The results of the Korean version of the Western Aphasia Battery (K-WAB) and the Boston Naming Test (BNT) were used to evaluate linguistic function [15]. To measure cognitive function, the K-MMSE results were used. The Functional Independence Measure (FIM) is a tool for assessing activities of daily living, with a total score of 126 points; 13 items assess motor function disability (FIM-motor) and 5 items assess cognitive function disability (FIM-cog) [16]. We analyzed the correlation of each domain with depressive mood.

Type of stroke and localization of lesion
The brain MRI and CT results of all recruited patients were reviewed by a radiologist with more than 3 years of clinical experience. The type of stroke lesion (ischemic or hemorrhagic) was recorded. Lesion locations were classified into the following five categories: right anterior, right posterior, left anterior, and left posterior lobes, and vermian lesion in the mid-sagittal view.

Evaluation of depressive mood
Depressive mood was evaluated according to the Geriatric Depression Scale (GDS) score [17]. This scale consists of 30 yes-or-no questions, with 1 point scored for yes and 0 points for no, for a total of 30 points. Scores ≥17 points have been reported to indicate a high possibility of major depressive disorder [18]. The patients were classified into two groups according to a cutoff score of 16 in order to identify correlations between depressive mood and clinical factors (non-depressive group, 0≤GDS≤16; depressive group, 17≤GDS≤30).

Statistical analyses
SPSS ver. 22.0 (IBM SPSS Inc., Armonk, NY, USA) was used for statistical analyses. The association between depressive mood and categorical variables, including sex, employment status, family status, religion, urbanicity, education level, various comorbidities, dysarthria, type of stroke lesion, and lesion location, was assessed using the χ² test and Fisher exact test. The Mann-Whitney U test was used for continuous variables such as age, K-MMSE, BNT, aphasia quotient, language quotient, and FIM score. To identify the risk factors for depressive mood, a binary logistic regression model was used; an illustrative sketch of this analysis is given below. Statistical significance was set at p<0.05.
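To illustrate the analysis just described, the sketch below dichotomizes GDS scores at the paper's 16/17 cutoff and fits a binary logistic regression with statsmodels. The data are synthetic and the variable names are our own assumptions; this is not the study's actual dataset or code.

```python
# Illustrative version of the paper's analysis: dichotomize GDS at 16/17,
# then model depressive mood with binary logistic regression.
# Synthetic data only; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 52  # same sample size as the study

df = pd.DataFrame({
    "age": rng.normal(60, 12, n).round(),
    "hypertension": rng.integers(0, 2, n),
    "ischemic": rng.integers(0, 2, n),
    "rt_posterior_lesion": rng.integers(0, 2, n),
    "gds": rng.integers(0, 31, n),  # GDS total score, 0-30
})

# Non-depressive: 0 <= GDS <= 16; depressive: 17 <= GDS <= 30.
df["depressive"] = (df["gds"] >= 17).astype(int)

X = sm.add_constant(df[["age", "hypertension", "ischemic",
                        "rt_posterior_lesion"]])
fit = sm.Logit(df["depressive"], X).fit(disp=0)

# Odds ratios with 95% confidence intervals, as reported in the paper.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```

With real data, one would first check the univariate associations (χ² or Mann-Whitney U, as described above) and enter only the significant variables into the model.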
RESULTS
Among all the patients, 137 had cerebellar stroke, of whom 52 satisfied both the inclusion and exclusion criteria, as shown in Fig. 1. We classified 23 subjects into the non-depressive group and 29 subjects into the depressive group (Fig. 1).

Depressive mood in relation to demographic and socioeconomic factors, and comorbidities
With respect to demographic factors, socioeconomic factors, and comorbidities, age (p=0.021) and history of hypertension (p=0.014) showed statistically significant correlations with depressive mood (Table 1). Other factors such as sex, employment status, family status, religion, urbanicity, and education level did not show significant correlations with depressive mood. In addition, comorbidities such as diabetes mellitus, history of smoking and alcohol consumption, cardiovascular disease, orthopedic disease, and other medical diseases showed no significant correlation with depressive mood. A trend toward depressive mood was observed among unmarried subjects, although the difference between the depressive and non-depressive groups was not statistically significant.

Depressive mood in relation to linguistic function, cognitive function, and functional level
A lower aphasia quotient showed a trend toward association with depressive mood (p=0.075), although the association was not statistically significant. With respect to the subjects' functional level, the FIM scores in the total, motor, and cognitive domains did not differ significantly between the two groups (Table 2).

Depressive mood in relation to type of stroke and localization of lesion
Among the lesion variables, stroke location in the right posterior cerebellar hemisphere (p=0.028) and ischemic cerebellar stroke (p=0.035) showed statistically significant associations with depressive mood (Table 3). The other lesional factors showed no statistically significant differences between the two groups. Table 4 describes the results of the multifactorial binary logistic regression analysis of the risk factors that could affect depressive mood. Lesion pathology, history of hypertension, and age were statistically non-significant in this model. Only stroke in the right posterior cerebellar hemisphere was identified as an independent risk factor for depressive mood (odds ratio, 5.081; 95% confidence interval, 1.261-20.479).

DISCUSSION
In this study, older age, history of hypertension, ischemic rather than hemorrhagic lesion, and lesion of the right posterior cerebellar hemisphere were correlated with depressive mood in isolated cerebellar stroke patients. When the effects of the other factors were adjusted for, lesion of the right posterior cerebellar hemisphere was the only factor that remained associated with depressive mood. Previous research identified female sex [19], severe functional impairment [20,21], previous history of depression [22], aphasia [23], and absence of family and social support [24] as risk factors for PSDM. However, our study did not show a relationship between these factors and depressive mood after cerebellar stroke. As our subjects were limited to those with isolated cerebellar stroke, the risk factors reported in previous studies may not apply directly. In this study, older patients were more depressed than younger patients. In the general population, the aging process itself increases the risk of depressive mood [25]. In stroke patients, the tendency to develop depressive mood after stroke is more frequently observed at an older age [26].
The atherosclerotic, inflammatory, endocrinologic, and immunologic changes related to the aging process might form the pathophysiological mechanism responsible for this increased vulnerability to depression [27]. The risk of depressive mood was also higher in patients with a history of hypertension. A history of hypertension can cause periventricular white matter hyperintensities [28], which are among the risk factors for depressive mood in the elderly [29]. For this reason, comorbid hypertension may increase the risk of depressive mood in patients with isolated cerebellar stroke, although we did not assess periventricular white matter changes. As another probable explanation, a study by Meurs et al. [30] demonstrated that patients with the combination of major depressive disorder and hypertension showed decreased brain volumes in areas associated with emotional regulation, such as the anterior and middle cingulate cortices and the cerebellum. Alterations of brain volume in these emotional regulatory areas might explain the comorbidity between hypertension and depressive mood. Further studies should be performed to confirm our results. Patients with ischemic cerebellar stroke were more vulnerable to depressive mood than those with hemorrhagic stroke; however, previous studies have shown conflicting results [20,24,31]. The present study showed that ischemic stroke was bivariately related to depressive mood, but the relationship did not hold in the multivariate logistic regression model. Similar results were observed in a study by van de Port et al. [31]. In our study, ischemic stroke patients were significantly older than hemorrhagic stroke patients (59.9±14.0 vs. 47.4±21.5 years; p=0.014); thus, older age could underlie the higher vulnerability to depressive mood among ischemic stroke patients. Our study showed that lesion of the right posterior cerebellar hemisphere was a risk factor for depressive mood, consistent with the findings of prior research [32,33]. These researchers argued that the posterior lobe, not the anterior lobe, contributes to higher-level processes such as mood regulation [32]. In addition, Damasio et al. [33] demonstrated that the right posterior cerebellar hemisphere was significantly activated by emotions such as anger and fear. A neuroanatomical study has shown that the cerebellar hemispheres project to the contralateral dorsolateral prefrontal cortex (DLPFC) through dentatothalamic fiber tracts and receive cortical input back to the cerebellum, forming a closed prefrontal-cerebellar circuit [34]. An imbalance between the left and right DLPFC has been demonstrated in major depressive disorder and linked to negative emotional judgment [35]. In our study, a lesion in the right posterior cerebellar hemisphere might have affected the balance between the left and right DLPFCs via this prefrontal-cerebellar circuit, causing depressive mood. Although prior studies showed that the cerebellar vermis and paravermian area are the most important anatomical locations in the cerebellum for emotional control [10,36,37], our results showed no statistical correlation between cerebellar vermian lesions and depressive mood. The reason might be the small number of patients with a cerebellar vermian lesion alone.
Most of the subjects had mixed hemispheric and vermian lesions, or no vermian lesion at all, which might have affected the results. Further study is needed to localize the lesions associated with depressive mood in patients with isolated cerebellar stroke by using functional neuroimaging of brain connectivity. This study has some limitations. First, the subjects' cognitive impairment could have influenced the development of depressive mood, as the patients had mild to moderate cognitive impairment; because the GDS is a screening tool for subjective depressive mood, reliable data could not be obtained from patients with moderate cognitive impairment. Second, the development of depressive mood could be affected by other functional factors such as balance [38] and fatigue [39]. In our study, the subjects' functional status was evaluated only with the FIM score, which does not correlate well with balance function or energy expenditure in stroke patients [40]. Third, the number of subjects was small. Further study with a prospective cohort design, a larger number of subjects, and consideration of cognitive and other functional factors is needed. In conclusion, older age, history of hypertension, ischemic stroke, and especially lesion of the right posterior cerebellar hemisphere could confer a higher risk of developing depressive mood in patients with cerebellar stroke. During rehabilitation, patients with these factors should be carefully observed for emotional changes to allow early detection and management of depressive mood after isolated cerebellar stroke.
Differences in Perceived Stress and Depression among Weight (Dis)Satisfied Midwestern College Students during COVID-19

Background: Stress and depression are common mental health concerns among college students. Factors related to weight status and stigma are associated with poor mental health outcomes. We sought to describe the prevalence of weight dissatisfaction in relation to stress and depression among college students (n = 551). Methods: A cross-sectional study was conducted with a convenience sample between December 2020 and February 2021. Mean differences in Perceived Stress Scale-10 scores and Center for Epidemiologic Studies Depression Scale scores were examined using one-way analysis of variance. Associations between stress, depression, and weight dissatisfaction were measured by logistic regression. Results: Weight-dissatisfied students (75.1%) had significantly higher mean depression scores than weight-satisfied students. For every one-unit increase in depression score, students were 1.05 times more likely to be weight dissatisfied. Significant mean differences in stress and/or depression were found for weight-dissatisfied students by gender, race, parental status, marital status, residence, and U.S. citizenship. Weight dissatisfaction was higher than that reported in the literature, possibly due to the influence of social isolation during the COVID-19 pandemic. Conclusions: Strategies to reduce the prevalence of weight dissatisfaction for improved mental health should be explored, particularly efforts to reduce weight stigmatization and expand access to mental health care.

Introduction
With the onset of the COVID-19 pandemic, college students faced new challenges, including isolation, fear of disease, and the shift to online education [1,2]. Consequently, college students' perceived stress and depressive symptoms were affected by COVID-19 [3,4]. Nationally representative data show that the mean BMI placed college students in the overweight category as of the Fall 2020 semester (mean BMI 25.30), which might reflect the impact of the COVID-19 pandemic [5]. Studies have shown that during this time, college students' emotional eating was associated with increased perceived stress levels [6]. In addition, research during the COVID-19 pandemic showed that 65% of students increased snacking and ate in response to the sight and smell of food, and 52% ate more in response to stress [7]. College students also reported that their eating changed during COVID-19, with more convenience foods and cheaper, more highly processed foods [8]; as a result, college students gained more weight over the course of the pandemic [9]. Research among college students has shown that changes in eating habits might lead to increased levels of stress [10], and experiencing higher levels of stress might lead to more depressive symptoms [11]. Finally, college students face pressure to achieve academic success, which might affect their eating habits and increase their levels of stress [12-14]. According to the American College Health Association's National College Health Assessment (ACHA-NCHA), 34.3% of college students are currently overweight or obese, with a mean body mass index (BMI) of 24.86 [15]. Recent studies have shown that moderate to high perceived stress among college students is associated with emotional eating and a lack of self-regulation of eating [16].
Increased emotional eating, in particular, has been associated with higher perceived stress among students of color [17]. Female students are more likely than male students to report higher levels of perceived stress [18]. Additionally, college student diets lacking frequent consumption of fruits and vegetables have been associated with higher perceived stress and depressive symptoms [19-21]. Similarly, college students with higher perceived stress are more likely to have diets high in fat and to consume more fast food [21,22]. Though the links between dietary patterns and mental health outcomes are clear, it is less clear how weight dissatisfaction may contribute to poor mental health outcomes. Nationally, close to half of college students report that they are currently trying to lose weight [15]. A recent study showed that adults who were dissatisfied with their bodies were more likely to be overweight than normal weight [23]. A nationally representative sample of U.S. adults found 67% to be body dissatisfied [24]. Body image has been defined as the thoughts, feelings, and emotions related to an individual's body shape, size, and attractiveness [25], whereas body dissatisfaction relates to unhappiness with how we look or with specific parts of the body, and to feeling overweight [26]. Studies have shown that college students with higher BMIs had greater body weight dissatisfaction [27]. Similarly, college students overall have a high rate of body image dissatisfaction, despite lower reported overweight and obesity compared with the adult general population [28]. Specifically, female students are more likely than male students to express body image dissatisfaction and a desire to be thinner [29]. However, male students who practiced weight suppression and had a greater overall mean BMI reported higher body dissatisfaction over a six-month period, in contrast to female students [30]; at the same time, female students overall experienced greater body dissatisfaction than male students [30]. Studies have shown that lower body image and body dissatisfaction might affect college students' overall mental health and, more specifically, their self-esteem and self-confidence [28]. Intervention studies have shown that both body satisfaction and mental health can be improved among college students through increased exercise and physical activity [31]. Along the same lines, body dissatisfaction has been shown to increase the risk of disordered eating among college females [32]. Research has shown that college students with eating disorders are more likely to be experiencing depressive symptoms [33], and despite losing weight because of an eating disorder, students might still remain dissatisfied with their bodies [34]. A sizeable amount of literature has focused on body (dis)satisfaction and body image, but to a lesser degree on weight (dis)satisfaction among college students, and more specifically on the impact of weight (dis)satisfaction on perceived stress and depression. The purpose of this study was to describe the differences between weight satisfaction and dissatisfaction in (1) perceived stress and (2) depression among a sample of midwestern college students.

Participant Recruitment
A convenience sample of enrolled students completed a self-administered online survey created via Qualtrics.
Data collection spanned two months, from 11 December 2020 until 12 February 2021. Six separate anonymous survey links were created and shared via email with various programs across campus to increase student responses. First, the survey was advertised in the daily university newsletter sent to all students and faculty with an active university email address. The survey was also sent to four programs (Public Health, Exercise Science, Nutrition, and Sports Psychology) in the Principal Investigator's home department, and to the Principal Investigator's school, which includes six other programs: Early Childhood Education, Elementary Education, Special Education, Secondary Education, Educational Leadership, and Psychology. No participant incentives were offered. Overall, a total of 607 students started the survey. The resulting Excel datasheets from the six individual survey links were combined into one final dataset, which was cleaned, with non-respondents removed. A total of 56 students were removed from the final analysis due to incomplete responses; n = 551 completed the survey. The study was approved by the Southern Illinois University-Edwardsville Institutional Review Board, protocol #1009.

Data Analysis
The data were analyzed using JMP Pro 16.2 (SAS Institute Inc., Cary, NC, USA, 1989-2021). Sociodemographic variables included gender (male/female), ethnicity (Hispanic/non-Hispanic), race (white vs. non-white; categories: Asian, African American, White, and multi-racial), parent of a child, first-generation student, current classification (undergraduate/graduate), current enrollment status (full-time/part-time), online/distance-education student, current employment status (employed/unemployed), hours worked per week, current residence (on campus/at home/off campus), and U.S. citizenship. Student respondents self-reported height, current weight, and weight gained during college. Self-reported weight was recorded in pounds (lbs); height was reported in feet and inches and converted to inches. BMI was calculated within the dataset using the equation BMI = 703 × lbs/in² (a worked example is shown below). Differences in mean PSS-10 scores, mean CES-D scores, mean BMI, and sociodemographic variables were examined using one-way analysis of variance (ANOVA). Significance was established at the p < 0.05 level for all statistical tests. Mean PSS-10, CES-D, and BMI are reported with standard deviations (SD). The association between mental health and weight dissatisfaction was measured with binary logistic regression models: crude models measured the association between stress and weight dissatisfaction and between depression and weight dissatisfaction, while adjusted models controlled for sociodemographic variables. Goodness of fit was measured with the Hosmer-Lemeshow test, and multicollinearity was assessed.
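As a concrete illustration of the BMI calculation described above, the short sketch below converts self-reported height and weight to BMI using the 703 conversion factor for imperial units. The function name and example values are our own, not drawn from the study data.

```python
# BMI from self-reported height/weight, as described in the Data Analysis
# section: BMI = 703 * weight_lbs / height_in**2, where 703 converts
# lb/in^2 to the metric kg/m^2 scale. Example values are hypothetical.
def bmi(weight_lbs: float, height_ft: int, height_in: float) -> float:
    total_inches = height_ft * 12 + height_in  # convert ft + in to inches
    return 703 * weight_lbs / total_inches ** 2

# A respondent reporting the sample's mean current weight (166 lbs)
# at an assumed height of 5'6":
print(round(bmi(166, 5, 6), 2))  # -> 26.79, in the "overweight" range
```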
Perceived Stress Scale (PSS-10)
The 10-item Perceived Stress Scale (PSS-10) was used to measure levels of stress among the student respondents. The PSS-10 has shown an internal consistency of α = 0.91 when measuring perceived stress among college students during COVID-19 [60]. Students were asked to respond to the 10 items (with reference to the last month) using a five-point Likert scale (never, almost never, sometimes, fairly often, and very often), scored 0-4, respectively. The responses were summed to provide a cumulative score for each respondent, and respondents who did not complete all 10 items were excluded from the final analysis. PSS-10 scores of 0-13 are considered low stress, 14-26 moderate stress, and 27-40 high stress. Internal consistency for the PSS-10 in this sample was α = 0.73.

Depression
The Center for Epidemiologic Studies Depression Scale (CES-D) is a 20-item validated scale that measures risk for clinical depression [61]. The CES-D has shown an internal consistency of α = 0.79 when measuring risk for clinical depression among college students during COVID-19 [52]. Students were asked to respond to the 20 items (with reference to the past week) using a four-point Likert scale (rarely [less than one day], some or a little of the time [1-2 days], occasionally or a moderate amount of time [3-4 days], and most or all of the time [5-7 days]), scored 0-3, respectively. The responses were summed to provide a cumulative score for each respondent, and respondents who did not complete all 20 items were excluded from the final analysis. Total CES-D scores range from 0 to 60; scores above 16 have been designated as suggestive of increased risk for clinical depression [61]. Internal consistency for the CES-D in this sample was α = 0.82.

Weight Satisfaction
Weight satisfaction was measured using one item: "How satisfied are you with your weight since COVID-19?" Respondents chose from a five-point Likert scale (very satisfied, satisfied, somewhat satisfied, dissatisfied, and very dissatisfied). For analysis, responses were dichotomized: very satisfied, satisfied, and somewhat satisfied were combined as "satisfied", while dissatisfied and very dissatisfied were combined as "dissatisfied". Chi-square analyses (χ²) were performed between the sociodemographic variables and weight satisfaction and were reported with odds ratios (OR) and 95% confidence intervals (CI). Independent-samples t-tests were performed for the continuous PSS-10, CES-D, and BMI scores by weight satisfaction (means ± SD, with 95% CI). Binary logistic regression models were run to determine potential relationships between the predictor variables perceived stress and depression and the binary response variable weight satisfied/weight dissatisfied. Additional binary logistic regression models added categorical predictor variables to this model, including gender, race, being a parent of a child, first-generation status, student classification, current enrollment status, enrollment in online classes, marital status, current employment status, current residence, and U.S. citizenship. (A sketch of the scale scoring appears below.)
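The scoring rules for the two scales reduce to summing item responses and applying cutoffs. The sketch below implements them exactly as described in the paper's simplified account (a straight sum of item scores, with incomplete cases dropped); the function and variable names are our own.

```python
# Score PSS-10 and CES-D as described in the Methods: sum the item
# responses, drop incomplete cases, and apply the stated cutoffs.
# This follows the paper's simplified description; names are hypothetical.
from typing import Optional, Sequence

def total_score(items: Sequence[Optional[int]], n_items: int) -> Optional[int]:
    """Sum item responses; return None if any item is missing."""
    if len(items) != n_items or any(v is None for v in items):
        return None  # respondent excluded from the final analysis
    return sum(items)

def pss10_category(score: int) -> str:
    # 0-13 low, 14-26 moderate, 27-40 high perceived stress.
    if score <= 13:
        return "low stress"
    return "moderate stress" if score <= 26 else "high stress"

def cesd_at_risk(score: int) -> bool:
    # Scores above 16 suggest increased risk for clinical depression [61].
    return score > 16

pss_items = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]   # ten 0-4 responses
pss = total_score(pss_items, n_items=10)
print(pss, pss10_category(pss))               # -> 23 moderate stress
```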
Participant Characteristics
Most students reported being dissatisfied with their weight (75.1%, n = 413), compared with 24.9% (n = 137) who were weight satisfied. See Table 1 for differences in participant characteristics by weight satisfaction. The mean perceived stress score for all students was 23.72 ± 3.74. Most students (75.6%, n = 403) reported being weight dissatisfied, with a mean perceived stress score of 24.02 ± 3.76; weight-satisfied students (24.4%, n = 132) had a mean perceived stress score of 22.80 ± 3.56. Weight-dissatisfied students had significantly higher perceived stress scores than weight-satisfied students [t(532) = 20.41; p < 0.0001]. See Table 2 for differences in mean perceived stress by sociodemographics and weight satisfaction.

Students responded to the 20 items of the CES-D scale. The mean depression score for all students was 25.13 ± 8.43. Most students (75.4%, n = 402) reported being weight dissatisfied, with a mean depression score of 26.06 ± 8.52; weight-satisfied students (24.6%, n = 131) had a mean depression score of 22.27 ± 7.51. Weight-dissatisfied students had significantly higher depression scores than weight-satisfied students [t(532) = 8.35; p < 0.01]. See Table 3 for differences in mean depression scores by sociodemographics and weight satisfaction.

A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress. After adjusting for sociodemographic characteristics and depression, no association remained between perceived stress and weight dissatisfaction (Table 4). A further binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress and depression. There was no association between perceived stress and weight dissatisfaction [OR = 1.04; 95% CI = 0.98-1.11; p = 0.22]. There was, however, a statistically significant association between depression and weight dissatisfaction: for every one-unit increase in depression score, the odds of being weight dissatisfied increased by a factor of 1.05 [OR = 1.05; 95% CI = 1.01-1.08].
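Per-unit odds ratios compound multiplicatively over larger score differences; the back-of-the-envelope calculation below is our own illustration of what an OR of 1.05 implies for a larger gap in CES-D scores, not a figure the authors report.

```latex
% Compounding a per-unit odds ratio over a k-unit difference:
% OR(k) = OR(1)^k. For OR(1) = 1.05 and a 10-point CES-D difference:
\[
\mathrm{OR}(10) \;=\; 1.05^{10} \;\approx\; 1.63
\]
% i.e., a student scoring 10 points higher on the CES-D has roughly
% 1.6 times the odds of being weight dissatisfied, other things equal.
```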
Most students were female (84.1%, n = 456). Female students were significantly more likely to be weight dissatisfied than male students (p < 0.001). Females had significantly higher overall mean perceived stress scores than males (24.00 ± 3.54 vs. 22.32 ± 4.45; p < 0.0001); see Table 2 for differences in mean perceived stress scores by gender and weight satisfaction. Females had higher overall mean depression scores than males, but the difference was not statistically significant (25.29 ± 8.35 vs. 23.56 ± 8.92); see Table 3 for differences in mean depression scores by gender and weight satisfaction. A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and gender. Both the crude and adjusted models were significant. The crude model found female students were 0.48 times as likely to be weight dissatisfied as males [OR = 0.48; 95% CI = 0.30-0.77], whereas in the adjusted model female students were 2.07 times more likely to be weight dissatisfied than males [OR = 2.07; 95% CI = 1.23-3.48] (Table 5).

A majority of students identified as white (79.1%, n = 436); non-white students identified as African American (12.0%, n = 66), multi-racial (4.7%, n = 26), and Asian (4.2%, n = 23). White students were 1.14 times more likely than non-white students to be weight dissatisfied [OR = 1.14; 95% CI = 0.70-1.85; p = 0.59]. See Table 2 for differences in mean perceived stress scores based on race and weight satisfaction, and Table 3 for differences in mean depression scores based on race, ethnicity, and weight satisfaction. There were no statistically significant differences in perceived stress or depression scores based on ethnicity and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and race. Neither the crude nor the adjusted model was significant (Table 5).

Most students were not parents of children (87.8%, n = 484). Students who were not parents had significantly higher overall mean perceived stress scores than students with children (23.88 ± 3.77 vs. 22.63 ± 3.36; p = 0.01); see Table 2 for differences in mean perceived stress scores based on parental status and weight satisfaction. Students who were not parents also had higher overall mean depression scores than students with children (25.35 ± 8.39 vs. 23.60 ± 8.65); see Table 3 for differences in mean depression scores based on parental status and weight satisfaction. There were no statistically significant differences in depression scores based on parental status and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and parental status. Both the crude and adjusted models were significant. The crude model found parents were 2.39 times more likely to be weight dissatisfied than nonparents [OR = 2.39; 95% CI = 1.15-4.95], whereas in the adjusted model nonparents were 0.36 times as likely to be weight dissatisfied as parents [OR = 0.36; 95% CI = 0.17-0.77] (Table 5).

A majority of students were not first-generation college students (64.0%, n = 352). First-generation college students were significantly more likely to be weight dissatisfied than non-first-generation students (p < 0.0001). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and first-generation status. Both the crude and adjusted models were significant. The crude model found first-generation college students were 0.49 times as likely to be weight dissatisfied as non-first-generation students [OR = 0.49; 95% CI = 0.32-0.76], whereas the adjusted model found first-generation students to be 1.92 times more likely than non-first-generation students to be weight dissatisfied [OR = 1.92; 95% CI = 1.22-3.03]. First-generation college students had higher overall mean perceived stress scores (23.86 ± 4.02 vs. 23.64 ± 3.59) and depression scores (25.88 ± 8.86 vs. 24.65 ± 8.13) than non-first-generation students; see Tables 2 and 3 for differences in mean perceived stress and depression scores based on first-generation status and weight satisfaction. There were no statistically significant differences in perceived stress or depression scores based on first-generation status and weight satisfaction (p > 0.05).

A near-even split of respondents was found between undergraduate (49.9%, n = 275) and graduate students (50.1%, n = 276). Undergraduate students had significantly higher overall depression scores than graduate students (25.92 ± 8.64 vs. 24.37 ± 8.18; p = 0.03); see Table 3 for mean CES-D scores between undergraduate/graduate students and weight satisfaction. Undergraduate students also had higher overall perceived stress scores than graduate students (24.02 ± 3.78 vs. 23.44 ± 3.69); see Table 2 for differences in mean perceived stress scores between undergraduate/graduate students and weight satisfaction. There were no significant differences in mean depression scores between undergraduate/graduate students and weight satisfaction.
A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and classification. There was no association between undergraduate classification and weight dissatisfaction in either the crude or the adjusted model (Table 5).

Most students were enrolled full-time (85.8%, n = 472). Full-time students had higher overall mean perceived stress scores than part-time students; see Tables 2 and 3 for mean perceived stress and depression scores based on current enrollment status and weight satisfaction. There were no statistically significant differences in perceived stress or depression scores based on current enrollment status and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and enrollment status. There was no association between enrollment status and weight dissatisfaction in either the crude or the adjusted model (Table 5).

Most students were enrolled in online classes (60.7%, n = 333). Online students had higher overall mean perceived stress scores (23.81 ± 3.78 vs. 23.58 ± 3.71) and depression scores (25.46 ± 8.42 vs. 24.63 ± 8.40) than on-campus students; see Tables 2 and 3 for mean perceived stress and depression scores based on enrollment in online courses and weight satisfaction. There were no statistically significant differences in perceived stress or depression scores based on enrollment in online courses and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and enrollment in online classes. Neither the crude nor the adjusted model was significant (Table 5).

A majority of students were single (82.7%, n = 454). Single students had significantly higher overall perceived stress scores than married students (23.89 ± 3.80 vs. 22.89 ± 3.39; p = 0.02); see Table 2 for differences in mean perceived stress scores by relationship status and weight satisfaction. Single students also had higher overall depression scores than married students (25.39 ± 8.40 vs. 24.02 ± 8.52); see Table 3 for mean depression scores by relationship status and weight satisfaction. There were no significant differences in mean depression scores based on relationship status and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and marital status. There was no association between marital status and weight dissatisfaction in either the crude or the adjusted model (Table 5).

A majority of students were employed (75.8%, n = 417), working a mean of 26.72 ± 12.52 hours per week. Weight-dissatisfied students worked significantly more hours per week than weight-satisfied students (27.60 ± 12.40 vs. 24.20 ± 12.58; t(376) = 5.43; p = 0.02). Employed students had higher overall mean perceived stress scores than unemployed students (23.74 ± 3.70 vs. 23.67 ± 3.89), while unemployed students had higher overall mean depression scores than employed students (25.55 ± 8.54 vs. 24.99 ± 8.41). See Tables 2 and 3 for mean perceived stress and depression scores based on employment status and weight satisfaction.
There were no statistically significant differences in perceived stress or depression scores based on current employment status and weight satisfaction (p > 0.05). A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and employment status. There was no association between employment status and weight dissatisfaction in either the crude or the adjusted model (Table 5).

Most students lived off campus (90.2%, n = 494): 47.4% (n = 260) lived in off-campus apartments and 42.7% (n = 234) lived at home. Students who lived in off-campus apartments had significantly higher perceived stress scores than those who lived at home or on campus (24.21 ± 3.74 vs. 23.40 ± 3.58 vs. 22.87 ± 4.17; p = 0.01); see Table 2 for differences in mean perceived stress scores based on current residence and weight satisfaction. Students who lived in off-campus apartments also had significantly higher depression scores than those who lived on campus or at home (26.20 ± 8.65 vs. 25.57 ± 7.18 vs. 23.88 ± 8.35; p = 0.01); see Table 3 for differences in mean depression scores based on current residence and weight satisfaction. A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and current residence. There was no association between current residence and weight dissatisfaction in either the crude or the adjusted model (Table 5).

Most students were U.S. citizens (91.6%, n = 504). U.S. citizens had significantly higher overall mean perceived stress scores than non-U.S. citizens (23.90 ± 3.64 vs. 21.87 ± 4.30; p < 0.0001); see Table 2 for differences in mean perceived stress scores based on U.S. citizenship and weight satisfaction. U.S. citizens also had significantly higher overall mean depression scores than non-U.S. citizens (25.48 ± 8.33 vs. 21.31 ± 8.69; p = 0.001); see Table 3 for differences in mean depression scores based on U.S. citizenship and weight satisfaction. A binary logistic regression model tested the probability of a student being dissatisfied with their weight as a function of perceived stress, depression, and U.S. citizenship. There was no association between U.S. citizenship and weight dissatisfaction in either the crude or the adjusted model (Table 5).

BMI, Weight Gain, and Current Weight by Weight Satisfaction
The mean BMI for students was 27.10 ± 7.20, and there were significant differences in BMI by weight satisfaction [t(540) = 54.46; p < 0.0001]: students who were dissatisfied with their weight had a significantly higher BMI than those who were satisfied (28.36 ± 7.58 vs. 23.34 ± 4.04). Students reported gaining a mean of 16.39 ± 18.08 pounds since starting college, with significant differences in weight gained by weight satisfaction [t(488) = 20.50; p < 0.0001]: weight-dissatisfied students gained significantly more weight since starting college than weight-satisfied students (18.44 ± 16.94 vs. 9.95 ± 20.07 pounds). The mean current weight for students was 166.08 ± 46.64 pounds, and there were significant differences in current weight by weight satisfaction [t(540) = 32.05; p < 0.0001].
Students who were dissatisfied with their weight had a significantly higher current weight than those who were satisfied with their weight (172.48 ± 49.24 vs. 147.03 ± 30.96).

Student Education Outcomes
Students responded to five items related to their mental health and education. Most students reported feeling isolated during the COVID-19 pandemic (77.9%, n = 422), having adequate support from family/friends during COVID-19 (76.6%, n = 415), finding it difficult to stay motivated to attend class and/or complete all coursework due to the pandemic (71.5%, n = 387), and not missing class and/or any assignments due to their mental health as a result of the pandemic (62.2%, n = 337). In addition, 11.3% (n = 61) reported a change in student status as a result of COVID-19 (e.g., withdrawing from classes or moving from full-time to part-time). Significant differences were found based on weight satisfaction. Students who were dissatisfied with their weight were significantly more likely than weight-satisfied students to report finding it difficult to stay motivated to attend class and/or complete all coursework due to the pandemic [78.8% vs. 21.2%; χ²(1) = 9.63; OR = 1.91; 95% CI = 1.27-2.90; p < 0.01]. Weight-dissatisfied students were also significantly more likely to report missing a class and/or assignments due to their mental health as a result of the pandemic [82.0% vs. 18.0%; χ²(1) = 8.00; OR = 1.84; 95% CI = 1.20-2.82; p < 0.01] and to report feeling isolated during the pandemic [77.3% vs. 22.7%; χ²(1) = 4.20; OR = 1.59; 95% CI = 1.02-2.49; p = 0.04]. No other significant differences were found (p > 0.05).

Discussion
The prevalence of weight dissatisfaction was much higher in the study sample (75.1%) than in previous reports. A study of community college students found that 41.7% of surveyed students were dissatisfied with their weight [62], and international studies reported weight dissatisfaction among 38.3% of undergraduates in India [63] and 47.2% of adolescents in Turkey [64]. Though cultural differences make it difficult to compare U.S. college students with international settings, both domestic and international studies consistently report that females are significantly more likely to be weight dissatisfied than their male counterparts [62,64]. It is possible that experiencing the COVID-19 pandemic contributed to the particularly high weight dissatisfaction in the study sample: a recent study of college students from the southeastern U.S. found that 67.1% of students reported increased concerns about weight and body shape since the start of the pandemic [9]. Weight dissatisfaction was significantly associated with both BMI and depression: weight-dissatisfied students were more likely to have a higher BMI and higher depression scores than weight-satisfied students. In Turkey, each unit increase in BMI resulted in a 7.5% increase in depression levels among young adults [65], and among U.S. young adolescent females, weight satisfaction was significantly correlated with depressive symptoms [66].
The relationship between depression and weight dissatisfaction remained after controlling for stress and sociodemographic characteristics, suggesting that depression and weight dissatisfaction are independently associated. Though not measured in the present study, weight stigma likely contributes to the association between weight dissatisfaction and depression: experiencing weight stigma has been found to be associated with weight dissatisfaction and with depression and anxiety [67-70]. Overall, depression appears to be more strongly associated with weight dissatisfaction than perceived stress is. After adjusting for depression and sociodemographic characteristics, there was no association between perceived stress and weight dissatisfaction, whereas the association between depression and weight dissatisfaction persisted after adjustment. First-generation college students were more likely to be weight dissatisfied and had higher PSS-10 and depression scores than non-first-generation students. Among students seeking services at a university counseling center, first-generation students were significantly more likely to experience distress related to academics and finances [71]. However, other studies have found weak to no associations between first-generation status and depression [71,72]. Depression scores by race differed significantly between weight-satisfied and weight-dissatisfied students. For all race categories, weight-dissatisfied students had higher depression scores; among weight-dissatisfied students, those who identified as multi-racial had the highest depression scores, followed by white, Asian, and African American students. Weight dissatisfaction has been found to vary by race among U.S. adolescents from the upper Midwest [73]. It has previously been reported that African American students, particularly African American women, report body dissatisfaction less frequently than their white counterparts; however, African American female students who were more enculturated reported body dissatisfaction at levels closer to those of white female students in the Midwest [74]. Thus, body dissatisfaction is likely tied more to prevailing cultural beliefs and norms than to racial categories themselves. The influence of U.S. culture may also be evident in the differences in depression and stress reported by students who were U.S. citizens compared with those who were not: for both depression and stress, U.S.-citizen students reported higher scores whether weight satisfied or weight dissatisfied.

Conclusions
A high prevalence of weight dissatisfaction was found in this sample of midwestern college students. Those who were weight dissatisfied scored higher on both the stress and depression scales across multiple variables, and depression was independently associated with weight dissatisfaction. Further descriptive and analytic studies are recommended to better understand the potential moderating role of U.S. culture (enculturation) in the associations between body dissatisfaction and mental health outcomes, and to determine whether differences exist by region. Additionally, strategies to reduce the prevalence of weight dissatisfaction for improved mental health should be explored, particularly efforts to reduce experiences of weight stigmatization and to expand access to mental health care.
Limitations
Though the results demonstrate important patterns in the relationships between weight dissatisfaction and mental health, they should be interpreted in light of a few limitations. Convenience sampling may reduce generalizability to other populations. Additionally, the cross-sectional nature of the study does not allow assessments of causality; it is unknown whether mental health preceded feelings of weight (dis)satisfaction or whether mental health outcomes are a consequence of weight dissatisfaction. The relationships between physical health, mental health, and body perceptions operate within complex pathways. Future analytic studies should develop and measure other potentially mediating and moderating variables within a directed acyclic graph (causal diagram) framework, particularly variables related to experiences of weight stigma and sources of weight messaging.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Southern Illinois University-Edwardsville (protocol code #1009, 10 December 2020) for studies involving humans.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to FERPA regulations.
Auctions with external incentives: experimental evidence
We consider auctions where bidders' valuations are positively correlated with their productivity in a second-stage aftermarket. We test in the lab whether bidders recognize the opportunity to signal their productivity through their bidding and, conditional on them doing so, whether disclosing different information about the auction outcomes affects their signaling behavior. Our results confirm that bidders recognize the signaling opportunities they face and also react to differences in the way their bidding behavior is disclosed, although not always in a way that is consistent with theoretical predictions.
Introduction
Often, bidders care about the reputational effects of their bidding. For instance, this is the case when managers bid on behalf of shareholders for a takeover of another firm. Imagine that the value of the target firm to each bidding firm has a (private) component which depends on their management's ability. That is, the higher this ability, the better the potential acquisition would be managed and the more valuable it would be. Thus, how much a bidding manager is willing to pay is a function of her ability as a manager. Insofar as managers' career prospects and future rewards depend on the post-auction inferences of markets about managerial ability to evaluate and manage assets, and managers are aware of this, the type of information that is disclosed at the end of the auction will influence their bidding. In other words, managers with career concerns will use their bids to publicly signal their ability. The same argument applies to bidding in spectrum auctions, to sports agents bidding for a free player, or to collectors who also offer consultancy services when bidding for a work of art to add to their collection.
In such environments, auction disclosure rules that pre-specify the type of information released at the end of the auction become important. They may have an impact on auction revenues, the bidders' career prospects, and/or the level and distribution of post-auction wages and managerial compensation. These effects will in turn influence government revenues from license auctions, the interaction of market discipline and performance of managers of potential target firms, the regulation of art auctions, and the scope of salary caps and the existence of undisclosed fees in sports transfer markets, to mention but a few. The significance of these makes it imperative to understand the effects of various disclosure rules on auction and post-auction outcomes.
This paper is an experimental investigation of the implications of disclosure rules on bidding behavior, auction outcomes and the remuneration of bidders' post-auction services. To study these effects, we frame the incentives between the auction and the possible longer-term effects in a setting where bidders' valuations in the auction are also linked to their productivity as workers. We chose this setting, which maps closely with the example of takeovers described above, because it is easy to explain to the participants in our experiment. As the structure of the experiment should clarify, however, the incentives in the experiment also apply to all of the examples described above. Specifically, we consider a setting with two stages. In the first stage, we run sealed-bid first-price auctions for a single object between two bidders. In the second stage, the bidders from the first-price auction take on the role of workers in a labor market where their wages are determined.
In the latter market, two firms may hire any number of workers (who can work for only one firm) and each firm bids a wage for each worker's services. The benefits to firms from hiring bidders/workers are the same, and proportional to the bidders' valuations of the auctioned object. This is effectively Bertrand competition between firms for the services of any given worker. Throughout, valuations are private information of workers and at the end of the first stage, the winning worker's identity is publicly disclosed. Treatments differ in terms of the information released to everyone at the end of the auction about submitted bids before the wage-setting market takes place. An important benefit of this setting is that it maps very closely with the theoretical framework in Giovannoni and Makris (2014), from which we derive predictions. In particular, we consider four disclosure rules. We have disclosure rule A (for "all"), where all the bids and the corresponding bidder's identity are revealed; disclosure rule N (for "none") where the winner's identity is revealed but none of the bids are; disclosure rule W (for "winner") where only the winning bid (and the winner's identity) is disclosed, and disclosure rule S (for "second") where only the losing bid (and the winner's identity) are disclosed. 1 We also consider as controls the disclosure rule B (for "base") where no second-stage wage-setting market takes place, and the situation (disclosure rule T, for "transparent") where a second-stage wage-setting market takes place but valuations are revealed at the end of the firststage auction. In both treatments, there is no theoretical scope for signaling through bids. We address two main questions. The first question is whether bidders recognize the signaling opportunities created by the presence of the aftermarket. If they do recognize these signaling opportunities, the second question is whether their behavior conforms to the different incentives implied by the different disclosure rules. We also ask, to the extent that the results do not conform to the theoretical predictions, what might explain the deviations from predicted behavior. Our results suggest that subjects' behavior provide a positive answer to the first question. In particular, workers bid higher than they would in the absence of an aftermarket to signal their productivity: we call this overbidding. With respect to the second question, subjects do not show the full sophistication needed for confirming (expected) revenue comparisons across treatments. We show how, for treatments A , W and N , but not for S , the amount of a particular worker's overbidding is indeed related to how likely it is that such bid is revealed. This shows an understanding of the underlying incentives, with the exception of the S treatment. These results imply that signaling incentives in auctions are fairly robust, and the impact of these effects should be taken into account when setting up or studying auction markets. Our results also suggest, however, that precise predictions on the effect of specific disclosure rules should be made with a degree of caution. In particular, we do find that revenue is higher when there are signaling incentives, a natural consequence of overbidding. However, while subjects understand the differences in signaling opportunities across disclosure rules, they do not do so to the extent that revenues are affected as predicted by theory. 
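To keep the six information conditions straight, the following minimal sketch encodes what each disclosure rule makes public at the end of the first-stage auction. It is purely illustrative (the type names and function are ours, not the paper's software): rule B is omitted because it has no aftermarket, so no end-of-auction information is used downstream.

```python
from typing import NamedTuple, Optional, Tuple

class Disclosure(NamedTuple):
    winner: int                          # winner's identity (always public)
    bids: Tuple[Optional[float], ...]    # per-bidder bid, None if undisclosed
    values: Tuple[Optional[float], ...]  # per-bidder valuation, None if undisclosed

def disclose(rule: str, bids: Tuple[float, float],
             values: Tuple[float, float], winner: int) -> Disclosure:
    """Public end-of-auction information under each disclosure rule."""
    hidden = (None, None)
    if rule == "A":   # all bids disclosed
        return Disclosure(winner, bids, hidden)
    if rule == "W":   # winning bid only
        shown = tuple(b if i == winner else None for i, b in enumerate(bids))
        return Disclosure(winner, shown, hidden)
    if rule == "S":   # losing bid only
        shown = tuple(b if i != winner else None for i, b in enumerate(bids))
        return Disclosure(winner, shown, hidden)
    if rule == "N":   # identity only
        return Disclosure(winner, hidden, hidden)
    if rule == "T":   # valuations themselves revealed
        return Disclosure(winner, hidden, values)
    raise ValueError(f"unknown disclosure rule: {rule}")

print(disclose("W", bids=(35.0, 52.5), values=(40.0, 65.0), winner=1))
# Disclosure(winner=1, bids=(None, 52.5), values=(None, None))
```

Under the partial-disclosure rules, what a bidder expects to be revealed about her own bid varies with her chance of winning; this is precisely what generates the treatment differences in overbidding incentives, and it is these differences that our subjects track only imperfectly.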
For example, if we compare the W treatment with the A treatment, theory predicts a steeper bidding function in the former, but overall higher expected revenues in the latter because incentives to overbid across individuals are higher. Our results find that the empirical bidding function in the W treatment is indeed steeper than that in the A treatment, but expected revenues are not significantly different across treatments. This indicates that bidders understand the relative differences but not the absolute ones across incentives. Put another way, bidders seem to understand the comparative statics but are not sophisticated enough to conform to the point predictions. We can show that most of these failures can be explained by the behavior of bidders whose valuations are quite low or quite high, whereas bidders with intermediate valuations tend to conform more to theoretical predictions.
There is a theoretical literature that deals with cases where reputational effects distort bidding behavior. Das Varma (2003), Haile (2003), Goeree (2003), and Salmon and Wilson (2008) discuss the effects of external incentives on bidding behavior. Their focus is on the comparison of various price mechanisms for a given disclosure rule. In our experiment, all our treatments instead fix the first-stage price mechanism (first-price auction) and vary the disclosure rules. This is because Giovannoni and Makris (2014) show that in an environment where external incentives introduce solely signaling incentives, disclosure rules are the crucial component in terms of expected revenue comparisons. Katzman and Rhodes-Kropf (2008) and Molnar and Virag (2008) also recognize the importance of disclosure rules, but their results are more ambiguous than those in Giovannoni and Makris (2014). Finally, Dworczak (2017) studies the problem of optimal disclosure rules, allowing for much more generality in the disclosure of information, but does not allow for reputational concerns for the losers in the auction.
Our paper also relates to a few strands of literature within experimental game theory. In the first instance, this work is the latest in a line of research exploring the motives for overbidding in auctions. A long-standing literature on experimental auctions studies the role of informational feedback and overbidding in first-price auctions. Isaac and Walker (1985) first studied this question. They compared the case where the winner's bid was revealed to the case where all bids were revealed; they found more overbidding in the former treatment than the latter. More recently, Dufwenberg and Gneezy (2002) and Ockenfels and Selten (2005) have extended the literature on informational feedback. One possible behavioral explanation as to why different modes of feedback can result in different levels of bidding relative to the risk-neutral Nash equilibrium is regret. This behavioral explanation was first proposed by Engelbrecht-Wiggans (1989), and extended by Engelbrecht-Wiggans and Katok (2005). Filiz-Ozbay and Ozbay (2007) propose a model of anticipated regret to explain why different informational conditions lead to overbidding. Another literature that explores overbidding in auctions is that on auctions with resale. Overbidding relative to the "standard" case occurs in this environment because bidders whose private values are low have the opportunity to recoup any losses incurred in the first auction through reselling the object (see Lange et al. 2011; Georganas 2011 and Georganas and Kagel 2011; Filiz-Ozbay et al.
2015; Pagnozzi and Saral 2017, for recent experimental investigations). In our paper, bid disclosure affects bidding behavior not for behavioral reasons, but as the profit-maximizing response to career concerns.
Our paper thus contributes to the relatively small experimental literature on signaling. Miller and Plott (1985) study a product quality signaling game; Cooper et al. (1997) study limit pricing games. More recent studies include Cooper and Kagel (2005), who study individual vs. team play in signaling games, Kübler et al. (2008), who study job market signaling in the lab, and Jeitschko and Normann (2012), who study signaling games with deterministic vs. stochastic signals. Dodonova and Khoroshilov (2014) examine the signalling hypothesis proposed by Fishman (1988) behind jump bidding in takeover auctions with entry costs. The innovation in our paper is that we study signaling through auctions, which introduces an additional layer of complexity. In some of our treatments, the information that becomes publicly available depends on the behavior of other agents: when only the winner's bid is disclosed, for example, any bidder is never certain that her bid will be disclosed.
Our paper is closely related to Bos et al. (2018), who experimentally investigate the role of an external observer on behavior in a two-player aftermarket. The external observer in their experiment performs the same role as the firms in our experiment, in that payoffs to bidders may depend on the observer's valuation. The experiment varies the auction format (first-price vs. second-price), the information given to the external observer about the winner's payoffs, and whether the observer's estimates of bidders' private values affect bidders' payoffs. Similar to our results, the authors find that signaling opportunities affect bidding behavior in the auction, even though bidding is not as aggressive as predicted by theory. They also find that the first-price auction when the winner's payment is revealed outperforms all other treatments in terms of revenue and efficiency. This paper complements our research in two dimensions. It compares two auction formats under one disclosure rule, while we focus on multiple disclosure rules under the first-price auction. It also considers a different implementation of the aftermarket stage.
In the next section, we introduce the theory and hypotheses behind our experiment. Section 3 describes the experiment and Sect. 4 collects our results. In the latter section we also provide a discussion of the results. Section 5 concludes, while the Appendix contains some more theoretical details as well as detailed instructions for the experiment.
Theory and hypotheses
Giovannoni and Makris (2014) provide a general analysis of the theoretical setting, but in that paper, the second-stage market is not explicitly modeled. In our experimental analysis, however, we have subjects explicitly participating in the second-stage market. Therefore, in this paper, we need to formally model these interactions. In this section we introduce the model and describe the main results together with the hypotheses we will test in the experiments. We leave some of the details for the Appendix. The Giovannoni and Makris (2014) framework applies to many different situations where bidding behavior, by revealing information about the underlying valuations of the bidders, may also affect the latter's opportunities in a future market interaction.
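Before turning to the formal model, a compact sketch may help fix the payoff structure in this kind of environment. The code below is purely illustrative — the function names, the Bertrand wage rule, and the use of the parameter α (0.4 in the experiment) are our assumptions for the sketch, not the paper's implementation: a worker's payoff combines her auction surplus with a wage equal to a fraction of the market's posterior estimate of her productivity.

```python
from dataclasses import dataclass

ALPHA = 0.4  # illustrative productivity fraction (the experiment uses 0.4)

@dataclass
class StageOutcome:
    won: bool      # did the worker win the first-stage auction?
    value: float   # worker's private value x_i
    bid: float     # worker's bid b_i
    wage: float    # highest wage offered in the aftermarket

def bertrand_wage(posterior_mean_value: float) -> float:
    """With two symmetric firms competing for a worker, the wage is bid up
    to her expected productivity: alpha * E[X | public information]."""
    return ALPHA * posterior_mean_value

def worker_payoff(o: StageOutcome) -> float:
    """Total payoff: auction surplus if the worker won, plus her wage."""
    return (o.value - o.bid if o.won else 0.0) + o.wage

# A worker with value 60 who wins at a bid of 45, when the market's
# posterior mean for her value is 60, earns (60 - 45) + 0.4 * 60 = 39.
print(worker_payoff(StageOutcome(True, 60.0, 45.0, bertrand_wage(60.0))))
```

The same structure covers the applications mentioned above, with "wage" standing in for any post-auction reputational return.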
For example, consultants in art markets will be aware that their bidding (advice) will reveal their expertise to future clients, and CEOs of firms that take over another firm will know that their bids will be a signal of their managerial ability. For the purposes of the experiment, however, and to make the underlying situations as simple and transparent as possible to participants, we consider a situation where bidders are "workers" whose valuation for a good in the first-stage auction is also an indicator of their productivity in a second-stage labor market where they are employed by "firms".
In particular, we have two auction stages with two bidders in each stage. In the first stage there is a first-price auction where two "workers" bid for a single unit of an indivisible good in a standard independent private value (IPV) setting. Each bidder i ∈ {1, 2} has a valuation x_i for the good, and the valuations are independently and uniformly distributed on [0, 100]. Let b_i be the bid of worker i. We denote x = (x_1, x_2) and b = (b_1, b_2), while capital letters denote random variables and small letters their realizations. In the second stage, there is a market ("the aftermarket") that determines which worker is employed, and at what wage, by existing firms. Firms are symmetric; their valuation for worker i's employment is αx_i, with α > 0, while they obtain zero if they do not employ a worker. Thus, we assume that a worker's willingness to pay for the object in the first-stage auction is proportional to her productivity for firms in the second period. We will consider several versions of this model, corresponding to different treatments in our experiment; these versions differ from each other according to how much information is publicly disclosed at the end of the first-stage auction about workers' valuations. Throughout, we will consider Perfect Bayesian Equilibria where, in the first-stage auctions, bidding functions are symmetric and monotone. We will simply refer to them as equilibria.
Disclosure rules
We begin with describing the aftermarket, which will be the basis for our comparisons across different disclosure procedures. It consists of two parallel first-price auctions. In second-stage auction 1, two "firms" A and B bid for worker 1's employment, while in second-stage auction 2, the same two firms A and B bid for worker 2's employment. Workers, at this stage, simply receive the highest bid in second-stage auction i as a wage. Let w_i^l be the wage offer from firm l ∈ {A, B} to worker i (i.e., firm l's bid in second-stage auction i), and let ω_l = (w_1^l, w_2^l) be the wage profile offered by firm l. Then, the utility for firm l in this stage, if she offers ω_l and the other firm m offers ω_m, is equal to
$$u_l(\omega_l, \omega_m) = \sum_{i=1,2} \mathbf{1}_{\{w_i^l > w_i^m\}} \left( \alpha x_i - w_i^l \right),$$
where 1_A represents the indicator function that takes value 1 iff A is true. Given the above, the total utility for worker i over the two stages, given b and equilibrium play in the second stage, is her auction payoff (x_i − b_i if she wins, zero otherwise) plus the wage she receives.
Note that winning or losing the first-period auction has no consequence for the second-stage wages in itself. However, as we shall discuss shortly, it may affect what firms know about the workers' valuations and hence the wage offers they will be willing to make. We will denote with I_ρ the information available to everyone in the second-stage market under disclosure rule ρ. Since this is publicly available information, it will be common knowledge amongst the firms and the workers. Let ŵ ∈ {1, 2} denote the winner of the first-stage auction and −ŵ denote the loser. The disclosure rules are: 1.
Transparent: = T . Here x 1 and x 2 are publicly revealed and so I T = . We also consider the benchmark case = B, where no aftermarket exists, which corresponds to the standard IPV setting. The information available at the end of the auction is obviously irrelevant in this case. 4 We start our analysis from the second stage. Here the uninformed parties (the firms) make the offers in the second-stage auctions. Risk neutrality and independence of the valuations mean that the value of employing a worker is a fraction of the worker's valuation of the good in the first stage. Since information at the end of the first-stage auction is publicly available, both firms form the same expectation, E X i |I , for each worker i, when they contemplate their offers. The following proposition follows immediately: Proposition In any equilibrium, and conditional on the disclosure rule , there is a unique profile of wages A ( ), B ( ) offered by firms in the aftermarket, where for each worker i = 1, 2 The intuition here is that firms are involved, in effect, in symmetric Bertrand competition. As a result, workers are able to extract all the expected (from the firms' point of view) surplus they can generate for firms. We can now turn to the study of the first-stage auction. Given the proposition, the procedure follows that of Giovannoni and Makris (2014), which we summarize here (more details are given in the Appendix). The analysis entails first putting a simple restriction on off-the-equilibrium-path beliefs, which in turn allows us to define bidder effective valuations under An effective valuation is a function of the standard valuation x i that captures all that is at stake for a bidder in the first-stage auction given equilibrium play in the aftermarket. In more detail, we have the direct utility from winning the auction x i , but also the reputational returns that i can expect in the second-stage auction(s) as a function of her valuation. These reputational returns are divided into two further components: one that captures the net reputational gain to the bidder from winning the auction, and one that captures the additional reputational net gain from marginally increasing the bid. As shown in Giovannoni and Makris (2014), the optimal bidding function in the first-stage auction is the standard first-price bidding function in the IPV setting but with effective valuations replacing standard valuations. Letting denote such bidding functions, we get that the equilibrium bidding functions are: Figure 1 plots the theoretical bidding functions. The intuition for the bidding functions B and T is straightforward: in the first case there is no aftermarket and we have the standard IPV result. In the second case, valuations are publicly revealed Auctions with external incentives: experimental evidence at the end of the first-stage auction, and so bidders have no incentive to use bids as a signaling device. As a consequence, we obtain again the behavior of the standard IPV setting. All other disclosure rules imply overbidding relative to B and T because now being perceived to have high valuations is beneficial. 5 Furthermore, the way overbidding obtains depends on the disclosure rule. To understand how overbidding is shaped, we begin with a comparison between A and W . For A , note first that in a monotone equilibrium bids reveal exactly a bidder's valuation and so other bidders' bids have no impact on reputational returns. 
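As a numerical companion to these bidding functions, the sketch below first verifies the benchmark equilibrium and then shows how overbidding feeds through to expected revenue. Only two ingredients are taken from the paper — the standard IPV function x/2 for the B and T cases and the A-treatment function x/2 + 40 reported later for α = 0.4; the rest is a generic illustration under the two-bidder, uniform-[0, 100] assumptions.

```python
import numpy as np

# Benchmark (B/T): if the opponent plays beta(x) = x/2, a bidder with value x
# wins with probability P(X_opp/2 < b) = min(2b/100, 1).
def win_prob(b):
    return np.clip(2 * b / 100, 0.0, 1.0)

bids = np.linspace(0, 100, 10_001)
for x in (20.0, 50.0, 80.0):
    best = bids[np.argmax((x - bids) * win_prob(bids))]
    print(x, best)  # best response is ~ x/2, confirming the benchmark

# Expected revenue is E[beta(max(X1, X2))]. For two independent U[0, 100]
# draws, E[max] = 200/3, so a constant markup raises revenue one-for-one:
rng = np.random.default_rng(0)
vmax = rng.uniform(0, 100, size=(1_000_000, 2)).max(axis=1)
print((vmax / 2).mean())       # ~33.3 under B/T
print((vmax / 2 + 40).mean())  # ~73.3 under A (beta_A(x) = x/2 + 40, alpha = 0.4)
```

Since the theoretical bidding functions are linear, revenue comparisons across treatments reduce to comparing the slopes and intercepts that each disclosure rule induces.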
In contrast, in auctions where the disclosure rule is W , winning or losing does matter for inferences about x i because if i wins then b i becomes known, while if i loses then firms believe b i to be below the competing (disclosed) bid. In addition to the reputational gain from winning, there is also a reputational gain (relative to the increase in the likelihood of winning the auction) for bidder i from increasing her bid marginally, because, by doing so, she might increase marginally the perception of firm(s) about her valuation. For disclosure rules A and W , this relative gain is the same conditional on own bid being disclosed. However, in the former case, all bids are always disclosed, while in the latter case, the only disclosed bid is the winner's. The result is that in auctions where the disclosure rule is A , bidders with different valuations have similar incentives for overbidding. With disclosure rule W , in contrast, the incentive to overbid is much higher for workers with high valuations (who are more likely to win and have their precise bids publicly revealed) than for workers with low valuations (who are more likely to lose and for whom all that will be known is that their bid is below the winner's bid). 6 The intuition for S auctions is the opposite to that in W auctions: now it is bidders with low valuations whose bids are more likely to be disclosed and such bidders have a greater incentive to overbid. 7 Finally, with disclosure rule N , only the identity of the winner is disclosed so that signaling only comes from winning or losing the auction. In particular, because none of the bids is disclosed, the net reputational gain from winning the auction must be constant, and so bidders of all valuations have the same incentive to overbid. Still, there is less overbidding under N than under A (where incentives to overbid are also constant across types). Under disclosure rule N , no bid is ever disclosed and thereby the relative net reputational returns bidders can expect from marginally increasing their bid do not exist with disclosure rule N . 5 Giovannoni and Makris (2014) show that if being perceived to have a low valuation is beneficial then underbidding will occur in equilibrium. The same could be obtained here by assuming that worker productivity for the firm is 100 − x i . 6 The fact that in A auctions the overbidding incentives are exactly the same across valuations is a consequence of the fact that we have two bidders and a uniform distribution (which is also responsible for the linearity in the bidding functions). 7 Thus, for disclosure rule S the incentive to overbid for low types "flattens" the bidding function. It is easy to see that these incentives cannot be too great otherwise the bidding function would no longer be increasing and the equilibrium we focus on would not exist. With our parameterization this means that we need < 2 3 . Finally, it is also easy to calculate expected revenues in each treatment using the formula We summarize these results in the following set of hypotheses. Our two main points of interest in the first-stage auction are i) whether there is overbidding as predicted by the theory and ii) whether, conditional on such overbidding, the comparative statics for disclosure rules A, S, W, N apply. We begin by looking at expected revenues: Hypothesis 1 Expected revenues are highest in the A and S treatments, and lowest in the B and T treatments, with the W and N treatments being the intermediate case. 
Hypothesis 1 is not, however, the only possible test of the theory. Even if our results confirm the hypothesis, this may happen with bidding functions that differ significantly from the predicted ones. As is well known in the experimental literature on auctions (Kagel 1995;Kagel and Levin 2008), bidding generally does not conform to point predictions made by the theory. Thus, we will also consider a different hypothesis that compares bidding functions across different disclosure rules qualitatively, and allows us to see whether bidders understand the different nature of the signaling opportunities generated by the type of disclosure they face. Our theoretical bidding functions are linear in valuations and so can be fully summarized by the slope and their vertical intercept. Simple inspection of the slope and vertical intercept of each of these bidding functions will then give predictions for how low types should bid in one disclosure rule over another and how rapidly bids should increase in one disclosure rule over another as valuations increase: Hypothesis 2 (a) The empirical bidding functions for the treatment are ranked as follows in terms of slope: The empirical bidding functions for the treatment are ranked as follows in terms of intercept: Our model also generates predictions with respect to second-stage outcomes. We first look at workers' wages. Recall that wages should be equal to the expected productivity of each worker (conditional on the publicly available information at the end of the first-stage auction). We can then write wages w i for i as a function of her bid b i and her opponent's bid b j 8 : See the appendix for details, keeping in mind that in the T treatments, the valuations are known to the firms so that bids are irrelevant. Also, note that in the experimental setting we cannot in principle exclude that two workers bid the same amount in the first-stage auction and this creates the issue of how to define a winner in the event of a tie. This matters particularly for wage determination with disclosure rules W, S and N . We resolve the problem by determining the winner via a lottery and providing the information according to the disclosure rule. For example, with disclosure rule W , if b i = b j but i wins the lottery, only her bid is disclosed. These results can be summarized by a second set of hypotheses. We begin with a simple consequence of the fact that across disclosure rules, we predict bidding functions that are increasing in valuations and we disclose the identity of the winner. Since we also predict that wages will be a function of expected valuations, then we expect: Hypothesis 3 Across disclosure rules, winners receive higher wages than losers. To test more precise predictions regarding wages, we again rely on comparisons across disclosure rules: 9 Hypothesis 4 With disclosure rules T and A , the two workers share the same wage function. 10 With disclosure rule W , the winner's wage is more responsive than the loser's wage to the winner's bid, while with disclosure rule S the loser's wage is more responsive than the winner's wage to the loser's bid. 11 These predictions are intuitive if one recalls that wages are only a function of the aftermarket's expectation about workers' productivity. The immediate consequence is that with disclosure rules T and A , a worker's wage is based on her own valuation or bid, while with disclosure rules W or S , the wage must be based on the only bid that is observed: respectively, the winner's or the loser's. 
In each of these last two cases, the impact on wages will depend on whose bid is the one that is disclosed. As one would expect, the marginal effect of the publicly observed bid on the wage of the bidding worker must be stronger than the marginal effect on her opponent's Recall that in the B treatment, there is no aftermarket. 10 Of course, in the T case, such wage function is a function of the bidder's valuation x i while in the A case, it must be a function of her bid b i . In fact, 11 In the N treatment, bids are not disclosed, so the wage functions under this treatment cannot be part of the comparisons in Hypothesis 4. wage. This explains the differences in the wage functions of the winner and loser under W or S. Finally, one last testable implication of our model is that it predicts that, conditional on a given expected valuation, all the surplus (as expected by firms) will go to the workers. In other words: Hypothesis 5 Under all disclosure rules, firms will make zero profits. Notice that this hypothesis requires both that firms can form correct beliefs about the workers' valuations and that they conform to Bertrand competition. In the case of disclosure rules T , firms know the workers' valuations while in all other cases, they do not and have to form beliefs relying on disclosed bidding behavior. We will use this difference to investigate Hypothesis 5 further. Experimental design and procedures Our experiment implements the model laid out in Section 2 in a between-subjects design. Table 1 outlines the experimental design. We consider six treatments. Five treatments differ in the information revealed after the first stage auction: in T , both workers' private values are revealed; in W , only the winning worker's bid is revealed; in S , only the losing worker's bid is revealed; in A , both workers' bids are revealed; and in N , neither values nor bids are revealed. Finally, in B , we consider a standard first price auction without an aftermarket. Upon arrival to the laboratory, subjects sat in individual computer booths. Verbal communication was not allowed at any time. We gave written copies of the instruction sets to subjects (reproduced in the Appendix), and we publicly announced that everybody in a given role was reading the same set of instructions. Subjects had a maximum of 10 min in which to read the instructions; after that time elapsed, subjects had the opportunity to ask clarification questions in private. Once all queries were answered, the experiment started. Subjects were assigned to the role of firm or worker at the beginning of the experiment, and they kept their roles until the end of the experiment. Subjects had five practice periods in which to familiarize themselves with the software interface and the auction environment. Once the five practice periods were concluded, the experimenters made a public announcement that all further rounds would be incentivized. In every period of the experiment (including the practice periods), subjects were randomly matched with other participants in the session. In all treatments, the first-stage auction had two workers bidding for a prize, whose value was an i.i.d. draw from a uniform distribution with support {0.01, 0.02, … , 100.00} , as is standard in the experimental literature on privatevalue auctions (c.f. Goeree and Holt 2002;Güth et al. 2005). Applying an i.i.d. draw for each participant in a given session is closest to the spirit of the theory. 
When the support of the distribution of private values is large relative to the number of draws, it ensures that the results will be based on (in expectation) a larger set of realizations. The main alternative to this approach would be to use a pre-randomized sequence of private values. Its main advantage is comparability of behavior, and less variability across sessions. This advantage translates into more statistical power. However, given that the set of possible private values is very large (10,000 possible values), we feel that using the same sequence of only 35 realizations per bidder would make our results unrepresentative. Worker bids could be made from the set {0.00, 0.01, 0.02, … , 100.00}. 12 After both workers placed their bids, a screen provided feedback to the workers and firms; the type of feedback was a function of the disclosure rule. The second-stage auction followed the feedback screen. The two firms made a wage offer for each of the two workers, also from the set {0.00, 0.01, 0.02, … , 100.00} . A worker would be assigned to the firm who made the highest wage offer; it was therefore possible for one firm to hire both workers. The value to firms from hiring a worker was equal to 40% of the worker's private valuation of the object on sale in the first-stage auction. In other words, the value of the parameter was set to 0.4. There were a total of 35 incentivized periods in the experiment. The payment was the payoff from three randomly picked periods, plus a show-up fee. Since our theory predicted firms would earn significantly lower payoffs than workers, we set the show-up fee for workers to be equal to £5, and the show-up fee for firms to be £10. We conducted three sessions with 12 participants in each session. Each session implemented one condition. No participant took part in more than one session. Our subjects were recruited from a pool of volunteers, all of whom were undergraduate students from a wide range of disciplines using the lab's ORSEE system (Greiner 2015). The experiment was programmed using Z-Tree (Fischbacher 2007). A total of 216 subjects took part, none of whom had ever taken part in an auction or market experiment before. Sessions lasted on average 90 minutes, and the average payment was approximately £18.50 (which is more than twice the minimum hourly wage in the UK). Results The unit of analysis is the bidding decision by an individual (firm or worker) in an incentivized experimental period (our analysis is robust to the inclusion of the five practice periods), or an individual's per period revenue. Unless otherwise noted, we will employ random effects estimators with clustered standard errors at the session level, to account for the random matching protocol. We also consider extended models that include a linear time trend. We replicate our analysis by considering only the final 20 rounds, to control for any learning that may have taken place in the first third of the experiment. Our results are robust both quantitatively and qualitatively. Results are available upon request. First-stage auction bidding We start by looking at revenue in the first-stage auction, summarized in Table 2. 13 Result 1: Limited disclosure generally leads to overbidding but there is weak support for our revenue predictions. Support: We find partial confirmation of our theoretical predictions. Hypothesis 1 stated revenues should be lowest in T and B . Average revenue is indeed very similar in T and B ( 2 (1) = 0.04, p = 0.836 ), as predicted. 
While average revenue in all other treatments is nominally higher, it is only significantly higher in the case of Table 2 First-stage auction revenue Omitted category is B . In the N treatment we collected 12 workers per session, as there were no firms. In all other treatments, we collected only 6 workers per session. As a result, In all other cases, we find no significant differences (a full breakdown of all tests is in Table 8 in the Appendix) between treatments without signaling opportunities ( B and T) and treatment with such opportunities (all others). With respect to the latter group of treatments, with the exception of W and S ( 2 (1) = 4.06, p = 0.044 ), we find no significant differences in average revenue across treatments. While our theory predicted revenue equivalence between some disclosure rules, it did not predict it for others. We now turn to the bidding functions themselves. Table 3 shows results from regressions of the bid placed by worker i in experimental period t, b i,t , on that worker's private value in that period, X i,t , and dummy interactions with the relevant treatments and their corresponding dummy intercepts. In order to facilitate the interpretation of the table and for ease of comparison with point predictions, we present estimated coefficients directly, rather than the estimated dummy interaction coefficients. Table 9 in the Appendix shows results from the same regression in the traditional format. 14 Result 2: The empirical bidding functions are ranked in terms of slopes as follows: W > B = N = A = T = S . The empirical bidding functions are ranked in terms of intercepts as follows: A = S = N > W = T > B. Support: We start by comparing estimates of slopes. The first part of Hypothesis 2 pertains to the equality of estimated slope coefficients in the B , T , N and A treatments. We fail to reject the null of equality for all pairwise tests of equality slope of slope coefficients ( 2 (1) ≤ 1.96, p ≥ 0.161 ) (see Table 10 for all pairwise test results). We also fail to reject a joint test of equality of the estimated coefficients ( 2 (3) = 2.23, p = 0.526 ). The next part of Hypothesis 2 stated that the estimated slope on the S treatment is smaller than all other estimated slope coefficients. However, the slope coefficients in the S and T treatments are virtually the same and not statistically significantly different ( 2 (1) = 0.08, p = 0.781 ). When comparing A and S , the slope coefficients are nominally different but statistically not Session-clustered SEs in parentheses. * * * , * * p < 0.01, p < 0.05 significantly different ( 2 (1) = 0.98, p = 0.322 ). We note that the estimated coefficients are very tightly bound, so this lack of statistical significance is unlikely to be from noise. In all other pairwise comparisons, the differences in estimated slope coefficients were highly significant ( 2 (1) ≥ 9.06, p ≤ 0.003 ). Furthermore a joint test of equality across all pairwise comparisons with S yielded a significant difference ( 2 (4) = 21.21, p < 0.001 ). The final comparative static prediction was that the slope of the bidding function in W was steeper than in other treatments. With the exception of the comparison with B ( 2 (1) = 2.60, p = 0.107 ), all pairwise tests were significant ( A = W : 2 (1) = 3.41, p = 0.065 ; T = W : 2 (1) = 6.30, p = 0.012 ; N = W : 2 (1) = 35.04, p < 0.001 ); a joint test of equality was also highly significant ( 2 (4) = 37.21, p < 0.001). We proceed with estimated intercept comparisons. 
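For readers who wish to reproduce this kind of specification, the following is a hedged sketch of the Table 3 estimation, assuming a hypothetical long-format dataset with one row per worker-period and columns `bid`, `value`, `treatment` (B/T/A/W/S/N) and `session`; the file name and column names are ours. The paper uses random-effects estimators, so this pooled OLS version with session-clustered standard errors is an approximation of the reported approach, not a replication of it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per worker-period.
df = pd.read_csv("bids.csv")  # placeholder file name

# Treatment-specific intercepts and slopes, SEs clustered by session.
model = smf.ols("bid ~ C(treatment) * value", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["session"]}
)
print(model.summary())

# Treatment-specific slopes/intercepts are recovered by summing the base
# coefficients with the relevant interaction terms; equality restrictions
# can then be checked with Wald tests, e.g.
# model.wald_test("C(treatment)[T.W]:value = 0")
```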
The model predicts that the estimated bidding functions for W , B and T have the same intercept. We reject that hypothesis ( 2 (2) = 33.25, p < 0.001 ). The model also predicts that the S will have the highest intercept; while the estimated intercept for S is significantly larger than B ( 2 (1) = 33.23, p < 0.001 ), T ( 2 (1) = 9.00, p = 0.003 ) and W ( 2 (1) = 10.21, p = 0.001 ), it is not statistically significantly different to that of N ( 2 (1) = 0.27, p = 0.600 ) or A ( 2 (1) = 0.57, p = 0.452 ); a joint test of equality of intercepts was highly significant ( 2 (5) = 96.10, p < 0.001 ). The second highest intercept is predicted by the model to be that of A ; while the estimated coefficient in A is indeed significantly larger than that of W , B , and T (all comparisons, 2 (1) ≥ 13.42, p < 0.001 ), it is not significantly different to that of N ( 2 (1) = 1.41, p = 0.235 ); a joint test of significance yielded a significant difference ( 2 (4) = 74.92, p < 0.001 ). Finally, the model predicts that the empirical bidding function in N will have a higher intercept than that of W , B and T . We find significant differences in estimated intercepts in all pairwise comparisons ( W ∶ 2 (1) = 4.12, p = 0.042 ; B ∶ 2 (1) = 17.04, p < 0.001 ; T ∶ 2 (1) = 4.52, p = 0.034 ); a joint test of equality of intercepts was highly significant ( 2 (4) = 67.96, p < 0.001). Wages We now turn to the effect of different disclosure rules on wages. Our third hypothesis stated that workers that win the first-stage auction should receive a higher wage than workers that lose it. Table 4 summarizes the estimates of a regression of wage offer Auctions with external incentives: experimental evidence in stage 2 on a series of treatment dummies interacted with a dummy for stage-1 auction winner. As before, we present the directly estimated coefficients for ease of exposition. Table 11 displays regression estimates of the dummy coefficients. Result 3: Workers who win the first-stage auction receive higher wages than workers who do not win the first-stage auction. Support: Winning workers earn substantially higher wages than losers, as demonstrated by We now turn to Hypothesis 4, which pertained to the shape of the wage function for the two workers. Table 5 summarizes the results from estimations of the wage equations for the winning worker and losing worker in each treatment as a function of the relevant stage-one auction variables. Result 4: (a) The estimated wage functions for the winning worker in T and A have similar slopes to those of the losing workers' but higher intercepts. (b) In disclosure rule W , the winning worker's wage is more responsive than the losing worker's wage to the winning worker's bid. (c) In disclosure rule S the losing worker's wage is not significantly more responsive than the winning worker's wage to the losing worker's bid. (d) Average wages are higher for the winning worker than for the losing worker in the N treatment. Support: (a) In the T treatment, the estimated intercept for the winning worker's wage function is larger but not significantly different than that of the losing worker's ( F(1, 2) = 5.53, p = 0.143 ); the slope coefficient of the losing worker's wage function is marginally significantly different to the winning worker's ( F(1, 2) = 10.46, p = 0.084 ). A joint test of equality of slope and intercept is only marginally significant ( F(1, 2) = 10.94, p = 0.084 ). 
In the A treatment, the comparisons of interest is on the coefficient on the winning worker's own bid versus the coefficient on the losing workers' bid, as well as both workers' intercept coefficients. The former test yields a non statistically significant difference ( F(1, 2) = 0.09, p = 0.794 ), but the latter yields a statistically significant difference ( F(1, 2) = 29.15, p = 0.033 ). (b) In the W treatment, the estimated slope coefficients are statistically significantly different ( F(1, 2) = 104.56, p = 0.009 ). (c) In the S treatment, the estimated slope coefficients are not statistically significantly different ( F(1, 2) = 1.63, p = 0.330 ). (d) In the N , we observe a significant difference in average wages ( 2 (1) = 105.26, p = 0.009). We conclude this subsection with the fifth hypothesis, which stated that firms should make zero profits in all disclosure conditions. Table 6 summarizes estimated average profits across all relevant conditions from a GLS regression of firm profits on treatment dummies. One interesting feature of our data is that firms are not making zero profits, as theory predicts. Empirical data on Bertrand duopolies (Dufwenberg and Gneezy 2000;Fonseca and Normann 2012) suggests that subjects may not reach the Bertrand-Nash equilibrium, particularly when the number of players is low, as is the case in our paper. In the case of our specific experiment, there is a potential additional reason why, on average, profits end up being positive: firms may form the wrong beliefs about workers' private values. That explanation should be valid if profits are significantly higher in the limited disclosure treatments than in T . In fact, we see mixed evidence in favor of that explanation. There is no statistically significant difference in profits between T and S ( 2 (1) = 0.39, p = 0.534 ), or A ( 2 (1) = 0.28, p = 0.599 ) but, firms make higher profits on average in W ( 2 (1) = 37.12, p < 0.001 ). and N ( 2 (1) = 5.18, p = 0.023). The role of interim beliefs The above results apply when we average over the valuations, so that it is worth considering whether matters change for particular valuations. Figure 2 displays estimates of firm profits conditional on the private value of the hired worker and its square value for each of our treatments where there was an aftermarket. In T , firm profits are consistently close to zero, as predicted. In contrast, in the limited disclosure treatments, firms consistently make losses on workers whose private values were low, but make reasonably large profits when private values are high. The most extreme case is the N treatment, while the other treatments lie somewhere in between. This suggests that firms were also unable to correctly infer worker types from their bidding behavior in stage 1 at either end of the distribution of private values. The only way to be sure incorrect beliefs were at the heart of incorrect bidding for particular types, as opposed to alternative explanations such as decision error, we needed explicit data on beliefs. However, we did not include belief elicitation in our original design, as we felt it would be detrimental to decision quality in an already complex environment. In order to understand to what extent bidding behavior in the first-stage auction lead to correct beliefs about valuations, we conducted an additional experiment. The purpose of this experiment is to see to what extent firms offer or accept wages using correct information. 
We recruited a separate sample of subjects from the same subject pool as the main treatments. Upon arrival to the laboratory, subjects sat in individual computer booths, and verbal communication was not allowed at any time. Subjects were told they were taking part in an individual choice experiment. The instructions told subjects they would observe a sequence of bids placed by subjects in the firststage of the earlier A treatment. Their task was to guess the private values of each of the two workers in each period. The instructions included a copy of the instruction set used in the A treatment. 15 See the Appendix for the instructions for this treatment. To make the guesses incentive-compatible, we employed a quadratic scoring rule of the form: Quadratic scoring rules are incentive-compatible if subjects are risk-neutral, expected utility maximizers (Offerman et al. 2009). Those assumptions are the ones made by our theoretical model, hence this belief elicitation method is suitable to our set up. We conducted two sessions with 18 participants in each session. This means that two subjects in the belief elicitation sessions observed the same sequence of bids-this allowed us to test for the consistency of guesses. Indeed, beliefs were very consistent: the average Spearman correlation for pairs of "guessing participants" was 0.84, with only two cases having a Spearman correlation below 0.75. We are therefore confident that the elicited beliefs are reliable measures of the predicted workers' private values. 10,000 − (true value − predicted value) 2 15 Of course, these subjects did not observe any outcomes from the second stage. 3 Auctions with external incentives: experimental evidence Note that the theoretical bidding function here is given by 1 2 x i + 40 , which implies that we should not observe bids below 40 or above 90. Any such bids are off-theequilibrium path behavior and, as required by the equilibrium under consideration, any bids below 40 should lead to beliefs that put probability one on type zero while any beliefs above 90 should lead to beliefs that put probability one on type 100. In our treatment, 19% (68%) of bids by the winning (losing) worker were below 40, but only 1% (0%) of bids by the winning (losing) worker were above 90. Table 7 presents estimation results of elicited beliefs as a function of observed bids by the winning and losing worker in the first-stage auction from the A sessions. To capture the piecewise linear nature of the predicted belief function, that is we regressed the belief about worker value on b ij with dummies for the case when b ij > 90 and b ij < 40 , and their interactions with b ij . In addition, we ran a linear specification. We consider three samples: the beliefs on the winning worker value, the losing worker, and the pooled data (since the belief function should be the same for both workers). It is important to note that the linear function performs equally well as the piecewise model, as can be noted by the R 2 for both models (regardless of whether we look at the winning worker, losing worker or the pooled data). Figure 3 (left) plots the theoretical belief function for both winning and losing workers in the A treatment; it also plots the two estimated belief functions: the piecewise linear model that corresponds to the theoretical model and a simple linear specification, as per Table 7. 
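A short sketch of the scoring rule may be useful here. The implementation below is ours (only the 10,000 − (true − predicted)² payoff is from the paper); the simulation illustrates why the rule is incentive-compatible for a risk-neutral expected-utility maximizer — the expected score is 10,000 − E[(X − g)²] = 10,000 − Var(X) − (E[X] − g)², which is maximized by guessing the mean of one's belief.

```python
import numpy as np

def quadratic_score(true_value, predicted_value):
    """Points earned under the paper's quadratic scoring rule."""
    return 10_000 - (true_value - predicted_value) ** 2

# Any belief over the worker's value works; here an arbitrary U[30, 70].
rng = np.random.default_rng(1)
belief_draws = rng.uniform(30, 70, size=100_000)

guesses = np.linspace(0, 100, 101)
expected = [quadratic_score(belief_draws, g).mean() for g in guesses]
print(guesses[int(np.argmax(expected))])  # ~50, the mean of the belief
```

With beliefs elicited this way, the estimated belief functions in Figure 3 can be read directly as subjects' posterior means given the observed bids.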
Two aspects are noteworthy: firstly, both empirical belief functions are practically indistinguishable, which suggests that the more parsimonious specification is the better model for the belief data. Secondly, both empirical belief functions differ substantially from the theoretical belief function for observed bids below 40off-equilibrium behavior. This suggests that firms in the main experiment expected workers with low private values to overbid less than suggested by the theory (which, if anticipated by workers, then removed their strategic incentive to do so). This leads us to ask: were firms able to predict bidding behavior well? Figure 3 (right) plots the inverted belief function (linear specification) against the empirical bidding function from workers in the A treatment. The empirical bidding function (blue) is slightly flatter than the inverted belief function and it has a higher intercept. In the discussion of Result 5 at the beginning of this subsection, we highlighted how firms choose wages that produce profits closer to theoretical predictions for intermediate valuations and farther from those predictions for extreme valuations. In that context we suggested that it may be difficult for the aftermarket to predict bidding behavior in these circumstances. Figure 3 (right) confirms that suggestion: firms may have underestimated both the degree to which workers bid in excess of their private value when the draw was low, as well as the extent to which they bid below their private value when their draw was high. Figure 3 (bottom) further confirms this conjecture; it plots the prediction error for each belief elicitation data point against the true private value. Most predictions are quite accurate: the median absolute error in predicted values was 6.75 ECU, and 75% of observations had an absolute error lower than 13.2 ECU. However, the most extreme cases of prediction error are at the extremes of the domain of private values. We verify this by running a regression of the absolute error in prediction on private value and private value squared. We obtain a constant coefficient of 10.86 (robust s.e. 0.63), a coefficient of −0.26 (robust s.e. 0.03) on private value and a coefficient of 0.004 (robust s.e. 0.0004) on the square of private value ( N = 2520, R 2 = 0.16 ). Thus, the highest errors in prediction come from the extremes of the private value distribution. Discussion The results of the experiment partly confirm the theoretical predictions of the model. In particular, comparing the bidding behavior in the B and T treatments with the other treatments suggests that workers realize that they are participating in a signaling game and (over)bid accordingly. In particular, the bidding behavior of workers in the A , W and N treatments suggests that individuals grasped the basic incentives. When queried in a non-incentivized, open ended questionnaire about how they made their bidding decisions, 48% of workers reported bidding higher than their private value when the realization was 'low', and bidding below their private value when the realization was 'high'. Just under 50% of workers in these treatments explained their bidding behavior as a function of second-stage outcomes, either as responding to expected wage offers, or even explaining their behavior as signaling a particular type. However, subjects also stated reducing their overbidding in periods where they drew a higher private value. 
This is a plausible explanation as to why the treatment effects are weaker than what theory would predict. 16 In the S treatment, bidding behavior was not what we would expect from the theory. This is not entirely surprising, as this is certainly the most difficult disclosure rule to think about, for two main reasons. The first is that the disclosure rule itself may be difficult to interpret since we have a first-price auction, but only the losers' bid is disclosed. The second reason is that in the symmetric and monotone equilibrium, workers with low valuations face a trade-off between bidding low given their valuation of the object and bidding high for signaling purposes (as their valuation is likely to be revealed if they do not overbid), while workers with high valuations face the reverse trade-off. In other words, in the S treatment workers face a trade-off between their standard auction incentives and their signaling incentives, and equilibrium bidding behavior requires them to finely balance these opposing incentives. This does not happen with the other disclosure rules where the incentives to overbid are (weakly) monotonic in the valuations: in the A and N treatments, these incentives are the same regardless of the valuation whereas in the W treatment, the higher the valuation, the higher the incentive to overbid. All of this implies that bidding in the S treatment is significantly more complex than in the other treatments. As such, it should not be too surprising that even if the incentives are correctly understood, their implementation in the bidding process may fail. Regarding wage-setting in the second stage, the model's predictions are also broadly confirmed, with S data again diverging the most from theoretical predictions. It is particularly interesting to note that bidders seem to understand that if a worker's bid is disclosed, that should have a greater impact on this worker's wage than on the wage of a worker whose bid is not disclosed. Over 53% of firm subjects stated in the post-experimental questionnaire that they had discriminated between the winning and the losing worker in their wage offers across all treatments. In the W and S that proportion was 92% and 67%, respectively. Notably, very few firm subjects expressed concerns about over-or under-bidding by workers (8% and 12%) respectively-most observations of both types of concerns came from firms in the S treatment, which suggests this was the treatment in which behavior was most noisy. A substantial number of firm subjects reported applying a differential mark-down based on the observed bids (e.g. "As a Firm I decided to bid below 40% of the workers bid price if it was winner. If it was a loser then I bid at 20% or less. My main aim was to offer the lowest price possible for a worker, but still trying to beat the other firm. I had base my bids off past bids other firms made.") Risk aversion has been often raised as a possible explanation for over-bidding in private value auctions in the lab (c.f. Cox et al. 1985Cox et al. , 1988Smith and Walker 1993;Goeree and Holt 2002). We feel that this is an implausible reason for deviations from our theoretical benchmark which explicitly assumes risk neutrality. The main basis for our position is theoretical in nature. Arrow (1971) and Rabin (2000) show that risk aversion as postulated by expected utility theory must imply risk neutral behavior for small stakes-such as those typically used in laboratory experiments. 
As such, we feel that the traditional definition of risk aversion, which relies on the concavity of one's utility function, is not a good candidate for the deviations. We conclude our discussion by revisiting the data on beliefs. The estimated belief function was qualitatively similar to predictions for the range of bids one would expect to observe in equilibrium, but it differed dramatically for ranges of bids that are outof-equilibrium. In particular, subjects seemed to discount the incentive to overbid by low types: recall that in theory such incentive should be such that no subject would bid below 40 in equilibrium. This may have removed the incentive by workers to overbid to the extent predicted by theory when drawing a low private value. These results also confirm that the explanation for excess profits in our treatments may be partly attributable to firms not being able to correctly guess valuations, when they are extreme, from bidding behavior. Conclusions This paper is the first to present an experimental analysis of signaling through auctions and does so in a context where bidders' valuations in a first-stage auction are positively correlated with the expected returns in an aftermarket. To do so, we adopt the framework in Giovannoni and Makris (2014) which allows for easily testable implications. We interpret the aftermarket as a labor market where the first-stage bidders (workers) have a productivity that is a linear function of their first-stage valuations. We model the labor aftermarket in such a way that in equilibrium workers take all the surplus (as expected by firms) and equilibrium wages are therefore equal to what the firms believe to be their expected productivity. The theory predicts overbidding in the firststage auction under limited disclosure and, more interestingly, that different disclosure rules about what is known about bidding at the end of the first-stage auction influence bidding in the first-stage auction and wages in the second-stage auction. We test the theoretical predictions in an experiment, focusing on differences across disclosure rules. The results show that bidders understand the possibility of signaling their productivity through their bids and, further, that they also understand some of the qualitative differences in the signaling opportunities that arise from the various disclosure rules. Several predictions about wages in the aftermarket are also confirmed. However, our results also show that bidder behavior differs sufficiently from theoretical predictions to invalidate the predicted effect of different disclosure rules on revenue and firm profits. In particular, there is not as much overbidding for low valuation types as one would predict, although our belief elicitation analysis also shows that this behavior is to some degree (but not completely) anticipated from firms. From a revenue efficiency perspective, theory predicted that disclosing the losing bid would generate the highest level of revenue alongside the treatment where both bids are revealed. In contrast, our results suggest that revealing the winning bid can be, for the seller, equally beneficial to revealing all the bids and strictly better than revealing the losing bid. Our data suggests that the losing-bid disclosure rule should be used in practice with caution. It is the most cognitive demanding environment as argued earlier, and it leads to less consistent behavior relative to theoretical benchmarks. 
where we emphasize that this is an expectation with respect to j's valuation, conditional on equilibrium play from her. We define an effective valuation as

We call these effective valuations because they capture all that is at stake for individual i in the first-stage auction (assuming equilibrium play in the second stage). Specifically, we have the direct utility from winning the auction, x_i, but also the reputational returns that i can expect in the second-stage auction as a function of her valuation. The component v − v− captures the net reputational gain to the bidder from winning the auction, while the remaining term in the square brackets captures the additional reputational net gain from marginally increasing the announcement. This definition allows us to state, following on from Giovannoni and Makris (2014), the following proposition:

Proposition A. Given assumption A, equilibrium bidding functions β of the first-price sealed-bid auction are given by

Proof. Define and note that is the expected utility of valuation x_i from bidding β(z_i). Suppose now that a symmetric and strictly increasing equilibrium β exists. Note then that in such an equilibrium the expected profit of a bidder with valuation x_i from bidding b_i ≥ 0 is as given. Moreover, β being strictly increasing, it is almost everywhere differentiable. The first-order condition (FOC) for a maximum of the expected profit of a bidder with valuation x_i is (except at points of non-differentiability of β(.)) as stated. So, if β is a symmetric and strictly increasing equilibrium, then it must be that b_i = β(x_i), with β(x_i) > 0 for any x_i > 0, and hence almost everywhere in x ∈ (0, 100]. One can easily see that if β is a symmetric and strictly increasing equilibrium, then it must be continuous: if x̄ were a jump point, then bidding lim_{x→x̄−} β(x) would be preferred to bidding lim_{x→x̄+} β(x) by a bidder of valuation x̄ (resp. x̄ + ε, where ε is arbitrarily small) when lim_{x→x̄+} β(x) = β(x̄) (resp. when lim_{x→x̄−} β(x) = β(x̄)); such a deviation does not have an effect on the auction's outcome and the reputational return, but leads to a lower price upon winning. Note also that in any symmetric and strictly increasing equilibrium, β(0)·0 = 0. Continuity of β, and hence of β(x_i)x_i, implies, therefore, that the differential equation with the boundary condition β(0)·0 = 0 has a unique solution, for any x ∈ [0, 100], given by the proposed equilibrium. It thus remains to show that β is indeed an equilibrium. To this end, note first that, given that competitors deploy β, any bidder is indifferent over any bid weakly lower than β(0). Also, any bidder strictly prefers β(100) to any higher bid. We have that the relevant profit difference is ≥ 0 for any z_i = x_i ± ε, where ε is arbitrarily small. This holds as an equality, by continuity, for any x ∈ [0, 100], given that competitors deploy β. Thus, β is an equilibrium. ◻

A.2 Bidding and wage functions

We calculate bidding and wage functions for all our treatments. Define the relevant auxiliary function; then it is easy to see that, since for all disclosure rules other than T it must be that x = β^(−1)(b), we have the wages described in the main text.

A.3 Instruction set

We reproduce the instructions used in our experiment. All instructions included screenshots of the different stages of the experiment to facilitate comprehension of the experimental environment. The instructions for the different treatments differed only in a key sentence and screenshot on the third page, as well as in how the information was revealed in the examples at the end.
The key sentence was always underlined in every version shown to subjects. To minimize the number of pages, we reproduce the different key page-3 sentences below:

A: "Firms only know the bids of both workers and the identity of the winner and loser."
W: "Firms only know the winning bid and the identity of the winner."
S: "Firms only know the losing bid and the identity of the loser."
T: "Firms will know what the value of the stage 1 prize to either worker is."
N: "Firms only know the identity of the winner and loser."

The instructions for the B treatment did not include (or make a reference to) Stage 2 or firms. The examples had the same private values and bids, but no Stage 2 market, and payoffs in the examples were adjusted accordingly. We present the instructions used for the A treatment, as well as the instructions for the belief elicitation treatment. The instruction set for the latter treatment incorporated the instructions for the original treatment; we will not reproduce that section for the sake of brevity.

Instruction set

Welcome to our experiment. Please read these instructions carefully, as your payment will depend on your decisions, as well as the decisions of other people in the room. Your payoffs during the experiment will be denominated in Experimental Currency Units (ECU). Once the experiment is finished, we will convert your payoff from ECU to pounds and pay you in cash. 3 ECU are worth £1. You will also receive £10 for participating. This experiment will be divided into 5 practice rounds and 35 real rounds. The experiment you are about to participate in will involve four people: two workers (worker A and worker B) and two firms (firm 1 and firm 2). Your role in this experiment will be that of a firm. You will retain that role throughout the experiment. At the beginning of each round, the computer will randomly match two workers and two firms. It is possible, though very unlikely, that you will be paired with the same three people in consecutive rounds. All rounds of the experiment will work in the same way. We will now describe the way in which each round of the experiment works. Each round of the experiment is divided into two stages. We now describe each stage in turn.

Stage 1

Workers bid for the prize in an auction. Workers can bid any amount between 0.00 and 100.00 (maximum of two decimal places). The winner of the auction is the person who bids the highest amount. If there is a tie, the computer will flip a virtual coin to decide which worker wins the auction. When bidding, each worker will know his own value of the prize, but each worker will not know the other worker's value. The value of the prize to either worker is determined randomly by the computer. This value is a number between 0.01 and 100.00 (in increments of 0.01), where each number is equally likely to be drawn. The payoff to the winner of the auction will be equal to the value of the prize minus the bid. The payoff to the loser of the auction will be zero.

Stage 2

In Stage 2, the firms will enter an auction to hire the workers. They will be told of all the bids in Stage 1's auction and the corresponding bidders. They will offer wages to both the winner and the loser of the Stage 1 auction. The firm that makes the highest wage offer to a given worker will hire that worker and pay him/her that wage. It is possible for a firm to hire both workers, if that firm makes the highest wage bid for both workers.
If a firm hires a worker, that firm will gain a payoff equal to 40% of the value that worker assigns to the Stage 1 prize. For example, if a worker values the prize at 30 ECU, that worker is worth 12 ECU to a firm. However, the firms do not know what the value of the Stage 1 prize to either worker is. Firms only know the bids of both workers and the identity of the winner and loser. The two firms must make a separate wage bid for the winner and the loser of the first-stage auction. The firm that makes the highest bid for a given worker will hire that worker and pay the worker that wage. The firm that makes the lowest bid for a given worker will not hire that worker. In case of a tie, the computer will flip a virtual coin to decide which firm hires that worker. At the end of the round, firms and workers will receive information about the outcome of the auctions and their final payoffs. The following is the final payoff screen for a worker at the end of a period. And the following is the payoff screen for a firm at the end of a period.

To fix ideas, consider the following examples.

Example 1: In Stage 1, worker A has a value of 65 and worker B has a value of 70. Worker A bids 80 and worker B bids 62. Worker A wins the auction and pays 80, thus having a net payoff of 65 − 80 = −15. Worker B lost the auction and has a payoff of 0. In Stage 2, both firms observe the two bids and the corresponding bidders. That is, they know worker A bid 80 and worker B bid 62. Firm 1 makes a bid of 43 for worker A and a bid of 24 for worker B. Firm 2 bids 30 for worker A and bids 25 for worker B. Firm 1 had the highest bid for worker A, therefore it hires worker A. Firm 2 made the highest bid for worker B, therefore it hires worker B. The final payoffs for the four players are:
• Worker A gets a payoff of −15 from Stage 1 and a payoff of 43 from Stage 2, for a total of 28.
• Worker B gets a payoff of 0 from Stage 1 and a payoff of 25 from Stage 2, for a total of 25.
• Firm 1 gets a payoff of (0.40 × 65) − 43 = −17.
• Firm 2 gets a payoff of (0.40 × 70) − 25 = 3.

Example 2: In Stage 1, worker A has a value of 54 and worker B has a value of 17. Worker A bids 46 and worker B bids 10. Worker A wins the auction and pays 46, thus having a net payoff of 54 − 46 = 8. Worker B lost the auction and has a payoff of 0. In Stage 2, both firms observe the two bids and the corresponding bidders. That is, they know worker A bid 46 and worker B bid 10. Firm 1 makes a bid of 15 for worker A and a bid of 6 for worker B. Firm 2 bids 12 for worker A and bids 5 for worker B. Firm 1 had the highest bid for worker A, therefore it hires worker A. Firm 1 also made the highest bid for worker B, therefore it hires worker B. The final payoffs for the four players are:
• Worker A gets a payoff of 8 from Stage 1 and a payoff of 15 from Stage 2, for a total of 23.
• Worker B gets a payoff of 0 from Stage 1 and a payoff of 6 from Stage 2, for a total of 6.
• Firm 1 gets a payoff of (0.40 × 54) − 15 + (0.40 × 17) − 6 = 6.6 + 0.8 = 7.4.
• Firm 2 gets a payoff of 0.

Example 3: In Stage 1, worker A has a value of 33 and worker B has a value of 48. Worker A bids 46 and worker B bids 55. Worker B wins the auction and pays 55, thus having a net payoff of 48 − 55 = −7. Worker A lost the auction and has a payoff of 0. In Stage 2, both firms observe the two bids and the corresponding bidders. That is, they know worker A bid 46 and worker B bid 55.
Firm 1 makes a bid of 5 for worker A and a bid of 6 for worker B. Firm 2 bids 4 for worker A and bids 7 for worker B. Firm 1 had the highest bid for worker A, therefore it hires worker A. Firm 2 made the highest bid for worker B, therefore it hires worker B. The final payoffs for the four players are:
• Worker A gets a payoff of 0 from Stage 1 and a payoff of 5 from Stage 2, for a total of 5.
• Worker B gets a payoff of −7 from Stage 1 and a payoff of 7 from Stage 2, for a total of 0.
• Firm 1 gets a payoff of (0.40 × 33) − 5 = 8.2.
• Firm 2 gets a payoff of (0.40 × 48) − 7 = 12.2.

Your payment for this experiment will be the sum of payments from three rounds, which the computer will draw at random. Each round is equally likely to be chosen.

Instruction set

Welcome to our experiment. Please read these instructions carefully, as your payment will depend on your decisions. Your task in this experiment will be to make predictions on the basis of choices made by people who took part in an earlier experiment. You will be paid for the accuracy of your predictions: the more accurate your guesses are, the more money you will earn. Before we explain what you will have to predict, we would like you to read the instructions that the people who took part in the original experiment read. Those instructions follow this page. They are printed on yellow paper. Please take 10 minutes to read the original instructions. The instructions for your task follow at the end. In the present experiment, you will be playing the role of the firm. However, you will not be interacting with anyone in the room. In each period, you will see the bids from the Stage 1 auction from participants who played the role of worker in an experimental session that took place sometime between December 2013 and January 2014. These participants are not present in this room. In each period, you will have to make two predictions: one for the value of the prize to the winning worker, and another for the value of the prize to the losing worker. You will be able to see the bids that each worker made in Stage 1 of that round before you make your predictions. The closer your prediction is to the actual value of the prize for a given worker, the more money you will earn. Your payoff from each prediction is calculated based on the following formula:

Prediction payoff (in tokens) = 10,000 − (true value − predicted value)²

For example, suppose that you guessed that, in a given period, the value to a worker was 35. At the end of each period, you will receive feedback about what the actual value of the prize was for both workers, and your payoff for that period. You will see the same sequence of bids that an actual participant playing the role of a firm saw over the course of the 35 experimental periods. Once you make predictions for each of the 35 periods in the original experiment, the computer will select three periods at random, which will be the basis for your payment. This means you will be paid on the basis of six predictions. 10,000 tokens = £1. You will also earn £4 for participating.

Table 8 displays the chi-squared values and p values from the revenue comparisons related to the estimates from specification (1) in Table 2. Table 9 displays GLS regression estimates of the bid placed by worker i in experimental period t, b_{i,t}, on that worker's private value in that period, v_{i,t}, and dummy interactions with the relevant treatments and their corresponding dummy intercepts; the omitted treatment is B.
The first specification only includes treatment dummies and their interactions with v_{i,t}. The second specification also includes a linear time trend. The estimated coefficients are unchanged, but we do detect a significant time trend. We rely on the first specification for hypothesis-testing purposes, as we test differences in intercepts, and the presence of the time trend would alter the economic interpretation of the intercept.

A.4 Supplementary analysis

Our main interest, as explained in the text, is in the comparative static effects of disclosure rules. For the sake of completeness, we include the tests for the point predictions of the model. We find that the empirical bidding functions are significantly different from their theoretical counterparts; in particular, the intercepts of the estimated bidding functions from the limited disclosure treatments are significantly smaller than predicted. We start by looking at the B treatment, which is the omitted category in the regression results. A joint test of equality of the intercept and slope to the benchmark predictions (0 and 0.5, respectively) yielded statistically significant differences (χ²(2) = 1769.03, p < 0.001). Given that the estimated constant is not significantly different from zero (χ²(2) = 0.91, p = 0.340), the likely source of the difference is the estimated slope (χ²(2) = 126.10, p < 0.001). We also find that the estimated bidding function in W is significantly different from predictions (χ²(2) = 17815.69, p < 0.001). The critical difference appears to come from the estimated intercept (3.49, χ²(2) = 58.87, p < 0.001), which is significantly different from the point prediction, although the estimated slope is quite close to prediction (0.8, χ²(2) = 2.40, p = 0.121). In the case of S, the empirical bidding function is different from prediction (χ²(2) = 2067.43, p < 0.001), but here the qualitative nature of the difference is rather different: the empirical bidding function is steeper than predicted (0.67 vs. 0.2, χ²(2) = 1693.49, p < 0.001), but the estimated intercept is much smaller than what theory predicts (7.71 vs. 60, χ²(2) = 1776.79, p < 0.001). In other words, workers are not overbidding by as much as they should when they get a bad private valuation for the Stage 1 good. The same is true for A: the joint test of equality of the estimated slope and intercept to the predicted values rejects the null at highly significant levels (χ²(2) = 7080.02, p < 0.001). This is driven by a large difference between the predicted and estimated intercept (40 vs. 9.08, χ²(1) = 535.62, p < 0.001), as well as a large difference between the predicted and estimated slope (0.5 vs. 0.71, χ²(1) = 30.15, p < 0.001). Table 10 reports the test statistics and p values for all slope, intercept, and joint test comparisons across all treatments. Table 11 displays the GLS regression estimates of wages as a function of winning the Stage 1 auction. We report a specification with and without a time trend. T is the omitted category; Winner equals one if worker i won the Stage 1 auction in period t.
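For orientation, the point predictions tested above for the B treatment (intercept 0, slope 0.5) correspond to the textbook risk-neutral benchmark for a two-bidder first-price auction with uniform private values; the following is a sketch of that benchmark, not a formula quoted from the paper:

```latex
% Two bidders, private values drawn uniformly from (0, 100].
% The symmetric risk-neutral equilibrium bid is the expectation of
% the rival's value conditional on it being lower:
\[
  \beta(x) \;=\; \mathbb{E}\left[\, y \mid y < x \,\right]
           \;=\; \frac{1}{F(x)} \int_{0}^{x} y \, f(y)\, dy
           \;=\; \frac{x}{2},
  \qquad F(y) = \frac{y}{100},
\]
% i.e. a bidding function with intercept 0 and slope 0.5.
```

And a hedged sketch, in Python with statsmodels, of the kind of regression and joint test described above; the column names (`bid`, `value`, `treatment`, `subject`) and the input file are assumptions of mine, not the authors' code:

```python
# Illustrative sketch of Table 9's specification: bids regressed on private
# values, treatment dummies, and their interactions, with B as the omitted
# treatment, followed by the joint test of the B point predictions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bids.csv")  # hypothetical file: one row per bid

model = smf.ols("bid ~ value * C(treatment, Treatment(reference='B'))", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject"]})
print(res.summary())

# Joint Wald test of intercept = 0 and slope = 0.5 for the B treatment.
print(res.wald_test("Intercept = 0, value = 0.5"))
```

Note that the paper reports GLS estimates; plain OLS with clustered standard errors is used here only to keep the sketch self-contained.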
2018-12-30T14:36:07.064Z
2020-08-03T00:00:00.000
{ "year": 2020, "sha1": "680cb190402f1712f511b97b9d3d02d8e292dfc0", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00182-020-00725-1.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "f54c24846817f79683c67a8065a23328044cfbe1", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Computer Science", "Business" ] }
52831049
pes2o/s2orc
v3-fos-license
Pancreatic cancer takes its Toll

Development of pancreatic ductal adenocarcinoma (PDAC) is known to be driven by a persistent inflammatory state, as oncogenic mutations alone are not sufficient for tumorigenesis. Toll-like receptors (TLRs), the pillars of the innate immune system, are highly expressed in the tumor microenvironment and on circulating leukocytes in PDAC. Contrasting results have reported antitumorigenic effects or induction of intrapancreatic inflammation and tumor progression upon ligation of different members of this receptor family. TLR9 in particular is expressed in both tumor and tumor-related cells, and its activation has been shown to impair tumor cell proliferation, suggesting that TLR9 agonists might be useful as adjuvant therapy. In this issue, Zambirinis et al. further examined the specific role of TLR9 signaling in PDAC. Gain- and loss-of-function experiments in a mouse model of PDAC showed that TLR9 activation is oncogenic in PDAC. Interestingly, these effects are only partially explained by activation of TLR9 signaling in the tumor cells themselves. Unexpectedly, the authors found expression of TLR9 on pancreatic stellate cells (PSCs; myofibroblast-like cells in the pancreas) and showed that TLR9 activation in these cells results in the production of the CCL3 and CCL11 chemokines. They show for the first time that CCL11 can promote the proliferation of pancreatic cancer cells in a dose-dependent manner. Activation of TLR9 in PSCs also leads to the recruitment of immunosuppressive regulatory T cells (T regs) that, together with TLR9-activated myeloid-derived suppressor cells (MDSCs), favor the generation of an immunosuppressive microenvironment. Perhaps one of the most important aspects of the crosstalk between tumor cells and their microenvironment is the inhibition of anticancer immune responses. This new study provides compelling evidence that, in addition to the adaptive immune system, specific molecules in the innate immune system, such as TLR9, can play crucial roles in cancer development. Importantly, the recent development of small-molecule agonists and antagonists of TLRs may offer a new strategy to inhibit cancer growth. It will be interesting to determine how such manipulations may be combined with other strategies to activate antitumor T cells. Another exciting aspect of this work is the prominent role of PSCs, providing further support to the notion that desmoplasia from fibroblasts plays a critical role in the recruitment and (in)activation of immune infiltrates in pancreatic cancer. Unquestionably, the molecular complexity of tumor, stroma, and immune cells will continue to engage the efforts of pancreatic cancer researchers for many years to come.

Tumors are aberrant organ systems that develop through co-option of developmental and wound response programs. Wnt signaling represents a central organizing force in organ development and healing, but constitutive Wnt activation initiates and maintains tumor growth. Development of molecular targeted therapies against Wnt has proven challenging because of the diversity of ligands, receptors, and effectors. In this issue, Ladang et al. discover that ELP3, an enzymatic component of Elongator, promotes Wnt-induced colon tumorigenesis and recovery from injury.
As its name implies, the Elongator complex critically regulates transcriptional elongation through RNA polymerase II, but Elongator also functions in translation through tRNA modification, cytoplasmic kinase signaling, and exocytosis. Leveraging previous studies of the oncogenic activity of Elongator, the authors found that Wnt induces ELP3 and that ELP3 levels are increased in colon cancers, a cancer commonly associated with Wnt dysregulation. Targeted disruption of ELP3 in the colon ablated Tuft cell generation, but no significant phenotype in colon organization or animal health was detectable at baseline. In contrast, loss of ELP3 severely attenuated tumor initiation and recovery from radiation injury, potentially through translational, not transcriptional, control of SOX9, a master organizer of the endodermal cellular hierarchy. The colon undergoes continuous renewal, with replacement of the colonic epithelium every seven days on average. Upon injury, like radiation, Wnt signaling is activated to accelerate self-renewal. The current study suggests that Elongator activity is necessary for injury responses in the colon, although future studies may define whether Elongator is sufficient for recovery of the colon and other tissues dependent on Wnt. Conceptually, activating Elongator could accelerate regeneration or improve the efficacy of cell-based therapies. As ELP3 is the catalytic subunit of the histone acetyltransferase Elongator complex, its activity may be amenable to disruption through pharmacologic antagonists. Targeting ELP3 in the colon had minimal detrimental effects, suggesting that ELP3 may be an attractive target in Wnt-related cancers, especially in the colon, where orally administered therapies could provide locoregional antagonism. Insight from Jeremy Rich.

(A) A healthy intestinal crypt is illustrated in the left panel. Upon inactivation of the Apc tumor suppressor gene ("Oncogenic hit"), Wnt signaling is constitutively activated and triggers adenoma development. Whereas the acetylase ELP3 is dispensable for the maintenance of normal stem cells and intestinal homeostasis, this factor is required for Wnt-dependent tumor initiation and induction of cancer stem cell self-renewal. (B) The inactivation of Apc triggers β-catenin stabilization and consequently drives the expression of Wnt target genes, such as SOX9 and ELP3. ELP3 chemically modifies tRNAs to promote SOX9 translation. As a result, a pool of Lgr5+/Dclk1+/Sox9+ cells efficiently drives Wnt-dependent tumor initiation in the intestine.

Both T cells and NK cells employ complex networks of transcriptional regulators to control their differentiation and functional prowess. In this issue, three studies report that the transcription factor ZEB2 is critical for generation and expansion of terminally differentiated effector cells, chiefly by working in partnership with T-bet. Several transcription factors have been shown to regulate CD8+ T cell differentiation during an immune response. T-bet has emerged as a driving force in the development of KLRG1hi terminal effector cells. Both Dominguez et al. and Omilusik et al. show here that ZEB2, a molecule not previously associated with lymphocyte differentiation, is strongly up-regulated in early KLRG1hi effector cells and supports their progression to a short-lived effector status, yet appears dispensable for generation of long-lived memory cells. T-bet is required for ZEB2 expression, and the factors coregulate many of the same genes.
However, the impact of ZEB2 deficiency could not be completely overcome by T-bet overexpression, and both cooperative and independent activities of these factors may be important. Interestingly, van Helden et al. demonstrate a parallel role for ZEB2 in permitting NK cell maturation. Zeb2 deficiency thwarted normal NK cell differentiation, migration into peripheral tissues, and control of melanoma growth. Again, many of these features echo the role of T-bet in NK cell development, and there was ample evidence of cooperation between the factors in controlling gene expression; however, ZEB2 was not completely subservient to T-bet, because ZEB2 could partially restore NK development in T-bet gene-deficient mice. These studies characterize ZEB2 as a novel player in the transcriptional control of lymphocyte differentiation, playing strikingly similar roles in CD8+ T cells and NK cells. Numerous phenotypic and functional changes accompany the differentiation of lymphocytes; a challenge has been to understand how these differentiation states relate to complex transcriptional networks within the cell and how generation of a stable subset is achieved. ZEB2 is downstream of T-bet, but cooperation between these factors is crucial for regulation of multiple genes; hence, ZEB2 may serve to reinforce the differentiation program initiated by T-bet expression. However, whereas both ZEB2 and T-bet inhibit expression of memory precursor-associated genes in CD8+ T cells, Dominguez et al. suggest that low-level ZEB2 expression is required for generation of the CD8+ T cell effector-memory subset, indicating a more nuanced role. Also, whereas ZEB2 does not appear necessary for acquisition of key effector functions (such as cytolysis and IFN-γ production) by terminally differentiated CD8+ T cells or mature NK cells, it will be important to see how this factor impacts pathogen control in various contexts. Deciphering the pathways that promote effector cell generation may help direct strategies for better vaccine approaches, and understanding the cooperative and independent roles of T-bet and Zeb2 could provide new targets for therapeutic intervention.

It takes Zeb2 to tango: Cooperation between T-bet and Zeb2 is essential for CD8+ T cell effector differentiation and NK cell development. Although increasing T-bet expression alone can mediate some of the necessary gene expression changes, coordination with Zeb2 (which is itself under T-bet transcriptional control) is required for full commitment and expansion of terminally differentiated CD8+ T cells and mature NK cells. Sara E. Hamilton and Stephen C. Jameson, Center for Immunology, University of Minnesota: hamil062@umn.edu and james024@umn.edu. Insight from Sara Hamilton (left) and Stephen Jameson (right).

How dominant mutations in presenilin (PSEN) cause early-onset familial Alzheimer's disease (FAD) has been debated since the discovery of such mutations 20 years ago. A study by Szaruga et al. in this issue of JEM now appears to provide a definitive answer. Presenilin is the catalytic subunit of γ-secretase, a protease that cuts the transmembrane domain of the amyloid precursor protein (APP) to produce the C terminus of the amyloid β-peptide (Aβ) that notoriously deposits in the Alzheimer brain. Some argue that reduction of presenilin's proteolytic activity (i.e., a loss-of-function effect) is responsible for the neurodegeneration caused by FAD mutations.
Others have shown that some mutations do not reduce proteolytic activity, but all increase the proportion of aggregation-prone 42-residue Aβ (Aβ42) to 40-residue Aβ (Aβ40; i.e., a gain of toxic function). Further complicating matters, γ-secretase initially cuts the APP substrate via an endopeptidase activity to produce Aβ48 and Aβ49 and release the corresponding APP intracellular domain (AICD). These long Aβ peptides are then sequentially trimmed via a carboxypeptidase function of γ-secretase along two primary pathways: Aβ49 → Aβ46 → Aβ43 → Aβ40 and Aβ48 → Aβ45 → Aβ42 → Aβ38.

To address the loss- versus gain-of-function question, Szaruga et al. examined γ-secretase proteolytic activity in samples from post-mortem human brains from 24 FAD mutation carriers, covering nine different PSEN mutations. The samples contained endogenous human γ-secretase complexes and, importantly, both wild-type and PSEN mutant complexes. Under these natural conditions associated with the human disease state, the production of AICD, a measure of γ-secretase endoprotease activity, was not significantly different from that seen in control non-AD brains. Thus, the presence of the wild-type PSEN allele apparently compensates for any loss of endoproteolytic activity from the mutant allele. In contrast, a clear reduction of carboxypeptidase activity, as measured by the ratio of Aβ38 to its precursor Aβ42, was seen for every mutation. These findings have implications for the mechanism of Alzheimer pathogenesis and for drug discovery. In considering γ-secretase as a therapeutic target, one should first know what specific functional alterations in the enzyme lead to disease, and that appears to be decreased carboxypeptidase activity. Therefore, a search for stimulators of this activity would make sense. Such compounds have already been identified, although they appear to stimulate only the Aβ42 → Aβ38 step, which would be insufficient if other long Aβ peptides are augmented in Alzheimer's and play pathogenic roles. As is so often the case, answering one key question leads to another. Szaruga, M., et al. 2015. J. Exp. Med. http://dx.doi.org/10.1084

Aβ is derived from its precursor protein APP by sequential proteolysis, first by β-secretase (not depicted) and then by γ-secretase, the latter hydrolyzing within the transmembrane (TM) domain. Initial cleavage occurs at the so-called ε site (indicated by the scissors), releasing the APP intracellular domain, or AICD (red intracellular piece), and leaving Aβ49 or Aβ48 fragments in the membrane. Aβ49 or Aβ48 fragments are successively cut by the carboxypeptidase-like activity of γ-secretase, increasing the probability of release from the plasma membrane to the extracellular medium. Both ε cleavage and carboxypeptidase TM trimming depend on PSEN, the catalytic subunit of γ-secretase. Pathogenic mutations in PSEN cause a qualitative shift in the Aβ production profile, increasing the proportion of released longer Aβ peptides, which are prone to aggregate and form the plaques observed in FAD. Michael S. Wolfe, Brigham and Women's Hospital and Harvard Medical School: mswolfe@partners.org. Insight from Michael Wolfe.

Cutting to the chase: How pathogenic mutations cause Alzheimer's
2017-07-06T07:17:00.026Z
2015-11-16T00:00:00.000
{ "year": 2015, "sha1": "3b9874315b6ee5f1e555e24c68e1ff2657109215", "oa_license": null, "oa_url": "https://rupress.org/jem/article-pdf/212/12/1990/1013509/jem_21212insights.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "41ef038edff8e6a27ff20d057aae437386ba2c81", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13855804
pes2o/s2orc
v3-fos-license
Fundamentals of Traffic Flow

From single vehicle data a number of new empirical results concerning the density-dependence of the velocity distribution and its moments, as well as the characteristics of their temporal fluctuations, have been determined. These are utilized for the specification of some fundamental relations of traffic flow and compared with existing traffic theories.

For prosperity in industrialized countries, efficient traffic systems are indispensable. However, due to an overall increase of mobility and transportation during the last years, the capacity of the road infrastructure has been reached. Some cities like Los Angeles and San Francisco already suffer from daily traffic collapses and their environmental consequences. About 20 percent more fuel consumption and air pollution is caused by impeded traffic and stop-and-go traffic. For the above mentioned reasons, several models for freeway traffic have been proposed, microscopic and macroscopic ones (for an overview cf. Ref. [1]). These are used for developing traffic optimization measures like on-ramp control, variable speed limits, or re-routing systems [1]. For such purposes, the best models must be selected and calibrated to empirical traffic relations. However, some relations are difficult to obtain, and the lack of available empirical data has caused some stagnation in traffic modeling. Further advances will require a close interplay between theoretical and empirical investigations [2]. On the one hand, empirical findings are necessary to test and calibrate the various traffic models. On the other hand, some hardly measurable quantities and relations can be reconstructed by means of theoretical relations. Therefore, a number of fundamental traffic relations will be presented in the following. Until now, little is known about the velocity distribution of vehicles, its variance, or its skewness. A similar thing holds for the functional form of the velocity-density relation or the variance-density relation at high densities. Empirical results have also been missing for the fluctuation characteristics of the density or average velocity. These gaps will be closed in the following. Although the data vary in detail from one freeway stretch to another, the essential conclusions are expected to be universal. In a recent paper [3] it has been shown that the traffic dynamics on neighboring lanes are strongly correlated. Therefore, it is possible to treat the total freeway cross section in an overall way. Consequently, we will only discuss the properties of the lane averages of macroscopic traffic quantities. The empirical relations have been evaluated from single vehicle data of the Dutch two-lane freeway A9 between Haarlem and Amsterdam (for a sketch cf. Fig. 1 in Ref. [3]). These data were detected by induction loops at discrete places x of the roadway and include the passage times t_α(x), velocities v_α(x), and lengths l_α(x) of the single vehicles α. Consequently, it was possible to calculate the number N(x, t) of vehicles which passed the cross section at place x during a time interval [t − T/2, t + T/2], the traffic flow, and the macroscopic velocity moments (see the reconstruction sketched below). Small values of T are connected with large statistical variations of the data, but large values can cause biased results for k ≥ 2 [3]. Values between 0.5 and 2 minutes seem to be the best compromise [1]. The vehicle densities ρ(x, t) were calculated via the theoretical flow relation. Other evaluation methods [4] are discussed in Ref. [1].
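The displayed definitions of the flow, the velocity moments, and the density did not survive in this copy; given the surrounding descriptions, they presumably took the following standard form (a reconstruction, not the paper's verbatim equations):

```latex
% N(x,t): vehicles passing place x during [t - T/2, t + T/2];
% v_alpha(x): velocity of vehicle alpha at place x.
\[
  Q(x,t) = \frac{N(x,t)}{T}, \qquad
  \langle v^{k} \rangle (x,t)
    = \frac{1}{N(x,t)} \sum_{\alpha=1}^{N(x,t)} \bigl[ v_{\alpha}(x) \bigr]^{k},
\]
% and the density via the theoretical flow relation Q = rho * V:
\[
  \rho(x,t) = \frac{Q(x,t)}{V(x,t)}, \qquad
  V(x,t) \equiv \langle v \rangle (x,t).
\]
```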
We start with the discussion of the grouped empirical velocity distribution P(v; x, t), which was obtained in the usual way as P(v_l; x, t) = n(x, v_l, t)/N(x, t). Here, n(x, v_l, t) denotes the number of vehicles which pass the cross section at x between times t − T/2 and t + T/2 with a velocity v ∈ [v_l − Δ/2, v_l + Δ/2). The class interval length was chosen as Δ = 5 km/h. In theoretical investigations, the velocity distribution P(v; x, t) has mostly been assumed to have the Gaussian form (5), P(v; x, t) ≈ exp{−[v − V(x, t)]²/[2Θ(x, t)]}/√(2πΘ(x, t)) [5-7]. Here, V(x, t) := ⟨v⟩ denotes the average velocity and Θ(x, t) := ⟨[v − V(x, t)]²⟩ the velocity variance. Assumption (5) has been made for two reasons: First, it allows one to derive approximate fluid-dynamic traffic equations from a gas-kinetic level of description [5-7]. Second, analytical results for the velocity distribution are not yet available, even for the stationary and spatially homogeneous case. Therefore the question is whether the Gaussian approximation is justified or not. Figure 1 gives a positive answer, at least for the average velocity distribution at small and medium densities. In particular, bimodal distributions are not observed [8]. An investigation of the temporal evolution of the velocity distribution is difficult due to the large statistical fluctuations (which come from the fact that only a few vehicles per velocity class pass the observed freeway cross section during the short time period T). Therefore, we will study a macroscopic (aggregated) quantity instead, namely the temporal variation of the skewness γ(x, t) := ⟨[v − V(x, t)]³⟩/[Θ(x, t)]^(3/2). This can be interpreted as a dimensionless measure of asymmetry (cf. Fig. 2). Figure 3 shows that the skewness mainly varies between −0.5 and 0.5. The deviation from 0 is neither systematic nor significant, so that the skewness is normally negligible. This indicates that even the time-dependent velocity distribution is approximately Gaussian-shaped [9]. Now it will be investigated how the average velocity V and the variance Θ depend on the vehicle density ρ (cf. Figs. 4 and 5). The problem is that the data for high vehicle densities are missing. However, for computer simulations of the traffic dynamics the corresponding functional relations need to be specified. This can be done by means of theoretical results. For the average velocity and variance on freeways with speed limits, recent gas-kinetic traffic models [6] imply implicit equilibrium relations (7) and (8) for V_e(ρ) and Θ_e(ρ) (indicated by a subscript "e"), if the skewness is neglected (cf. Fig. 3). Herein, V_0 denotes the average desired speed (or free speed), τ(ρ) is the effective density-dependent relaxation time of acceleration maneuvers, and p(ρ) means the probability of immediate overtaking. Moreover, ρ_max denotes the maximum vehicle density, T_r the reaction time, and A(ρ), with 0 ≤ A(ρ) ≪ 1, the relative individual velocity fluctuation during the time interval τ(ρ) [1,6]. According to relation (8), the equilibrium variance vanishes when the average velocity becomes zero. This consistency condition is not met by all traffic models (cf. Ref. [10]). In addition, we expect that the average velocity vanishes at the maximum vehicle density ρ_max. Therefore, in the limit ρ → ρ_max we must have the proportionality relation (9), the proportionality factor being V_0. Whereas the overtaking probability p(ρ) is expected to vanish for ρ → ρ_max, the relaxation time τ(ρ) and the fluctuation parameter A(ρ) are assumed to remain finite [11].
Therefore, the ansatz V_e(ρ) ∝ (1 − ρ/ρ_max)^β leads to β = 1 and to relation (10). This is a very interesting discovery, since many researchers believed that the average velocity approaches the ρ-axis horizontally. In addition, we find a corresponding relation for Θ_e(ρ). Our remaining task is to specify the parameters ρ_max and T_r. From other measurements it is known that ρ_max lies between 160 and 180 vehicles per kilometer and lane [12]. The reaction time T_r for expected events is at least 0.7 seconds [13]. A good fit of the data results for ρ_max = 160 vehicles/(km lane), T_r = 0.8 s (cf. Fig. 4). In addition, we can conclude from (7) that the velocity-density relation V_e(ρ) of a multi-lane freeway should start horizontally, since the probability of overtaking p(ρ) should approach the value 1 at very small densities ρ ≈ 0. However, it is not only possible to reconstruct the functional forms of the velocity-density relation V_e(ρ) and the variance-density relation Θ_e(ρ). From these we can also determine the density-dependence of the model functions A(ρ) and τ(ρ)[1 − p(ρ)] by means of the theoretical relations (7) and (8). The result for the diffusion strength A(ρ) is depicted in Figure 6. Finally, we will investigate the temporal fluctuations of the empirical vehicle density ρ(x, t). Until now, most related studies have presented theoretical or simulation results. It has been claimed that the power spectrum ρ̂(x, ν) of the density ρ(x, t) obeys a power law ρ̂(x, ν) ∝ ν^(−δ). For δ, the values 1.4 [14], 1.0 [15], or 1.8 [16] have been found. The empirical results in Figure 7 indicate that the exponent δ is 2.0 at small frequencies ν, and otherwise 0.0. Taking into account the logarithmic frequency scale, we can conclude that the power spectrum is flat for the most part of the frequency range. This corresponds to white noise. Analogous results are found for the power spectrum of the average velocity V(x, t) [1]. In summary, we found that the velocity distribution is approximately Gaussian and that its skewness is negligible. We were able to reconstruct the velocity-density relation V_e(ρ) and the variance-density relation Θ_e(ρ) by means of theoretical results. This allowed the determination of some density-dependent model parameters. The fluctuations of the vehicle density could be approximated by white noise, although a power law with exponent 2.0 was found at small frequencies. All these results are necessary for realistic traffic simulations.
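A hedged sketch of the kind of spectral check described above: estimating the power spectrum of an empirical density time series and fitting the low-frequency exponent δ. The input file, the one-minute sampling interval, and the choice of the lowest frequency decade are assumptions of mine:

```python
# Estimate the power spectrum of a density series and the exponent delta
# in P(nu) ~ nu^(-delta) over the lowest decade of frequencies.
import numpy as np
from scipy.signal import periodogram

rho = np.loadtxt("density_series.txt")  # hypothetical series, one value per minute
freqs, power = periodogram(rho - rho.mean(), fs=1.0 / 60.0)  # fs in Hz

nu_min = freqs[freqs > 0].min()
low = (freqs > 0) & (freqs < 10.0 * nu_min)  # lowest frequency decade
slope, _ = np.polyfit(np.log(freqs[low]), np.log(power[low]), 1)
print(f"estimated low-frequency exponent delta = {-slope:.2f}")
# A flat spectrum (delta near 0) over most of the range corresponds to white noise.
```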
2014-10-01T00:00:00.000Z
1997-03-01T00:00:00.000
{ "year": 1998, "sha1": "ac4035fae6d9c9c2fe27da542e2e5dd29bf29256", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9806080", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e5bdcf1505e89edebeafcb37d5b9f866ed1fc366", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
264906653
pes2o/s2orc
v3-fos-license
The Role of Extraversion Personality on Posttraumatic Growth in Victims With Disabilities Due to Earthquakes

The Yogyakarta earthquake in 2006 left many problems, especially for those left with disabilities in the aftermath of the earthquake. Posttraumatic growth is an individual's positive interpretation of negative experiences or feelings. The personality type that is often associated with stress, trauma, and growth events is the extraversion type. The extraversion personality plays a dominant role in dealing with traumatic events. This research aimed to find out the role of extraversion in the posttraumatic growth of survivors with disabilities after the earthquake. The subjects were survivors of the 2006 Yogyakarta earthquake (N=51) who suffered upper- and lower-limb disabilities (arms and legs) and spinal cord disabilities. Data were collected using the posttraumatic growth scale and the extraversion scale. This study's data analysis method was simple regression. The results show that extraversion contributes to posttraumatic growth in survivors with disabilities after the earthquake.

Introduction

Indonesia is the largest archipelagic country and is located along the equator. Geographically, its territory is the meeting point of several world lithospheric plates, making Indonesia a country prone to natural disasters such as earthquakes, volcanoes, and tsunamis [1]. Badan Meteorologi Klimatologi dan Geofisika (BMKG) and Badan Nasional Penanggulangan Bencana (BNPB) recorded one of the most significant earthquake events that caused thousands of casualties and property losses, namely the Yogyakarta earthquake of May 26, 2006 [2]. The impact caused by the earthquake is still felt today by Yogyakarta residents, especially survivors who were seriously injured and left disabled.

Based on observations in the field, researchers found several types of physical disabilities among earthquake victims, involving the feet, hands, and spine. Spinal cord injuries are the most dominant, resulting in victims having difficulty carrying out daily activities and becoming dependent on others. Spinal cord injury itself consists of two types, paraplegia and tetraplegia.

The "new" self-acceptance of the victims is undoubtedly not easy. Accepting that their limbs are no longer intact and that their activities are limited becomes a source of conflict for the victims. The conflict arises not only from self-acceptance but also from acceptance by the environment. For a father who experienced a disability after the earthquake, the condition was a heavy blow because he would encounter obstacles in providing for the family. The same thing is also felt by mothers who experienced disabilities after the earthquake and have limitations in providing for the needs of their husbands and children. [3] surveyed groups of women with disabilities who were victims of the earthquake in the Bantul area, namely the Jetis and Bambanglipuro sub-districts, regarding the development of motivation and self-acceptance. It was found that as many as 50 respondents (60%) of the women with disabilities still did not accept that they were disabled. The average respondent was still embarrassed to leave the house, felt inferior, was sad about the conditions experienced, lacked family support, and did not dare to express her opinion. The conflict that arises causes its own trauma.
Trauma is an emotional response to a horrific event such as an accident, rape, or natural disaster. Immediately after the incident, it will cause characteristic shock and denial. The DSM-IV-TR [4] defines trauma as an event involving actual or threatened death, severe injury, or threats to the physical integrity of oneself or others, which the person has experienced, witnessed, or been confronted with. This is accompanied by responses in the form of intense fear, a sense of helplessness, or horror. [5] found two individual responses when facing traumatic events: negative and positive. The adverse reaction takes the form of stress and depression, called posttraumatic stress disorder (PTSD). Meanwhile, the positive posttraumatic response is called resilience and has recently been known as posttraumatic growth [6,7]. [7] mention factors that influence posttraumatic growth, such as individual characteristics, characteristics of surrounding circumstances, emotional management of complex events, automatic or deliberate cognitive rumination processes, self-disclosure, personality, optimism, expectations, social influences (social support) and culture, coping, narrative development, and wisdom.

The earthquake disaster became one of the traumatic events of interest, especially for survivors who experienced disabilities due to earthquakes. Several recent researchers have studied posttraumatic growth in earthquake survivors [8,9,10,11]. Previous studies examined the role of social influence, namely social support, on posttraumatic growth. The results showed that social support significantly contributed to posttraumatic growth among survivors with disabilities of the Yogyakarta earthquake [12]. Other researchers, such as [6], found that the best predictor of posttraumatic growth was personality (such as extraversion).

The personality framework that is often associated with stress, trauma, and growth events is the Big Five personality [6,7,13]. The Big Five comprises five dimensions: extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. Relationships between the Big Five traits and traumatic events tend to vary in the low-to-moderate range.

Based on the description above, the researchers are interested in knowing the role of extraversion personality in posttraumatic growth among disabled survivors after the Yogyakarta earthquake.

Definition

Posttraumatic growth was first introduced by [6]. The term, called posttraumatic growth in this study, gained popularity alongside the emergence of other psychological terms related to growth, such as stress-related growth, benefit-finding, and adversarial growth. The different terms used by experts to describe growth stem from one common idea regarding the reorganization of cognitive structures as a result of experiences of stress and trauma [14].
[7] also add that growth does not occur directly as a result of trauma but rather from the individual's struggle with the new posttraumatic reality, which is crucial in determining the extent to which posttraumatic growth occurs. Posttraumatic growth does not compare pre- and post-trauma circumstances by judging that one state is better than the other. Rather, posttraumatic growth concerns a different kind of acceptance, reflected in changes within the individual that can later be used to identify positive qualities from within the individual.

Posttraumatic growth is conceptually described through three things [7,15,16]. First, individuals feel an improvement in relationships with several people, such as being more able to value friends and family, feeling an increase in affection, and having the desire to help others. Second, individuals become better able to change their view of themselves after surviving a traumatic event, such as accepting their limitations after the traumatic event. In addition, individuals also become more resilient and have greater strength than before going through a traumatic event. Third, the individual feels a change in his life philosophy, such as finding new meaning and renegotiating what is essential to be realized immediately, because life is limited.

[6] divide posttraumatic growth into five domains, namely:

The first domain is relating to others. Some people report being closer to their immediate and extended family and feeling closer friendship with people who were acquaintances, strangers, or neighbors before the event occurred. However, many trauma victims also reported that some friends left and were not supportive during difficult times.

The second domain is new possibilities, including the individual's desire to change their life goals, enroll in a new school environment, gain a new degree or title, or acquire a new skill. Overall, they have a "here and now" focus accompanied by an appreciation of new life and time.

The third domain is personal strength, or perceived change within oneself. Change occurs when trauma victims express that they have become more assertive, profound, authentic, confident, open, empathetic, creative, alive, mature, human, unique, and humble, and better able to follow through on plans. Many describe themselves as 'better people' now.

The fourth domain is spiritual change, where people can return to their previous faith. They begin by actively participating in places of worship, praying, and expressing gratitude toward a higher being.

The fifth domain is appreciation of life. Many victims report that trauma allows them to 'see clearly' what is essential in life and to change their priorities, from how and with whom they make decisions to spend the day, to the importance attached to nature, health, physical appearance, and belongings.

Influencing Factors

Factors that affect posttraumatic growth include the following.

Demographic characteristics. Demographic characteristics of gender, age, education, marital status, length of marriage, and occupation are predictors of posttraumatic growth. [10] found significant posttraumatic growth differences based on age, education, severity, and length of marriage. However, no posttraumatic growth differences were found for sex, income, marital status, and type of occupation.

Personality. [6] found that optimism, extraversion, positive activity and emotions, and openness to feelings are associated with posttraumatic growth.
Optimism. [7] found that optimism influences people who experience traumatic events. Optimism can predict an individual's ability to regulate matters related to traumatic events [17].

Hope. Hope can be a positive coping resource when faced with stressful situations and plays a role in developing posttraumatic growth. Hope is different from optimism: hope is an expectation that a goal can be achieved, together with the individual's capacity to imagine a way to achieve that goal [18].

Spirituality. A study of the role of spirituality in coping, carried out by [19] among survivors of the eruption of Mount Merapi in 2010, proves that spirituality is a significant predictor of posttraumatic growth.

Social support. [20] found that social support increases an individual's closeness to his family and social environment. An environment that views individuals positively and can draw closer to them can support individuals out of traumatic events toward positive change. This is because individuals who experience traumatic events require emotional support from their environment.

Time span. One factor affecting posttraumatic growth is the interval between the traumatic event and the present circumstances. However, the time interval until the individual experiences growth differs for each individual. Some can grow immediately after a stressful event, but some do not. This is influenced by the type of traumatic event or the characteristics of the individual who experiences it [18].

Characteristics of traumatic events. Different traumatic events can affect the development of further posttraumatic growth. [6] found that individuals who had gone through severe traumatic experiences were more likely to develop posttraumatic growth than those who had not experienced such events.

Extraversion

Extraversion is the most popular personality trait compared to the other personality traits. Extraversion connotes an energetic approach to social and material life: being sociable, liking to engage in activities, being assertive in speech, and having positive emotions; extraverts are characterized as sociable, assertive, active, fond of making new friends, and warm [21].

The early development of the extraversion concept refers to the frame of mind proposed by Jung. Jung developed a personality theory and identified two concepts: the extraverted personality type and the introverted personality type. Jung saw individuals with extraverted personality types as having an objective or impersonal view of the world, while individuals with introverted personality types had an essentially subjective or individualized way of looking at things [22]. [23] conducted investigations related to the theoretical concepts of introversion and extraversion that Jung had formulated. Introversion shows a tendency to develop symptoms of fear and depression, characterized by a tendency toward obsession, irritability, apathy, and unstable autonomic nerves. Introverts also tend to be easily hurt, easily nervous, and inferior, to daydream, and to like sleeping. Extraversion is known for engagement with the people around. Extraverted individuals are cheerful, full of vigor, enthusiastic, and energetic, and experience more positive emotions in crowds.
Their pleasure in seeking stimulation from the environment makes extraverted individuals fall into the category of challenge-lovers. When in groups, they talk a lot, do not hesitate to express ideas, and like to be the center of attention of those around them. In Goldberg's body of research on personality, it is argued that extraversion is the most researched and discussed personality trait [24]. Almost every personality inventory lists extraversion as one of the characteristics it measures. Linkages with other variables, such as posttraumatic growth, also support the relevance of extraversion.

Method

This study uses a quantitative approach, in which the data obtained take the form of numbers [25]. The research design used is simple linear regression, which predicts the relationship between one dependent variable and one or more independent variables [26]. The variables in this study were posttraumatic growth as the dependent variable and extraversion as the independent variable.

The analysis shows that extraversion personality significantly affects posttraumatic growth in disabled survivors after the Yogyakarta earthquake. The results are consistent with the theory put forward by [6] that, of the Big Five personalities, the extraversion dimension is the one most consistently supportive of posttraumatic growth. Research conducted by [28] reinforces the findings of this study: extraversion was a significant predictor of posttraumatic growth in heart disease patients. Individuals with extraverted personalities, who are sociable, assertive, active, or energetic and like to seek new and warm experiences, can deal with traumatic events quickly and have a high potential for posttraumatic growth.

In this study, extraversion personality correlates with aspects of posttraumatic growth consisting of connecting with others, new opportunities, personal strengths, positive spiritual changes, and appreciation of life. Extraversion personality is most highly correlated with relationships with others, with a value of r = 0.718 (p < 0.01). This finding is reinforced by previous research conducted by [7], in which extraversion correlated strongly with personal strength and relating to others. [29] examined the degree of personality pathology among people with clinical disorders, such as social anxiety, similar to personality disorders, finding that such pathology is positively correlated with neuroticism and negatively correlated with extraversion. [30] found similar associations of the personality dimensions of optimism and extraversion with posttraumatic growth in people with arthritis and multiple sclerosis, but no significant association for neuroticism.

Table 1: Descriptive characteristics of subjects.
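The Method section describes a simple linear regression of posttraumatic growth on extraversion. The following is a minimal sketch of that analysis in Python with statsmodels; the file name and column names (`ptg`, `extraversion`) are assumptions of mine, not taken from the study:

```python
# Simple linear regression: posttraumatic growth (PTG) on extraversion,
# for N = 51 survivors, as described in the Method section.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survivors.csv")  # hypothetical data file
res = smf.ols("ptg ~ extraversion", data=df).fit()
print(res.summary())  # slope, R-squared, and the predictor's p-value

# Pearson correlations with the PTG subscales (e.g., relating to others)
# could be checked with df[["extraversion", "relating_to_others"]].corr().
```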
Indian Hospitals and Government in the Colonial Andes

This article examines the reception of the early modern hospital among the indigenous people of the Andes under Spanish colonial rule. During the period covered by this study (sixteenth to mid-eighteenth centuries), the hospital was conceived primarily as a manifestation of the sovereign's paternalistic concern for his subjects' spiritual well being. Hospitals in the Spanish American colonies were organised along racial lines, and those catering to Indians were meant to complement the missionary endeavour. Besides establishing hospitals in the main urban centres, Spanish colonial legislation instituted hospitals for Indians in provincial towns and in small rural jurisdictions throughout the Peruvian viceroyalty. Indian hospitals often met with the suspicion and even hostility of their supposed beneficiaries, especially indigenous rulers. By conceptualising the Indian hospital as a tool of colonial government, this article investigates the reasons behind its negative reception, the work of adaptation that allowed a few of them to thrive, and the eventual failure of most of these institutions.

In 1567, during his inspection visit to the province of Chucuito, the Spanish official Garci Diez de San Miguel questioned the local ethnic authorities about the conditions of the population under their oversight. The responses repeatedly referred to the numerous and growing payments of tribute and labour that the indigenous people of the province were obliged to provide to the Church, the colonial authorities, and the Spaniards who had settled in the region. The purpose of the inspection visit was to assess the resources of the province, compile the information necessary to set the rate of tribute, and facilitate the speedier and more abundant flow of its population's contributions in labour, silver, and products to the benefit of the Crown. The voracious colonial extractive programme would be accompanied by measures that would assert the position of the king as protector of the Indians and patron of their evangelisation. Diez de San Miguel asked the curacas (chieftains of indigenous lineage groups) if they thought that a hospital where poor Indians could be treated should be established in the province. While some expressed their agreement, others replied that 'there was no need for a hospital'. 1

How should the curacas' refusal be understood? The rejection of an institution that at least in theory should benefit them seems baffling, although it could be interpreted as a sign of opposition to the colonial order that was attempting to take root. Since in subsequent years hospitals were established not only in this province but also in the rest of the viceroyalty, this article proposes to examine why and how this policy came about and how it was received. As an extensive literature shows, the institution of the hospital was not originally created to address the problem of health, but rather that of poverty. 2 Its point of departure and ultimate objectives were fundamentally concerned with spiritual affairs. 3 The hospital offered hospitality to pilgrims and the homeless, and sheltered those who, ill or near death, had urgent need of assistance and guidance to save their souls.
The hospital was not a homogeneous institution; under this name were grouped establishments and collective actions that had a number of distinct purposes and rationales for assistance. 4 Their motivation was not disinterested, since it began with the premise that good works would receive divine recompense. Exported to the New World, the institution conserved some of these features, but changed its character, inasmuch as its promoters sought to express a bond between the Crown and its subjects as firm as the one between God and his faithful. In their objectives and operation, urban hospitals sought to foreground the role of health, and even that of doctors. I argue that, due to the political implications of these measures as well as for practical reasons, these objectives were difficult to attain.

Because it had political and religious ends, and was, moreover, a space where distinct levels of authority and different, even discrepant visions of assistance to the poor and of the form in which it should be administered were articulated, the Indian hospital offers a privileged vantage point from which to observe the methods and institutions of government. Studying the conditions in which hospitals for Indians were established and administered allows us to approach a field of conflict and negotiation where matters of religion, subsistence, and governance overlapped. Using judicial and administrative sources produced by civil and church governments, I focus on hospitals located outside of the viceregal capital between the sixteenth and eighteenth centuries. 5 (Adam Warren's Medicine and Politics in Colonial Peru: Population Growth and the Bourbon Reforms (Pittsburgh: University of Pittsburgh Press, 2010) provides in its first chapter an overview of the beginnings of colonial hospitals in Lima and the medical ideas professed by physicians under the Habsburgs; however, the actual timeframe of his study begins in the mid-eighteenth century, when this article ends, and runs up to the mid-nineteenth century, long after Peru's independence.)

Poverty and the Poor in Catholic Europe: Tradition and Reform

Until the sixteenth century there predominated in Europe a point of view that saw nothing alarming about the existence of the poor: they were even considered necessary to the exercise of the Christian virtue of charity and made possible the salvation of those who helped them. But soon different attitudes appeared, in which the poor were perceived as a threat to the well being of society. The role that the elites and the monarch were to play in this regard was also the subject of debate. On the one hand were those who defended the visibility of the poor and supported their right to solicit alms, defended the practice of charity, and reaffirmed the traditional role of the Church as protector of the poor. On the other hand were those who considered the poor and poverty as anomalies, invoked the intervention of the State, and advocated plans for social reform which contemplated the creation of places of confinement where the poor would be compelled to work. 6

It has been maintained that the lay and reformist policies initiated by the municipalities to confront poverty were successful in Protestant northern Europe, while in southern Europe there persisted an attitude that was traditional, Catholic, and archaic. 7 However, this dichotomy has been questioned by historians who have shown that cities like Venice, profoundly Catholic and possessing a strong lay government, had a firm grasp on actions of social welfare. 8 Studies of Spanish cities like Toledo, Seville, and Madrid show that, during the sixteenth and seventeenth centuries, the Crown tried to push forward social reforms and exercise a certain control over the provision of relief to the poor. To this end, it sought to reduce the number of hospitals and intervene in their administration and governance. 9 These plans met with resistance and difficulties of varying calibre, and the impulse weakened.
Before the eighteenth century no state-led system of poor relief emerged, and those initiatives that did exist depended on local policy. Although the monarchy was faithful to its role as head of Catholicism and the Counter-Reformation in Europe, in its attempt to control the hospitals and become patron of the poor it remained in continuous tension with the Church. It is worth examining whether these debates and efforts at reform had any repercussions in the New World.

The King's Sovereignty and the Invention of the Miserable Condition of the Indian

As we know, papal bulls sanctioned Spain's sovereignty over the inhabitants of the New World. Shortly thereafter, the Patronato Real (rights and privileges of the Crown over Church affairs) would be established as the foundation of an intimate cooperation between Church and State in the colonial enterprise, which upheld the role of the king as the sponsor of missionary activity and protector of the Indians. The Indians, because they were pagans and new to the Christian faith, were legally considered miserables (wretches). Originally, this condition implied not only poverty, but also the inability to be responsible for one's own fate. The root of the problem was spiritual: the Indians' lack of knowledge of the true faith made them incapable of controlling their passions or distinguishing between good and evil. Thus they had a proclivity for vice, for being deceived by the devil, and for slipping back into their old religion. As miserables, as morally impoverished, and as minors, the Indians fell under the king's protection. 10

The documents in which the colonial authorities directly address the Indians refer to this role of the king. In 1575, when the viceroy Toledo called together the curacas of Arequipa, Cuzco, and Collao to explain their duties as authorities, he culminated his address by saying, 'That they and others, being poor, are the King's vassals, and because they are so, His Majesty has them for children'. 11 The concept of government revealed here, as Mitchell Dean affirms, is of 'patriarchal relations of service and obligation between sovereign and subjects, heads of family and wives, parents and children, masters and servants, and so on down the line'. 12 Government, sovereignty, and aid are three concepts that presuppose sociopolitical linkages whose form and nature I am interested in exploring.

The position of the king as protector of the poor was always unstable. The Church, invoking tradition and ecclesiastical law, also claimed this role for itself. In the realm of poor relief, this rivalry was made manifest in the creation and functioning of the Indian hospitals.

The Hospitals for Indians in Government Policy

From a very early date, the Crown issued instructions to the viceroys, audiencias (royal supreme courts with administrative duties), and governors to establish hospitals. 13
However, rather than being a manifestation of the new ideas about social policy that had begun to appear in the intellectual debate in Spain, the founding of hospitals appears to reflect the policy followed during the Reconquest: closely tied to military activity, the occupation of territory, and the religious conversion of the recently conquered population. 14 While the Indian hospital of Lima was begun at the initiative of its archbishop in 1548, those of the principal viceregal cities such as Trujillo (1551), Piura (1553), Cuzco (1556), and Huamanga (1556) were created by order of their town councils. 15 All invoked charity, but the emphasis varied.

In Lima, the indigenous population increased markedly due to the migrations that followed the conquest. Regardless of whether they were permanent or temporary, many migrants lost the ties that had connected them to their places of origin, leaving them exposed to illness, hunger, and violence. The attitude of the Spanish in the face of the Indians' plight reveals their doubts about the pertinence of being charitable to non-Christians or to those recently converted to Christianity. Fray Domingo de Santo Tomás denounced to the king the indifference of many to the spectacle of corpses abandoned in the streets, and demanded royal support for the recently founded Indian hospital of Santa Ana. 16 The chronicler Fernando de Montesinos affirms that in Cuzco the conquistadors soon acknowledged that, since they were benefitting from the Indians' labour, it behoved them to concern themselves with the growing number of paupers populating the city. Organisers in the town council founded the hospital, undertook its patronage, and provided it with property. 17 Both cities shared the concern for public order and the control of the poor.

Because it was considered first and foremost a place for the cure of souls, the Church claimed control of the hospital. In Lima, although the institution belonged to the Patronato Real, the archbishop Jerónimo de Loayza sought to block the involvement of laymen. His reasons were based upon medieval tradition, but also on more recent ecclesiastical dispositions. For some centuries past, the bishops had had the right to supervise and inspect the hospitals, and what was more, the recently concluded Council of Trent had validated this faculty. 18 In response to the king's call to merge the hospitals, Loayza countered that to do so would betray the will of the benefactors of Santa Ana, since the hospital for Spaniards could not benefit from its assets. 19

After the first foundations of hospitals for Indians, the elites of the principal cities of the Peruvian viceroyalty continued to participate in their governance, but did not retain control of them, as some theorists and rulers of the early period appear to have wanted originally. In his Gobierno del Perú (1567), the oidor (magistrate and royal official) Juan de Matienzo underscored the hospitals' value as civilising institutions and as institutions of government. In a short passage he suggests that he considered the city of Venice's system of public assistance an example to be imitated. It is possible that Matienzo may have been referring to its lay leadership. Perhaps he found attractive the fact that, in Venice, the goods that were distributed to the needy came for the most part from the community rather than from the collection of alms. 20
The Toledan Administration and the Indian Hospitals

The general inspection visit of the viceroy Toledo, begun in 1569, prompted the founding of hospitals in various parts of Peru, establishing hospitals for Indians in cities such as Potosí, La Plata, La Paz, and Huancavelica. The ordinances issued by the viceroy reveal the influence of Matienzo and his interest in the lay leadership of the hospitals. Upon founding the hospital of La Plata, Toledo organised the city's notables into a brotherhood which would supervise its operation; the oidor Matienzo himself was one of its members. 21 In the directives for the hospital in Potosí, Toledo recommended the appointment of deputies in charge of overseeing its administration, and ordered that it be placed under the supervision of the corregidor (magistrate). 22

The intervention of the viceroy indicated a shift in the attitude of the state, the Church, and the elites toward their responsibility to the poor. Matienzo explained that they were obliged to offer assistance to the Indians: 'since all we who inhabit this kingdom eat by [means of] their sweat'. 23 These ideas were far removed from any concern for social justice, though, since the motives for helping the Indians were eminently pious. 24 At the same time, it was argued that the support of the institutions that would assist the Indians would be their own responsibility. The resources to maintain the hospitals would come from the communities.

Three salient issues emerge from the creation and administration of the hospitals for Indians: the manner in which they were justified by the colonial functionaries; the consequences that the organisation of the hospitals had upon their supposed beneficiaries; and finally, how they were received, especially among the indigenous population. Although I will refer to those hospitals located in the most important cities of the viceroyalty, my focus is the rural and provincial hospitals, whose existence is practically unknown. 25 (Studies on colonial hospitals in Peru focus on the city of Lima only; see for example Miguel Rabí Chara.)

The arguments justifying the existence of Indian hospitals in the sixteenth century varied remarkably over the course of a period of little more than twenty years. The contrasts reveal changes in colonial policy and administration, and also show how the Spaniards perceived the response of the Andean population to the profound calamity brought about by the conquest. While in the 1540s friars like Domingo de Santo Tomás called for the king's support of the hospital, citing the abuses of the Spaniards and their lack of compassion, in later years the predominant discourse that explained why the hospitals were necessary alleged a supposed lack of charity among the Indians.

Toward the end of the 1560s, before Toledo arrived in Peru, a feeling of hostility towards the Indians grew among the colonial and ecclesiastical authorities, and was reflected in the writing of history, in the studies of local religion, in manuals for evangelisation, and in political treatises. This attitude has been explained by the need to justify the conquest and to give substance to the Spanish crown's claims to sovereignty since, it was argued, it had liberated the indigenous people from the 'tyranny' of the Incas. In his treatise on Inca religion, Polo de Ondegardo wrote that Andeans did not have charitable feelings and scorned the elderly and the poor. 26
For his part, Matienzo asserted that the Indians had little compassion for their neighbours, did not help each other, and failed to attend to the sick, the elderly, or those unable to work, not even when it came to their own relatives. 27 When Toledo issued ordinances for the foundation of hospitals in the Andes, he essentially reproduced Ondegardo's words and especially those of Matienzo about the purported Andean attitudes toward the poor. The ordinances for the hospital of La Plata, issued in 1574, underscored the need to inculcate the practice of charity among the Indians to facilitate their conversion to Christianity. In this text, Toledo asserted that never before had there existed anywhere people as devoid of charitable instincts as the Andeans. He maintained that in Peru one saw parents who had no regard for their children, and vice versa. It was imperative to teach the Indians to have compassion for the weak, and the hospitals were a means of accomplishing this. 28

The image of poverty, of the poor, and of assistance that emerges from these testimonies is disconcerting. It is not easy to understand what prompted men like Ondegardo and Matienzo to describe the way Andeans treated the poor in such terms. It is possible that, in the years following the conquest, in various parts of the Andes, the weakest had been left unprotected. Such an attitude would have reflected the conditions of extreme penury to which many men and women saw themselves reduced. One might imagine then that, seeing the effects of so profound a crisis, Ondegardo and Matienzo mistook certain survival strategies for social norms.

The perception of the existence of widespread poverty as well as the objectives of evangelisation and spiritual reform justified the founding of hospitals, not just in the most important urban centres, but also in the provinces and rural areas. Prior to Toledo's tenure, the hospitals' sources of support generally came from alms and bequests, as was customary in Europe. Chucuito is among the first provinces we know of in which community assets were applied to the costs of health care. 29 This measure would become widespread a short time afterwards, when Toledo ordered that a portion of indigenous tribute, known as the tomín de hospital, be earmarked to support the Indian hospitals. 30 It seems that this decision, too, was influenced by Matienzo's opinion on the advisability of utilising community resources in preference to depending on almsgiving. The commentary about charity being an attitude unknown in the Andes must have served as a basis for the existence of the tomín de hospital: the payment of this contribution not only solved a practical problem, but would also serve to ensure that the Indians learned the meaning of this virtue.

Thus in the New World changes were introduced which in Spain at the time would have been impractical or even inconceivable: that alms would come mainly out of the pockets of the poor. Moreover, while in Europe there was talk of creating houses where the poor would be confined and compelled to work to earn their keep, in the Andes it seems the principle being put into practice was that the Indians should leave their towns to work and help themselves. 31 The fact that hospital care was to depend upon Indian contributions soon had consequences for indigenous assets.
During the smallpox epidemic that struck the Peruvian viceroyalty in 1588-9, the viceroy directed that money be taken from the funds of the communities of the Lima diocese as a loan to the hospital of Santa Ana. Contributions totalling 2,000 pesos were collected, a significant sum given the penury which the epidemic must have engendered among the native population. 32 The negative consequences of the epidemic for the indigenous population of Lima did not end there. Five years later, the doctor Marco Antonio Gentil sued for breach of contract, claiming the Indians had not paid his salary. Gentil maintained that he had originally been nominated in 1580 by Toledo, but for various reasons did not fulfil his duties. Confirmed by the viceroy Villar in 1587, Gentil asserted that he had attended the Indians of the reducciones (settlements to which the native population had been removed) that surrounded the city of Lima, and was now claiming his payment in money, foodstuffs, and livestock. The curacas of Lima, represented by the protector de naturales (a royal official charged with protecting Indian interests), rejected the claim. They contended that Gentil only appeared in the towns on feast days, when 'he idled about with the Indians'. They added that Gentil neither visited the sick nor enquired after them. Finally, they said that it was their understanding that the doctor had been contracted for the years of the epidemic, but this had not entailed a permanent obligation. The arguments of the protector de naturales were rejected and the judge ruled that the Indians had to pay the fees claimed by Gentil. 33 The appointment Toledo had made and the payment the doctor demanded represented an expense over and above that comprised by the tribute and its fraction, the tomín de hospital. 34

The creation of systems of assistance and the establishment of hospitals in the provinces of the viceroyalty were also implicated in the undermining of indigenous systems of authority. As he passed through the province of Chucuito some years after the visit of Diez de San Miguel, Toledo issued instructions for the management of its community funds. 35 To a certain degree, these directives were adjusted to the particular circumstances of the place, even as they were altering them. The objectives of the ordinances were basically twofold: to safeguard resources that would serve to meet tribute payments in the event that the settlers were unable to do so, and to provide a means of supporting those who could not work. The appointment of a Spanish administrator seems to have had as its objective the weakening of the native authorities. Faced with the difficulties of administering the resources of so extensive a province, the Toledan ordinances resorted to indigenous support, but bypassed the curacas. It is worth noting that these instructions amply elaborated the themes of administration and assistance in a communal context, but were not specifically concerned with the hospital, an institution that was mentioned only in passing.

The examples that we have seen here place squarely before us a series of diverse social practices as much as they do an institutional reality. Below, we will consider the hospitals that were created in the cities, but we are especially interested in understanding what took place in the smallest districts. We should investigate what kind of hospitals existed in these places, and how they worked.
The Varied Faces of the Indian Hospitals

The European notion of the hospital that was exported from Spain to the Andes had contrasting components. First, it bore features of the Spanish medieval institutions which had proliferated in urban centres, although also in the country and along the pilgrimage routes: houses that offered shelter and a bit of nourishment, where medical attention was secondary to the spiritual services provided by members of the religious orders or by devout persons. These establishments were supported by alms originally provided by their founders and ordinarily procured by their administrators. Second, the concept also included the urban hospitals whose objectives were not sharply differentiated from those of the establishments just described, but in which the role of doctors was considerably more important: examinations and licences were required to practice, although the need for traditional healers continued. 36 In the New World such institutions were placed under royal patronage, where they remained in the care of the viceroys and were subject to the scrutiny of the bishops. In both Europe and the New World, hospitals were often established in buildings specifically constructed for that purpose. These mostly urban institutions bore a little more resemblance to modern hospitals. Finally, for reasons of space, social hierarchies, cultural differences, and even political expediency, the Spanish hospital 'system' permitted the assistance of the sick in their houses via the distribution of food and, exceptionally, the visit of a doctor, surgeon, or barber. In sum, in the European concept of the hospital that was established in the Andes there coexisted different manifestations of the idea of assistance that reflected diverse circumstances, social relations, and interpretations of the jurisdictions involved. 37

The hospital for Indians in Lima and others established in the principal cities of the viceroyalty had some features of the Crown-sponsored institutions that had emerged in the Habsburg period, 38 but while they were governed by certain common principles, they were not part of institutional networks. Many of the hospitals which were created in the environs of the capital and in the interior of the viceroyalty languished and eventually disappeared, while others were adapted to local circumstances and remained in operation at least until the time of the Bourbons.

The documentation on provincial hospitals is very scanty. It is dispersed in various repositories, and the information it offers is frequently unclear. Notwithstanding the disappearance of some papers and the classification of others in unexpected places, the picture that the archives offer the historian represents a tangle of complicated interests, precarious administration, conflicts of jurisdiction, and different approximations of the idea of the hospital that existed in colonial Peru. Because the hospitals were identified with the governments of their respective localities, neither the viceroy nor the audiencias commissioned reports on the number and status of these establishments for an entire corregimiento or province. In the memorias of the viceroys the references to hospitals are usually found in the section called 'ecclesiastical government'. The visitas pastorales (reports of diocesan visitations) certainly include this sort of information, although it tends to be minimal.
The reports compiled during the inspection visits of the Archbishop Toribio de Mogrovejo to the then far-flung diocese of Lima between the end of the sixteenth century and the beginning of the seventeenth reveal the conflicts of jurisdiction, the different points of view on what constituted the proper scope of the hospital, and the lack of specificity about the sources of its support, circumstances that would persist in the decades that followed. In 1585 the priests in charge of the doctrinas (Indian parishes) of the province of Huaylas presented a memorial (written statement) describing the difficult state of their parishes and of the hospitals within this jurisdiction. They deplored the fact that, although there were sufficient resources with which to build them, the Indians 'died like beasts' in the fields for lack of hospitals. The religious superior of the province explained in a letter to the archbishop that, as long as there were no hospitals, nor the 'necessary cleanliness' for the divine cult (in other words, to say Mass and administer the sacraments properly), the priests would not provide spiritual assistance to the Indians. 39 The complaints singled out the corregidores, whom they accused of appropriating community funds which, they argued, should have gone to the adornment of the churches and the equipping of the hospitals. When Mogrovejo himself asked the corregidor of Cajatambo for the money for the completion of the province's churches as well as for beds and medicine for the hospitals, the latter refused to give it to him. 40 The archbishop then opted to excommunicate him. 41 The dispute between the corregidor and the parish priests over the resources of the community exemplifies the difficulties that arose from putting into practice the principles of the Patronato Real, as well as determining the character of the hospitals in the doctrinas. 42

The observations in the registers of the pastoral visits that took place between 1593 and 1606 under the heading of 'hospital' list only their property. In practically every case, there appear small quantities of livestock. Other income, such as money from annuities and rentals, appears only very exceptionally. The income derived from the tomín -the cause of the incident of 1585 -is not mentioned, nor is there any indication of what the corregidores did with it. 43 Remarkably, hospital buildings receive practically no mention, and not all the towns visited had property assigned to this category. In no case is there any mention of persons responsible for attending to the sick. In light of later sources, one might conclude that the hospitals that Mogrovejo had visited did not exist, at least not in a form that would correspond to the urban model. The livestock that they possessed was used to feed the sick who were cared for in their homes. It is likely that in at least some cases this help was administered according to the membership of the beneficiary in some particular kinship group. 44 These hospitals appear to represent actions more than they do establishments. They bear more resemblance to the domestic, barely medicalised version, but not to the institution that the Habsburgs sponsored in the sixteenth century. Their principal source of funding was the community funds like those regulated by Toledo in Chucuito.
The report commissioned in 1619 by the archbishop Bartolomé Lobo Guerrero contains one of the most complete pictures of the state of the hospitals in the Indian parishes of the diocese of Lima and reveals marked contrasts with regard to Mogrovejo's visit. The idea of the hospital that the inspectors had is clear: it should be an establishment that met certain minimum conditions for tending to the sick. For this reason, the document describes what the inspectors found on their tour: in the great majority of cases a series of empty houses that were called a 'hospital'. 45 While in Mogrovejo's inspection visit some Indian parishes had premises designated to serve as a hospital along with some goods, the parishes in Lobo Guerrero's report lacked both and appeared never to have had them. 46

Of the 152 places visited, which includes some towns in which Spaniards and Africans also lived, in fifty-seven there was no hospital, and in twenty-one the inspectors left no clear indication of whether one existed, although probably none did. Of the seventy-four remaining villages that did have a hospital, in sixty-three cases the inspectors indicated that these were deserted or that the inhabitants did not want to make use of them. In the localities with hospitals to which Indians did resort, these were not for their exclusive use, but rather were 'multi-racial' and urban or semi-urban, as in the case of the coastal towns of Santa, Cañete, Ica, and Chancay, or the city of Huánuco in the central sierra. Barely a handful of hospitals in the Indian parishes functioned according to the inspectors' criteria: in Pacarán, some residents made use of the hospital, 47 while in San Damián the poor were fed. Inasmuch as they provided assistance, these can be said to have fulfilled their function. 48 The remote town of Tauca seems to have been the exceptional case with an active rural hospital although, lamentably, I have found no information that explains why. 49

Three elements can explain the sombre image that the emissaries of the archbishop presented: the lack of confidence that the hospitals inspired in their supposed beneficiaries, the conduct of the corregidores, and conflicts of jurisdiction. From the time of Jerónimo de Loayza, first archbishop of Lima, it was known that the Indians refused to enter the hospital because of the fear it instilled in them. The Indians referred to the hospital as 'the house of the dead'. This attitude was not limited to the indigenous population of Lima. Among the reasons for rejecting the hospitals given in 1567 by the curacas of Chucuito were inadequate diet and hygiene: the curacas said that they preferred to stay in their homes, where they could eat their fill, and which were not so full of lice. 50 In 1586, in response to the questionnaire sent by the Crown which would result in the Relaciones Geográficas de Indias, the corregidores of Atunsoras, Atunrucana, and Laramati, in the province of Huamanga, also registered the Indians' refusal to turn to the hospitals. 51 In addition to the cultural differences surrounding the treatment of illness, for many curacas and heads of kinship groups, the insinuation that they were not capable of supporting their relatives and subordinates and providing for their needs could be seen as an affront to their authority.

The negative image of the corregidores given in Lobo Guerrero's report comes as no surprise.
Some of them appropriated the community money earmarked to support the hospital, and there was no need for them to be in remote places to do so: in Magdalena, on the outskirts of Lima, the corregidor gave nothing to the people of the town. 52 Others limited themselves to the sporadic distribution of some goods among the sick. Some of the corregidores who responded to the questionnaire of the Relaciones Geográficas gave reports in which they presented themselves in quite a favourable light. In 1586 Diego Dávila Briceño, corregidor of Yauyos, asserted that not only had he been busy carrying out the reducción of the province's Indian towns, but in practically all of them he had established hospitals endowed with livestock that their encomendero had left them. These were staffed by indigenous specialists whom Dávila Briceño said had been taught some curative techniques such as bloodletting. 53 In 1582 the corregidor of Jauja, Andrés de Vega, noted that the hospitals of this province were supported by the tomín de hospital in addition to alms. 54 While these initiatives were indeed taken, their impact was negligible. Instead, there prevailed the idea that the corregidores were a hindrance rather than a help. In the years that followed, the hospitals disappeared, or they took a different direction from the one that was originally intended.

The conflicts of jurisdiction were manifested not only in the tense relations between parish priests and corregidores, or between the latter and the bishops. To these were added the difficult relations with the religious orders, which resisted the controlling impetus of the bishops. In 1619 the Dominicans in charge of the parishes of Yauyos refused to permit the archbishop's emissaries to make an inspection visit, nor would they provide information about their parishioners. In the corregimiento of Huamalíes, the inspectors found a similar attitude among the friars of La Merced. 55

That same year, Pedro de Valencia, bishop of La Paz, described the state of the hospitals of the Chucuito province, then under his jurisdiction. 56 Valencia relates that each of the seven towns of the province had a hospital in a house equipped for that purpose. In addition to the tomín, the hospitals, partly following the pre-Hispanic model of provisioning from the resources of the province, were allotted the harvests of maize that came from the distant lowlands of Moquegua and Larecaja. 57 To this was added the income produced by some general stores. Three surgeons were given charge of attending the inhabitants of the seven towns. In each of these, notes Valencia, there was an Indian barber. 58

On the surface, the 'hospital network' of Chucuito suggested an unusually good degree of organisation. But the bishop's report did not end there. Valencia lamented that the chief administrator of the hospitals was a layman appointed by the viceroy who, maintaining that the hospitals were under the Patronato Real, would not allow the bishop to visit them and took advantage of the situation to make a profit. The hospitals were precarious and the Indians refused to use them, since they preferred to cure themselves in their homes and rely on the native specialists. According to Valencia, the only acceptable institution was the hospital staffed by the Jesuits in the Indian parish of Juli, where it seems medical attention was offered. 59
Valencia's description allows us to form an approximation of what happened in the hospitals of Chucuito in the half-century that had passed since the visit of Diez de San Miguel. He had proposed that a hospital be established in Juli, and that the barber who lived in Chucuito, who was paid with the proceeds from the rental of the community's general stores, periodically visit the people in their towns. 60 Fifty years later the hospital of Juli, now administered by the Jesuits, was not the central establishment envisioned by Diez de San Miguel. Instead, there was a hospital in each town, in accordance with the proposal of some of the curacas and local notables, who had been questioned in 1567. 61 (The Indians of the Anansaya kin group in Chucuito stated that 'it would be good to build a hospital in each town, but not establish one single hospital for the whole province because the patients won't be able to make use of it'. Diez de San Miguel, op. cit.) Valencia harshly criticises the indigenous authorities to the extreme of recommending their extinction, but he does not say that they participated in the running of the hospitals. 62 What draws attention in the description of 1619 are the lay administrator, his alleged business dealings with the community's produce, his refusal to allow the Church to inspect the hospitals, and the apparent absence of the curacas. Chucuito shared some of these features with other Indian hospitals in colonial rural Peru.

A first conclusion that can be drawn from the examination of the documents on Indian hospitals is that the curacas and other authorities like native notables, governors, and bosses participated actively in their management. To the degree that they claimed responsibility for the care of the poor as an inherent aspect of their investiture, this stance can be read as the resigned acceptance of a fait accompli, but it could also have been a strategy. The provision of assistance to the poor involved not just the community revenues but also the authority of the curacas. However, if the curacas -or at least some of them -cooperated, we could explain the empty and abandoned hospitals because, of the possible models for a hospital, they chose the one that would not oblige the sick and the needy to abandon their homes. This model could be more easily adapted to local customs and possibilities and to their political culture. The parish priests and the archbishop Mogrovejo himself had to allow it, because this type of assistance was common in the rural Spain of the Ancien Régime. 63

Financing the Hospitals Effectively

As one would expect, the question of hospital assets was the thorniest. The collection of the tomín should have generated not-insignificant sums, but a considerable portion did not reach its destination, since it tended to remain in the hands of those who administered it. This tendency only worsened in the decades that followed. The questioning of the composition of the tithe that the Indians paid must have affected the collection of the tomín de hospital, as can be deduced from the memorias de gobierno (reports made to their successors) of the viceroys. Since a portion of the tithe was allocated to the hospitals, some considered the tomín to be redundant. 64 By the end of the eighteenth century, the tomín income to the hospital of Santa Ana in Lima was insignificant.
In various towns in the diocese of Lima, the curacas and the other indigenous authorities assumed the duties of mayordomos (stewards) and administrators of the hospitals. From this position, they kept watch over indigenous interests, although certainly there were those who benefitted personally from this. Along the way, they lacked neither confrontations nor the option to establish alliances, since pressure on hospital assets came from various directions.

The locality of Marca in the corregimiento of Huaylas was one of the places where the envoys of Lobo Guerrero in 1619 had described the hospital as a 'deserted house'. 66 So laconic a report sheds little light on what took place. Thus one might suppose that the inhabitants of the place had abandoned the hospital to its fate, but for a complaint brought by the curacas before the ecclesiastical courts in 1597, which informs us that its facilities -which also included a chapel -had been seized and ruined by the mayordomos of a powerful rancher and landowner of the province, who used them to house livestock and warehouse wool. 67 In their formal complaint the curacas not only mentioned the services that had been provided in the facilities, but also alluded to the sacred character of the place. In this region, dominated by ranches and textile workshops, the continuous siege by the landowners on Indian property and labour could have contributed to the ruin of the hospitals.

Pressure also came from the parish priests, with requests for loans and donations to meet needs that were as much symbolic as material. In 1622 the mayor of Picamarca, Yauyos, who was also the mayordomo of its hospital, asked for and was granted an order that the town be reimbursed a sum of money that the curate of their parish had taken and deposited in a bank in the city of Lima. 68 (This was the bank owned by Juan de la Cueva, a noted financial institution in the seventeenth century. AAL, Hospitales, leg. 2, exp. 2.) The following year, the curacas of San Pedro de Pilas, in Yauyos, demanded the return of the livestock that the parish priest had requested as a loan to alleviate the needs of a neighbouring parish. Three years before, their priest had imposed on them a term of twenty days in which to buy a low canopy under which to carry the viaticum to the sick. Under the exigency of the deadline, the curacas and other leading residents of the town pleaded with the ecclesiastical inspector to authorise them to take money from the hospital funds. When the state of the hospital's assets was examined, it was discovered that the livestock had not been returned, and the canopy had never been purchased. Many resources must have been dissipated owing to the multiple demands that burdened the pueblos, the comings and goings to the courts to file complaints and obtain a response, and the near-impossibility of fulfilling obligations under conditions of asymmetrical social relations. 69 (The curacas stated during the investigation of this case that 'the parish priests usually take for themselves the property and livestock belonging to the hospitals'. AAL, Hospitales, leg. 2, exp. 8.)

The cases brought before the ecclesiastical tribunal concerning the administration of hospital property show that the institution had an impact on the economy as much as on local structures of authority.
In 1609 the curaca of Orcotuna, in the province of Jauja, and other authorities of several towns accused the administrator of the hospital's livestock of a series of offenses that ranged from using the labour of the shepherds for his own benefit, to selling the livestock at prices disadvantageous to the community, to 'giving banquets and clothing', to filing lawsuits indiscriminately. As a result, they maintained, the hospital could neither fulfil its purpose nor carry out the intentions of the encomendero who had founded it. The declarations of the accusers suggest that they thought that the post should be filled by someone who would at least have the approval of the curacas as well as possess wealth of his own, an aristocratic conception of the position. The possibility of making use of the resources of the hospital to supplant the role of the curacas must have provoked alarm and disapproval. 70

To receive official confirmation in the position of administrator from the Spanish authorities became an objective of some indigenous authorities. After the administrator of the hospitals of Yauyos renounced the position, the Indian leaders took the reins, by decision of their parish priest. Two years later, in 1629, they asked that the archbishop endorse the appointment. To defend their cause, they modified history in their favour, and defended the original plan of the hospitals: they maintained that since the time of Archbishop Loayza it had been customary for the hospitals to have Indian mayordomos, and asserted that if they were given charge, there would be no Spaniards and clerics despoiling the poor of their property. 71 Although we do not know the disposition of this particular case, it is clear that some curacas came to consider the administration of the hospitals to be an integral aspect of their duties. The fact that the buildings or spaces called hospitals for the most part did not exist was beside the point, since the objective was the assistance that was offered to the needy in their homes.

The recourse to the ecclesiastical courts in these parishes in the diocese of Lima suggests that the curacas resorted to it whenever it was suitable or expedient. In the case of Yauyos, because the founder of its hospitals -the archbishop Loayza -was a member of a religious order, the Church acted as mediator in the conflicts over hospital property, which were drawn as much from within as from outside of the communities. But this was not necessarily the case in the other jurisdictions, such as Jauja. 72 In the last instance, the indeterminate boundaries of the jurisdictions were of considerable import in the towns' disputes over the hospitals' assets.

State, Church, and Indian Hospitals in the Seventeenth and Eighteenth Centuries

It is clear that the Indian hospitals did not follow a smooth upward path from simple, precarious institutions to organised establishments that accomplished the objectives their creators had assigned to them. The jurisdictional conflicts, the administrative problems, and the presumption that in the best of cases the hospitals were barely a shadow of what their founders had imagined they would be must have awakened doubts among the high-ranking civil and ecclesiastical authorities about their viability as an instrument of government.
The assets of the rural and provincial hospitals continued to serve as an object of controversy, and different actors, including the viceregal government, the Church, the administrators, and the indigenous authorities, held different perceptions of the nature of the hospitals' revenues and how they should be administered.

While in the sixteenth century and the beginning of the seventeenth century the high-ranking authorities of the Lima diocese took note of the Indians' refusal to patronise the hospitals and blamed the corregidores for appropriating their resources, later on the accusations of the abandonment of these establishments and the disappearance of their assets would fall upon the parish priests, but above all on the Indians themselves. This propensity was exemplified in the investigation of the assets of the hospitals of Yauyos that was ordered by the visitador (inspector) Juan Sarmiento de Vivero in 1660. The cleric he sent to investigate stated that the inquiry was necessary because, during a recent epidemic, many people had died of hunger because they had no meat with which to feed themselves. Except for mestizos and Spaniards, about whose presence in the province we have no information, practically no one was safe from criticism. What little has survived of the questioning of the curacas indicates that the assets of the hospitals had been confused with those of the parish, finally 'disappearing' or possibly remaining in the hands of a few. The building designated as the hospital was in ruins. The allusion to the 'common good' in the words of the diocesan representative indicates a significant change with regard to the values that were expected of the curacas and leaders as those evidently now responsible for the hospitals. Yet to be unequal to the task was a position that discredited them. While it cannot be said with certainty that there is a direct connection, cases like this must have influenced the changes that took place in the years that followed. 73

In his memoria de gobierno, the viceroy-archbishop Melchor Liñán y Cisneros (1678-81) wrote about the need to strengthen the big hospitals and praised the religious orders dedicated to hospital care. In recommending measures for financing the hospital of Santa Ana in Lima, he explains that by a royal order issued in 1666 the tomín tax had been amended and removed because 'the hospitals have died out and come to an end in the Indian reducciones', 74 although he acknowledged that in some provinces the corregidores continued to collect it. The predecessors of Liñán y Cisneros had favoured urban hospitals and they entrusted them to the Bethlehemites and the order of San Juan de Dios, both hospital orders, 75 with the idea that not only would they improve hospital administration, but also that their ethos would inspire a kind of spiritual renewal. Following this lead, Liñán y Cisneros reports that he placed the hospital in Huanta in the hands of the Bethlehemites.

A judicial proceeding initiated by the curacas of Huanta in 1756 to claim possession of their hospital permits us to glimpse the effects of these changes. 76 This hospital must have been founded in the sixteenth century according to a bequest in the will of the encomendero who left property and a sum of money by way of restitution to the Indians of three communities of Huanta. A copy of the will remained in the community treasury and, many years later, the viceroy Marqués de Mancera (1639-48) authorised the hospital's founding.
It was stipulated that under the supervision of the Jesuit rector a house be purchased to house the hospital, and the protector de naturales was placed in charge of its administration. But after a period of time the hospital was abandoned and its rental income was adjudicated to the hospital of Huamanga, administered by the religious order of San Juan de Dios. It is not evident that the placement of the Huanta hospital in the care of the Bethlehemites that had been ordered by Liñán y Cisneros was ever carried out.

In 1756 the claim of the curacas of Huanta for the restoration of the hospital and its property to their locality was made against the town council of Huamanga and the order of San Juan de Dios. They presented the evidence of the encomendero's bequest and explained that, due to the distance, the Indians of their communities could not be treated in the hospital in Huamanga. The court decided that, until the hospital in Huanta was rehabilitated, the hospital in Huamanga would continue in possession of its resources. Thirty years later the dispute was still going on, but now the religious of San Juan de Dios considered it a grievance that anyone from Huanta was attempting to assert some claim to this property. By this time, the curacas had disappeared from the scene and it was the curate of Huanta who was pursuing the case. 77 Comparable situations occurred in other localities where the government favoured the religious orders, handing the income of the hospitals over to them, to the detriment of the communities. 78 In the years that followed, the Crown sought to ensure that the royal courts would be able to oversee the hospitals that were in the hands of the religious orders by, first, asserting that these were still subject to the Patronato Real and, second, curbing the religious orders' ambitions to appropriate the assets of the hospitals. 79

The reforming impetus of the Bourbons also reached the administration of the hospitals. 80 A trial that took place around 1748 81 brings us back to the province of Chucuito to examine how hospital assistance was organised in the century after the 1619 report of the bishop Valencia. The arguments used permit us to analyse different perceptions of the nature of the hospitals, the role that fell to the government at the moment in which it was trying to reform the colonial administration, the participation of the curacas and other indigenous authorities in the provision of aid, the fate of the tomín and of the community funds, as well as the participation of the Church.

The trial was conducted before the Superior Gobierno (central government), beginning with a report by the officials of the royal treasury against Ignacio de las Cuentas, administrator of the tomín de hospital in Chucuito and the protector de naturales there. The relevance of his position and salary was questioned, since a hospital no longer existed in this province. The case had come to light several years earlier, when in 1738 a decree ordered that De las Cuentas cease to administer the money from the tomín, and deposit it in the royal treasury. He was also commanded to return any salary that he had collected since the decree had been issued. In his defence, the administrator tried to show that the royal officials had incorrectly interpreted the nature of the tomín, the rights and responsibilities it involved, and its administration.
He maintained that the tomín de hospital, created to address the health needs of the Indians, did not belong to the king, but rather was 'the Indian's own wealth, which must be turned to the benefit of this same Indian when he is sick', thus the directive to deposit it in the royal treasury was unlawful. He also recalled that, by law, the tomín was collected by the corregidores and alcaldes mayores (district magistrates) of the towns, and that in the case of Chucuito, from 'time immemorial' the protector de naturales had been responsible for its administration.

The administrator argued that the tomín served to help the Indians who did not live in the cities, which set aside the well-established role of the community funds, and he went on to explain the notion of the hospital that applied in the province. The tomín -he declared -was paid not 'for the walls of the hospital' but rather so that the Indians would receive medical attention. He acknowledged that there was no hospital in Chucuito, but he maintained that this mattered little, because the Indians indeed received assistance. To have a hospital in a central location was impractical because, apart from there not being sufficient means to support one, great distances would have to be travelled by those who needed it. As for the hospital that the Jesuits ran in Juli, it did not count, since it only served the people of that parish. De las Cuentas explained that he himself delivered the help and, moreover, in each town he had 'trustworthy persons' -curacas and local leaders -who were in charge of distributing food and medicine to those in need. The parish priests participated in the system, issuing vouchers to the caciques for modest quantities which were delivered individually. 82 This adaptation to local conditions, which of necessity recognised the role of the indigenous authorities, made the system viable.

It is impossible to know how effective the organisation described by De las Cuentas was. It is probable that, as had happened in other provinces, the money did not arrive in the amounts hoped for by those for whom it was intended, but undoubtedly the arguments presented sum up a practice that the government authorities as well as the ecclesiastics, in their zeal to impose their model of assistance, repeatedly refused to recognise. The result of this trial is surprising, since the government ruled in favour of the administrator, admitting that, although 'there was no hospital in the material sense, it did exist in [another] form'. It may have been difficult for De las Cuentas to imagine that the effects of the imperial policies of the Bourbons would prevent him from receiving justice and recognition of his work as an agent of the government. Yet when he asked for the return of the money that he had been obliged to restore to the royal treasury, the only response he received was that it was impossible, because 'it had been consumed by the costs of the war'.

Conclusion

Assistance to the poor was one of the pillars upon which the monarchy supported its efforts to legitimate its right to govern. In Spanish America, this principle was applied to the governance of the Indians, conceptualising them as miserables (wretches) and the king as 'patron of the poor' or 'protector of the Indians'.
From this point of view, the Indian hospital must have been an instrument of government through which the role of the king was made tangible, creating a bond between the king and his subjects which was replete with the political and not a little of the sacred. The Church formed part of this bond, sometimes confirming it, and other times challenging it, depending on the circumstances in which it could intervene, siding with or mediating between one party and another. In practical terms, this nexus was made possible by means of the contributions of the (supposed) beneficiaries themselves. In light of the diversity of forms encompassed by the notion of the hospital that was transplanted to the Andes, it should not seem odd, after having been established for so many years, that how hospitals ought to be, the nature and purpose of the tax created to support them, and the manner in which they were administered were fuel for controversy. In between there had been not only different interpretations and practices of health and mutual aid but also, as I have tried to show here, of authority, and of who should wield it and how. That is to say, through which channels would the protective power of the king flow, and how would these be directed to the native population: what role would be played by the corregidores, protectores de naturales, parish priests, the communities, and indigenous authorities, as vehicles for and administrators of authority. The role of the curacas was the most controversial point. Upon their participation depended the success or failure of the provincial hospitals and those of the Indian parishes. The project of creating systems of public assistance or hospitals assumed that it was necessary to bypass the indigenous authorities and weaken the bonds of kinship so that the hospitals, understood as places of isolation, could prosper. This entailed the transformation of their users into authentic miserables. Attempts to impose this institutional model failed not only because of the refusal of the curacas and the indigenous population to adopt it, but also because, in the end, the parish priests as well as the provincial colonial authorities recognised that, without the cooperation of the curacas the project was not viable. It was the local administration of hospitals, and most importantly, the fact that these same communities together with their resources supported the actions of assistance, that tended to strengthen the authority of the curacas. But the power of the latter was far from stable, depending as it did on conditions that, as much from within as from outside the communities they headed, had influence on their cohesion, their social hierarchy, and their ability to safeguard their wealth. The actions of encomenderos, landowners, ranchers, parish priests, and religious orders, and the processes of socioeconomic differentiation within the indigenous population, exerted a fundamental pressure on the shaping of the communal institutions of which the hospital formed a part. Finally, outside the cities, the colonial State lacked the human and financial resources and suffered from political limitations too serious to be able to execute its project of assistance as it had originally been conceived.
Antagonistic Activities of Lactobacillus rhamnosus JB3 Against Helicobacter pylori Infection Through Lipid Raft Formation Helicobacter pylori is a Gram-negative pathogen that can increase the risk of stomach cancer in infected patients. H. pylori exploits lipid rafts to infect host cells. Infection triggers clustering of Lewis x antigen (Lex) and integrins in lipid rafts to facilitate H. pylori adherence to the gastric epithelium. H. pylori infection can be treated with probiotics containing lactic acid bacteria that offer numerous benefits to the host while lacking the side effects associated with antibiotic therapy. Previously, we showed that the cell-free supernatant (CFS) derived from Lactobacillus rhamnosus JB3 (LR-JB3) at a multiplicity of infection (MOI) of 25 attenuated the pathogenicity of H. pylori. In this study, we established a mucin model to simulate the gastric environment and to further understand the influence of mucin on the pathogenesis of H. pylori. Porcine stomach mucin dramatically upregulated H. pylori virulence gene expression, including that of babA, sabA, fucT, vacA, hp0499, cagA, and cagL, as well as the adhesion and invasion ability of H. pylori, and induced increased levels of IL-8 in infected AGS cells. The CFS derived from LR-JB3 at a MOI of 25 reduced the expression of H. pylori sabA, fucT, and hp0499 in mucin, as well as that of the Lex antigen and the α5β1 integrin in AGS cells during co-cultivation. These inhibitory effects of LR-JB3 also suppressed lipid raft clustering and attenuated Lewis antigen-dependent adherence, type IV secretion system-mediated cell contact, and lipid raft-mediated entry of VacA to host cells. In conclusion, LR-JB3 could affect H. pylori infection through mediating lipid raft formation of the host cells. The currently unknown cues secreted from LR-JB3 are valuable not only for treating H. pylori infection, but also for treating diseases that are also mediated by lipid raft signaling, such as cancer and aging-associated and neurodegenerative conditions. INTRODUCTION Helicobacter pylori is a Gram-negative pathogen that colonizes the gastric mucosa and increases the risk of stomach cancer in infected patients (1,2). The treatment of H. pylori infection mainly relies on antibiotics, which may affect the balance of gut microflora and also facilitate the development of antibiotic-resistant strains (3,4). Lactic acid bacteria are known to offer numerous benefits to the host, including protection against H. pylori infection (5). Previously, we indicated that oral administration of Lactobacillus rhamnosus JB3 (LR-JB3) to mice could eliminate H. pylori infection and attenuate inflammatory responses by suppressing vacA gene expression (6). LR-JB3 also induced superoxide dismutase and catalase activity, and the serum levels of beneficial amino acids, which are impaired by H. pylori infection (7). These results indicated that LR-JB3 has potential in eliminating H. pylori infection. Therefore, the mechanism that underlies LR-JB3 activity against H. pylori infection should be further investigated to support potential clinical applications. Previously, we used a cell model to explore part of the molecular mechanism of LR-JB3 that is involved in modulating H. pylori infection. LR-JB3 at a multiplicity of infection (MOI) of 25 suppressed expression of sabA and fucT in H. pylori, translocation of CagA, and expression of the Lewis x (Lex) antigen and toll-like receptor 4 (TLR4) in H. pylori-infected AGS cells (8).
However, it has been indicated that the Lex antigen and TLR4 are clustered in lipid rafts to facilitate the adherence of H. pylori to the gastric epithelial cells (9)(10)(11)(12), and disrupting lipid rafts may prevent H. pylori-associated gastric cancer (13). Lipid rafts consist of phospholipids, sphingolipids, and cholesterol and act as signaling platforms and gateways for microorganisms to adhere to host cells (10,(14)(15)(16)(17). Furthermore, H. pylori expresses cholesteryl α-D-glucopyranoside acyltransferase (CGAT), which is encoded by the hp0499 gene, to enhance lipid raft clustering on host cell membranes (9). Thus, H. pylori can manipulate the formation of lipid rafts to promote infection. Therefore, lipid raft-mediated pathways may be involved in the mechanism of LR-JB3 against H. pylori infection. The stomach is covered by a thick layer of mucus that physically protects epithelial cells from the gastric juices and interaction with pathogens (18); therefore, H. pylori needs to migrate through the mucus layer to colonize the epithelial cell surface (19). We used mucin to help simulate the conditions under which H. pylori encounters LR-JB3 in the stomach and then explored the mechanisms whereby LR-JB3 modulates H. pylori infection through lipid raft-mediated pathways. Mucin Preparation and Treatments of H. pylori and AGS Cells Porcine stomach mucin, type II (Sigma-Aldrich, St. Louis, MO, USA), in PBS at pH 7 was sterilized by autoclaving (20). H. pylori harvested from TSA agar plates was resuspended at 5 × 10⁶ colony forming units (CFU) per mL with 50 mg/mL of mucin in PBS and incubated at 37°C and 5% CO₂ for 2 h. H. pylori was then collected for mRNA isolation. Mucin-treated H. pylori was also used to infect AGS cells in the following experiments. The autoclaved mucin in PBS was centrifuged at 10,000 × g. Mucins were collected and resuspended in equal volumes of RPMI medium; 50 mg/mL of mucin in RPMI medium was then added to the culture medium of AGS cells in the control group. Reverse Transcription Quantitative Polymerase Chain Reaction Analysis Total mRNA was isolated using the Total RNA Miniprep Purification Kit (GMBiolab, Taichung, Taiwan), and reverse transcription was performed using the MMLV Reverse Transcription Kit (Protech Technology Enterprise, Taipei, Taiwan) according to the manufacturers' instructions. All oligonucleotide primers used in this study were synthesized by Mission Biotech, Taipei, Taiwan. The RNA sample was incubated with random primers at 65°C for 5 min and then chilled on ice to open the secondary structure of GC-rich RNA templates. Each sample was then incubated with a mixture of 5× Reaction Buffer, dNTP pre-mix, and MMLV reverse transcriptase. Reverse transcription was performed under the following conditions: 5 min at 65°C, 10 min at 25°C, 60 min at 42°C, and a final 10 min at 72°C. qPCR was performed in 96-well optical plates (Applied Biosystems, Waltham, MA, USA). Targeted cDNA dilutions were mixed with appropriate primers (Table 1) and SYBR Green Master Mix (Thermo Fisher Scientific, Waltham, MA, USA). The qPCR program was: initial denaturation at 95°C for 10 min, followed by amplification for 40 cycles with denaturation at 95°C for 10 sec, annealing at 50°C for 20 sec, and extension at 72°C for 20 sec. The melting curve analysis was: 95°C for 5 sec, 65°C for 1 min, and then an increase to 95°C at a 0.08°C/sec ramp rate. Each assay was run on a QuantStudio 3 qPCR system (Applied Biosystems, Waltham, MA, USA), and the fold-changes in expression were derived using the comparative ΔΔCt method (21). The 16S rRNA of H. pylori served as an internal control for sample loading and mRNA integrity (22). Results were calculated using the mean of triplicate readings.
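To make the fold-change computation concrete, the following is a minimal Python sketch of the comparative ΔΔCt calculation described above; the Ct values are invented for illustration, while the 16S rRNA reference gene and the non-mucin control condition follow the text.

```python
import statistics

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative DDCt fold-change (2^-DDCt) of a target gene,
    normalized to the 16S rRNA reference and a control condition."""
    d_ct_sample = ct_target - ct_ref            # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # compare sample to control condition
    return 2 ** (-dd_ct)

# Hypothetical triplicate Ct readings for sabA in mucin- vs PBS-treated H. pylori
ct = {
    "sabA_mucin": [22.1, 22.3, 22.0], "16S_mucin": [15.0, 14.9, 15.1],
    "sabA_pbs":   [27.8, 27.6, 27.9], "16S_pbs":   [15.0, 14.8, 15.1],
}
fc = fold_change(
    statistics.mean(ct["sabA_mucin"]), statistics.mean(ct["16S_mucin"]),
    statistics.mean(ct["sabA_pbs"]),   statistics.mean(ct["16S_pbs"]),
)
print(f"sabA fold-change vs non-mucin group: {fc:.1f}x")  # about 51x with these invented values
```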
Adhesion and Invasion Assays The adhesion and invasion abilities were defined as the numbers of H. pylori adhering to and invading the AGS cells, respectively. AGS cells were seeded into 24-well plates at 5 × 10⁴ cells/well in antibiotic-free RPMI medium with 10% FBS at 37°C and 5% CO₂ for 18 h. H. pylori treated with mucin as previously described, or with PBS, for 2 h was then co-incubated with mucin-treated AGS cells at a MOI of 50 at 37°C and 5% CO₂ for 6 h. Cell culture supernatants were removed by centrifugation at 1,500 × g for 5 min at room temperature. The cells were then washed with PBS twice, and osmotic lysis was performed to calculate the total quantity of bacteria remaining, which included adhering and invading H. pylori. For this purpose, sterile water was added to the infected cells following washing, and the resulting cell lysates were then resuspended in PBS and plated using serial dilutions on TSA supplemented with 5% sheep blood. These plates were cultured with 100 µL from each dilution at 37°C for 48 h. Bacterial cell numbers were determined by manual colony counting. To determine the number of viable intracellular bacteria, the standard gentamicin assay was applied (23). The same batch of infected cells was washed three times in PBS and incubated with 100 µg/mL of the membrane-impermeable antibiotic gentamicin (Sigma-Aldrich; Merck Millipore, Darmstadt, Germany) at 37°C and 5% CO₂ for 1.5 h to remove extracellular bacteria (23). Bacterial cell numbers were then determined by manual colony counting. The adherent bacteria number was calculated by deducting the invading bacteria number from the total sum. The adhesion and invasion activities of H. pylori were determined as the mean of triplicate readings for each treatment. AGS cells incubated with 50 mg/mL of mucin in RPMI medium at 37°C and 5% CO₂ for 6 h were assigned as the control group, whereas AGS cells infected by PBS-pretreated and mucin-pretreated H. pylori were assigned as the non-mucin group and the mucin treatment group, respectively. The non-mucin group was used to establish 100% adhesion and invasion. The cell-free supernatants (CFSs) were used in the detection of interleukin (IL)-8. Enzyme-Linked Immunosorbent Assay of IL-8 Levels Detection of IL-8 in the supernatants of H. pylori-infected human AGS cells was conducted using human IL-8 ELISA Ready-SET-Go! kits (eBioscience, Inc., San Diego, CA, USA) according to the manufacturer's instructions. Each sample was analyzed individually. Results were calculated as the mean of triplicate readings.
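As a sketch of the colony-count bookkeeping just described (all counts and baselines are invented): invading bacteria are the gentamicin-protected fraction, adherent bacteria are the total cell-associated CFU minus the invading CFU, and both are expressed relative to the non-mucin group, which is set to 100%.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Back-calculate CFU/mL from a countable plate (100 uL plated per dilution)."""
    return colonies * dilution_factor / plated_volume_ml

# Invented counts: total cell-associated bacteria (osmotic-lysis plates)
# and intracellular bacteria (gentamicin-protection plates)
total_cfu    = cfu_per_ml(colonies=210, dilution_factor=1e3)  # 2.1e6 CFU/mL
invading_cfu = cfu_per_ml(colonies=150, dilution_factor=1e2)  # 1.5e5 CFU/mL
adherent_cfu = total_cfu - invading_cfu                       # adherent = total - invading

# The non-mucin group establishes the 100% baseline, as in the text (invented values)
baseline_adherent, baseline_invading = 1.0e6, 0.8e5
print(f"adhesion: {100 * adherent_cfu / baseline_adherent:.0f}% of non-mucin group")
print(f"invasion: {100 * invading_cfu / baseline_invading:.0f}% of non-mucin group")
```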
Pretreatment of H. pylori or AGS Cells Using CFS for Co-Cultivation CFSs were collected from cultures grown with 5 × 10⁴ cells/mL of AGS cells, 5 × 10⁶ CFU/mL of H. pylori, 1.25 × 10⁶ CFU/mL of LR-JB3, and 5 × 10⁶ CFU/mL of LR-JB3 for 2 h and were referred to as AGS, HP-100, JB3-25, and JB3-100, respectively. CFSs were mixed with 100 mg/mL of porcine stomach mucin at a 1:1 ratio for the pretreatment study. For the H. pylori pretreatment group, CFSs with mucin were co-incubated with H. pylori at 37°C for 2 h. The pretreated H. pylori was collected by centrifugation at 10,000 × g for 1 min at room temperature and then resuspended in RPMI medium. These H. pylori at a MOI of 50 were used to infect AGS cells in antibiotic-free RPMI medium supplemented with 10% FBS at 37°C and 5% CO₂. For the AGS cell pretreatment group, the cells were co-incubated with CFSs with mucin at 37°C and 5% CO₂ for 2 h and then infected by mucin-pretreated H. pylori at a MOI of 50. After 6 h of incubation, the culture supernatants of both pretreatment groups were used for the detection of IL-8 levels via ELISA. H. pylori under co-culturing conditions were assessed via adhesion and invasion assays. Total mRNA was isolated for analysis of virulence gene expression in H. pylori. AGS cells co-incubated with 50 mg/mL of mucin in RPMI medium at 37°C and 5% CO₂ for 6 h were assigned as the control group. AGS cells infected by mucin-pretreated H. pylori were referred to as the infection group. Results were calculated as the mean of triplicate readings. Preparation of Cell Extracts and Western Blot Analysis Cells or bacteria were lysed with ice-cold RIPA buffer (Genestar Biotechnology, Kaohsiung, Taiwan). Protein concentration was determined using the Bradford method (Bio-Rad, Hercules, CA, USA). Protein samples (30-40 µg) were separated via SDS-PAGE using a Hoefer mini VE system (Amersham Biosciences, Piscataway, NJ, USA). Proteins were transferred to an Immobilon-E polyvinylidene fluoride membrane (Merck Millipore, Carrigtwohill, County Cork, Ireland) according to the manufacturer's instructions. Following the transfer, the membrane was washed with Tris-Buffered Saline (TBS) and blocked for 1 h at 37°C with 5% fat-free milk in TBS with 0.1% Tween-20 (TBST). Diluted primary antibodies were added and co-incubated with the membrane at 4°C overnight. Blots were then incubated with peroxidase-conjugated secondary antibodies (horseradish peroxidase-conjugated goat anti-mouse IgG or goat anti-rabbit IgG). Following removal of the secondary antibody, blots were washed with TBST and developed using the ECL western blotting system (Advansta, San Jose, CA, USA). Immunoblot densities were quantified using the Hansor Luminescence Imaging System LIS02 (Han-Shuo Life Technology, Taichung, Taiwan). Statistical Analysis Data were analyzed using SAS 9.4 software (SAS, Inc., Cary, NC, USA). Statistical significance between two groups was assessed using Student's t-test, with Dunnett's test used for comparisons against a shared control. Results are presented as the mean ± standard error of the mean; p < 0.05 was considered to indicate statistical significance.
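A minimal sketch of the two statistical procedures named above, using SciPy rather than SAS (scipy.stats.dunnett requires SciPy >= 1.11); the triplicate readings are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented triplicate readings (e.g., IL-8 in pg/mL) for a control and two treatments
control = np.array([310.0, 295.0, 322.0])
jb3_25  = np.array([180.0, 172.0, 195.0])
jb3_100 = np.array([240.0, 231.0, 252.0])

# Two-group comparison: Student's t-test (equal variances assumed)
t, p = stats.ttest_ind(jb3_25, control, equal_var=True)
print(f"JB3-25 vs control: t = {t:.2f}, p = {p:.4f}")

# Several treatments against one shared control: Dunnett's test
res = stats.dunnett(jb3_25, jb3_100, control=control)
for name, pval in zip(["JB3-25", "JB3-100"], res.pvalue):
    print(f"{name} vs control (Dunnett): p = {pval:.4f}")

# Report as mean +/- standard error of the mean, as in the paper
for name, x in [("control", control), ("JB3-25", jb3_25), ("JB3-100", jb3_100)]:
    print(f"{name}: {x.mean():.1f} +/- {stats.sem(x):.1f}")
```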
Effects of Mucin on H. pylori Virulence Gene Expression After entering the lumen of the stomach, H. pylori moves through the mucus layer to increase its access to the gastric epithelial cells (24). Therefore, we investigated the effects of mucin on H. pylori virulence gene expression in this study. The mRNA expression of sabA and babA increased nearly 50-fold compared with that of the non-mucin group (Figure 1A), while the expression of cagL, cagA, vacA, and fucT was upregulated 4.0- to 17.7-fold. Furthermore, mucin-pretreated H. pylori showed stronger adherence and invasion abilities toward AGS cells than the untreated bacteria (Figure 1B). These results indicated that mucin enhanced the virulence of H. pylori. Effects of CFSs With Mucin on H. pylori Virulence Gene Expression The inhibitory effect of LR-JB3-derived CFSs at a MOI of 25 (JB3-25) or 100 (JB3-100) on H. pylori infecting AGS cells through the secretion of unknown cues has been previously demonstrated, including suppression of the expression of sabA, vacA, and fucT in H. pylori, and of the H. pylori-induced Lex antigen levels in AGS cells (8). Therefore, to study the effects of mucin on this phenomenon, H. pylori or AGS cells were treated using different CFSs with mucin before co-cultivation. Pretreating H. pylori using JB3-25 with mucin suppressed expression of sabA, vacA, and fucT mRNAs (Figure 2A), while expression of vacA mRNA was further decreased by pretreatment with JB3-100 with mucin. The adhesion and invasion abilities of H. pylori toward AGS cells, as well as H. pylori-induced IL-8 levels, were also attenuated by pretreating H. pylori or AGS cells using JB3-25 with mucin (Figures 2B, D). Furthermore, Lex antigen expression in H. pylori-infected AGS cells was decreased in both pretreatment groups using JB3-25 with mucin (Figures 2C, E). This indicated that the inhibitory effects of JB3-25 remained in the presence of mucin, which significantly increased the virulence of H. pylori. Interestingly, pretreating H. pylori using JB3-25 with mucin was able to affect the Lex antigen expression in AGS cells. The Effects of CFSs in Mucin on H. pylori-Induced Lipid Raft Clustering of AGS Cells To observe the formation of lipid rafts on the H. pylori-infected AGS cells, GM1 was used as a marker to locate lipid rafts (25). The Lex antigen clustered in the lipid rafts after H. pylori infection (Figures 3A, B). In both pretreatment groups, lipid raft clustering as well as Lex antigen co-localization were reduced by JB3-25 with mucin. Mucin treatment also increased hp0499 mRNA expression by 11.9-fold compared with the non-mucin group (Figure 1A), whereas pretreatment using JB3-25 with mucin suppressed hp0499 mRNA expression to the same degree as that of fucT and sabA (Figure 2A). CagA is a major virulence factor of H. pylori that is delivered into host cells via the type IV secretion system (T4SS). CagL is a pilus component of the T4SS and interacts with α5β1 integrin to trigger the translocation of CagA into host cells (10,11). Translocated CagA is then phosphorylated and activates nuclear factor-kappa B (NF-κB), leading to the expression of the proinflammatory cytokine IL-8 (26,27). The expression of both cagA and cagL in H. pylori was induced by mucin but was not affected by JB3-25 with mucin (Figures 1A, 2A). However, the level of H. pylori-induced IL-8 in AGS cells was suppressed by mucin pretreated with JB3-25 in both pretreatment groups (Figure 2B). Therefore, we then studied the association of α5β1 integrin with lipid rafts. After pretreating H. pylori or AGS cells using JB3-25 with mucin, both lipid raft clustering and α5β1 integrin co-localization were suppressed (Figures 4A, B, D, E). The expression of α5β1 integrin (Figures 4C, F) along with the amounts of translocated CagA and phosphorylated CagA (Figures 4C, F) were all reduced in both pretreatment groups. VacA is a multifunctional toxin that causes vacuolation, apoptosis, and autophagy and that also regulates cell adhesion and inflammatory responses (28). VacA has been demonstrated to be internalized into host cells through an interaction with the fatty acid chains of phospholipids, glycosphingolipids, and sphingolipids enriched in lipid raft domains (29)(30)(31), which triggers the p38/ATF-2-mediated signaling pathway and induces inflammatory responses (32). Here, VacA was co-localized within the lipid rafts after H. pylori infection (Figures 5A, C).
In the H. pylori pretreatment group, pretreatment using either JB3-25 or JB3-100 with mucin reduced the amount of VacA localizing in the lipid raft domain, whereas pretreatment using JB3-100 with mucin did not affect the clustering of lipid rafts (Figure 5A). The levels of p-p38, p-ATF2, and Cox-2 triggered by translocated VacA in H. pylori-infected AGS cells were also decreased in the same pattern (Figure 5B). However, pretreating AGS cells using JB3-100 with mucin did not suppress VacA delivery into the AGS cells, and the levels of p-p38, p-ATF2, and Cox-2 proteins were therefore maintained compared with those in the infection group (Figures 5C, D). DISCUSSION The current gold standard in treating H. pylori infection is triple therapy combining a proton pump inhibitor and two antibiotics (33). However, antibiotic therapy has limitations, including the development of antibiotic-resistant strains (34) and disruption of the balance of the intestinal microflora, which can lead to gastrointestinal diseases (35). Considering these side effects of antibiotic therapy, a new therapeutic strategy for H. pylori infection is urgently required. We have previously shown that LR-JB3 has potential in attenuating H. pylori infection in vivo and in vitro (6)(7)(8). Therefore, in this study, we further investigated a number of the underlying mechanisms. Mucus plays a key role as a physical barrier that protects epithelial cells from the acidic gastric juice of the stomach and also from the entrance of pathogens (36). Studies have indicated that mucin promotes the expression of virulence genes in Pseudomonas aeruginosa and Clostridium septicum (37,38); however, most cell-based models do not include mucus. Furthermore, lactobacilli have been shown to colonize the mucus layer of the stomach, where H. pylori expresses urease to reduce mucus viscosity and facilitate movement toward the gastric epithelial cells (19,39). [Figure 1 caption: H. pylori (5 × 10⁶ CFU/mL) was treated with PBS (non-mucin group) or 50 mg/mL mucin (pH 7) at 37°C and 5% CO₂ for 2 h (mucin group). After cultivation, total mRNAs were isolated and assessed via RT-qPCR to measure H. pylori virulence gene expression. PBS- or mucin-pretreated H. pylori at a MOI of 50 were co-cultivated with AGS cells at 37°C and 5% CO₂ for 6 h. AGS cells incubated with 50 mg/mL of mucin in RPMI medium for 6 h were assigned as the control group. The number of H. pylori adhering to and invading AGS cells was calculated by colony counting, and IL-8 levels in the supernatant were measured by ELISA. *Indicates significant differences compared with the non-mucin group, p < 0.05.] The mucus layer is the location where LR-JB3 and H. pylori first encounter each other in the stomach. Therefore, mucins were used in our cell model to offer an alternative view of the interaction between LR-JB3 and H. pylori. In this study, the presence of mucin dramatically increased the expression of sabA and babA in H. pylori. These genes both encode outer membrane proteins that are involved in the adherence of H. pylori to highly glycosylated mucins via binding to the Lewis b (Leb) antigen and sialyl-Lewis x (SLex) antigen, respectively (40). FucT is an enzyme involved in Lex biosynthesis of the lipopolysaccharide (LPS) O-antigen of H. pylori (2), and fucT mRNA expression was also significantly increased by mucin. Thus, our data suggest that mucin treatment facilitated H. pylori adherence to gastric epithelial cells in a Lewis antigen-dependent manner.
[Figure 2 caption: CFSs were collected from AGS cells at 5 × 10⁴ cells/mL, H. pylori at 5 × 10⁶ CFU/mL, LR-JB3 at 1.25 × 10⁶ CFU/mL, and LR-JB3 at 5 × 10⁶ CFU/mL grown in RPMI medium, referred to as AGS, HP-100, JB3-25, and JB3-100, respectively. CFSs were mixed with 100 mg/mL of porcine stomach mucin at a 1:1 ratio for pretreating H. pylori or AGS cells. After pretreatment, H. pylori and AGS cells were co-cultivated at 37°C and 5% CO₂ for 6 h. AGS cells incubated with 50 mg/mL of mucin in RPMI medium for 6 h were assigned as the control group; AGS cells infected by mucin-pretreated H. pylori were assigned to the infection group. The number of H. pylori adhering to and invading AGS cells was calculated by colony counting, and IL-8 levels in the supernatant were measured by ELISA. Total proteins isolated from cell lysates were used for Western blot analysis. *Indicates statistically significant differences compared with the infection + non-mucin group, p < 0.05.] Furthermore, the expression of two well-known virulence factors in H. pylori, cagA and vacA, was also increased. These virulence factors affect various host cell pathways such as the cell inflammatory response, vacuolation, and apoptosis (41). Thus, passing through the mucus layer of the stomach increased the ability of H. pylori to infect gastric epithelial cells. However, LR-JB3 inhibited H. pylori colonization, in agreement with our previous findings (8), even though the virulence of H. pylori was boosted by mucin. Interestingly, infection with JB3-25-pretreated H. pylori could also suppress the expression of infection-induced α5β1 integrin and Lex antigen in AGS cells. H. pylori infection is known to trigger the expression of α5β1 integrin to enhance CagA translocation, thereby promoting gastric pathogenesis (42). Furthermore, H. pylori infection has been shown to stimulate the CagA-dependent expression of β3 GlcNAc T5 (β3GnT5), a GlcNAc transferase essential for the expression of Lewis antigens on the epithelial cell surface (43). [Figure 3 caption (partial): lipid raft clustering and Lex antigen (green) gathering in the raft domain in AGS cells after 6 h of co-cultivation; CFS groups and treatments as described for Figure 2. Samples were imaged at a magnification of 40×.] Therefore, JB3-25 may regulate the cellular responses of AGS cells through modulating the virulence of H. pylori. The unknown cue in JB3-25 suppressed the ability of H. pylori to induce the expression of α5β1 integrin in AGS cells, sequentially reduced the amount of translocated CagA, and then downregulated the expression of Lewis antigens in AGS cells. The α5β1 integrin and Lex antigen are gathered in the raft domain to promote the adhesion of H. pylori to epithelial cells and initiate pathogenicity (9). The lipid raft plays an important role in the process of H. pylori infection.
Pretreating AGS cells with JB3-25 reduced lipid raft clustering, and pretreating H. pylori also decreased its expression of CGAT, which participates in the lipid raft formation of the host cells. An inhibitor of CGAT, amiodarone, could prevent H. pylori from adhering to AGS cells and effectively suppressed the translocation of CagA into AGS cells (9). Therefore, the unknown cue in JB3-25 may contain CGAT inhibitor-like molecules that interfere with the gathering of the lipid raft, further attenuating H. pylori infection. In our previous study, the levels of 15 amino acids were decreased in H. pylori-infected mice (7). Among these, phenylalanine and proline are able to reduce cellular cholesterol levels through repressing the ABCA1 gene that mediates cholesterol synthesis (44). [Figure 4 caption (partial): expression of α5 integrin, β1 integrin, translocated CagA, and phosphorylated CagA in AGS cells after 6 h of co-cultivation; CFS groups and treatments as described for Figure 2. Samples were imaged at a magnification of 40×.] LR-JB3 treatments could restore plasma levels of phenylalanine and proline in infected mice. A cholesterol synthesis inhibitor, simvastatin, could also reduce CagA translocation via disrupting lipid raft clustering (13). β-cyclodextrin has been reported as an agent that disrupts cholesterol- and sphingolipid-rich microdomains to facilitate the disassembly of lipid rafts, leading to reduced VacA internalization into HeLa cells (45). Thus, the unknown cue in JB3-25 may disrupt the formation of lipid rafts through regulating the cholesterol levels in host cells. In contrast to JB3-25, JB3-100 specifically reduced the expression of the vacA gene. This result is consistent with our previous finding (8). Here, mucin treatment increased vacA gene expression by 13-fold compared with that in the non-mucin group, although the inhibitory effect of LR-JB3 on vacA expression was still maintained. JB3-100 pretreatment did not affect infection-induced lipid raft formation or the expression of α5β1 integrin and Lex antigen in pretreated AGS cells. Thus, JB3-100 only acted on H. pylori and not on host cells. The major difference between JB3-25 and JB3-100 is the density of LR-JB3 in the cultures. This finding implies that bacterial density plays an important role in producing metabolites that interact with both H. pylori and AGS cells. A summary of the possible pathways of CFSs derived from LR-JB3 against H. pylori is shown in Figure 6. Briefly, H. pylori infection triggers the formation of lipid rafts and the recruitment of the Lex antigen and α5β1 integrin to the lipid raft domains to facilitate the adherence of H. pylori to epithelial cells; CGAT is also secreted by H. pylori and is delivered to epithelial cells to enhance lipid raft clustering. H. pylori expresses SabA and FucT to increase Lewis antigen-dependent adherence. CagL, the adhesion subunit of the T4SS, binds to α5β1 integrin for T4SS-dependent CagA delivery, causing the expression of IL-8.
The T4SS also enhances the adherence of H. pylori to AGS cells. Furthermore, the translocated VacA is also gathered in the lipid raft and can stimulate the p38/ATF-2-mediated signaling pathway to induce inflammatory responses. The unknown cue derived from JB3-25 (blue triangle) can interfere with lipid raft formation, reduce CGAT expression in H. pylori, and suppress the expression of the Lex antigen and α5β1 integrin in AGS cells. Therefore, Lewis antigen-dependent adherence, T4SS-mediated cell contact, and lipid raft-mediated entry of VacA are all attenuated. [Figure 5 caption (partial): effects of pretreatment using CFSs with mucin on (A, C) lipid raft clustering (red) and intracellular VacA (green) gathering in the raft domain, and (B, D) the expression of p38, phosphorylated p38, ATF-2, phosphorylated ATF-2, and Cox-2 in AGS cells after 6 h of co-cultivation; CFS groups and treatments as described for Figure 2. Samples were imaged at a magnification of 40×.] Furthermore, another unknown cue (yellow triangle) from JB3-25 and JB3-100 inhibits the internalization of VacA, resulting in a reduced Cox-2 level mediated by the p38/ATF-2 signaling pathway. The reduced intracellular levels of VacA are caused by suppressed expression in H. pylori following LR-JB3 treatment. The mechanism by which unknown cues from LR-JB3 interfere with lipid raft clustering on H. pylori-infected AGS cells requires further investigation. To the best of our knowledge, this is the first study to show that L. rhamnosus can affect H. pylori infection through mediating lipid raft formation of the host cells. The unknown cues secreted from LR-JB3 are valuable not only for treating H. pylori infection but also for treating diseases mediated by lipid raft signaling, such as cancers and neurodegenerative and aging-associated diseases (46). DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author. AUTHOR CONTRIBUTIONS Y-MH designed this study. ADD and C-HS performed the experiments; Y-MH and ADD wrote the paper. All authors approved this final manuscript. All authors have read and agreed to the published version of the manuscript.
Education–Occupation Mismatch and Its Wage Penalties in Informal Employment in Thailand This study examines the incidence of vertical mismatch among formal and informal workers in Thailand. Using the 2011, 2013, and 2015 Thailand Household Socio-economic Surveys, the study analyzes the relationship between vertical mismatch and wage penalties and premiums across four types of workers: formal government, formal private firm, informal private firm, and informal own-account workers. The incidence of overeducation is modest among the oldest cohort (8.7%) but prevalent among the youngest cohort (29.3%). Government employees face the highest overeducation wage penalties (28.2%) compared to matched workers, while in private firms, informal workers have consistently higher overeducation wage penalties than formal workers. Educated young workers are increasingly absorbed into low-skill informal work in private firms and face large overeducation wage penalties. The inability of many young workers to capitalize on their educational investments in Thailand's formal labor market is a concern for future education and employment policy development in Thailand. I. Introduction Over the past several decades, developing economies have emphasized the expansion of education and increasing educational attainment for their citizens as a means to achieve economic development. Despite rapidly increasing educational attainment, subsequent skilled job growth has often lagged behind. The combination of a rapidly growing educated workforce and slow growth of skilled employment can lead to a problem of "overeducation"-also called vertical mismatch-in developing countries, meaning that educated workers engage in employment that requires less formal education than they have acquired. The existence of widespread informal employment in developing economies adds a layer of concern against increasing rates of overeducation. According to the International Labour Organization (ILO), own-account workers working in informal enterprises, as well as employees whose "employment relationships [are], in law or in practice, not subject to national labor legislation, income taxation, social protection or entitlement to certain employment benefits," are considered informally employed (International Labour Organization 2003). Informal employment is generally associated with low skill and low pay. Thus, in a developing country context where formal employment growth is often slow, low-skill informal employment may need to absorb a growing educated workforce, potentially exacerbating overeducation wage penalties. This paper evaluates the incidence of vertical mismatch and associated wage penalties and premiums across formal and informal employment in Thailand. Thailand is a representative case of a developing country with a rapidly expanding educated workforce alongside high rates of informal employment and slow formal employment growth. Since the government's supply of education and compulsory education laws vary across different generations of workers, we analyze the incidence of vertical mismatch and associated wage penalties across age cohorts. In addition, this paper analyzes the relationship between vertical mismatch and wage penalties and premiums across four types of workers, including formal government, formal private firm, informal private firm, and informal own-account workers. 
We hypothesize that the incidence of overeducation will be higher among younger cohorts due to rapid increases in compulsory education relative to skilled job growth. Likewise, we expect the incidence of overeducation to be higher in informal employment because the average skill level for informal jobs is low while informal work has increasingly absorbed Thailand's young, educated workforce. We hypothesize that overeducation wage penalties are relatively high for formal government employees compared to other types of workers because of the rigid compensation system that sets pay based on occupation and experience but gives little additional reward for education completed beyond what is required for the position. By contrast, the private sector is more flexible in allowing overeducated employees to fully utilize their abilities and is more likely to pay based on capabilities (Dolton and Vignoles 2000). By extending the same logic, we expect workers in informal private firm employment, and particularly in informal own-account work, to have lower overeducation wage penalties than formal government workers. However, it is an empirical question whether formal or informal workers in private firms have higher overeducation wage penalties. The analysis uses individual-level data from the 2011, 2013, and 2015 rounds of the Thai Socio-economic Survey (SES). Consistent with our hypothesis, we find that the incidence of overeducation is most prevalent (29.3%) among the youngest cohort born between 1981 and 1990 and least prevalent (8.7%) among the oldest cohort born between 1951 and 1960. We also find high rates of overeducation in informal employment. This is particularly true among the youngest cohort, where 37.3% of informal workers in private firms and 50.1% of informal own-account workers are overeducated. Using an augmented Mincerian wage regression, we find that the overall overeducation wage penalty is 20.9%, while the undereducation wage premium is 10.2%. In general, we find that overeducation wage penalties are higher in older cohorts, suggesting that these penalties become larger later in one's career. The penalties and premiums are similar across men and women. As expected, wage penalties for government employees are relatively high at 28.2%, while the lowest penalties belong to informal own-account workers at 3.9%. As for employees in private firms, informal workers have consistently higher overeducation wage penalties than formal workers across all age cohorts. Educated young workers are increasingly absorbed into low-skill informal work in private firms and face large overeducation wage penalties. The inability of many young workers to capitalize on their educational investments in Thailand's formal labor market is a concern for future education and employment policy development in Thailand. This paper is organized as follows. Section II provides a background on Thailand's education policies since the 1970s, its rising educational attainment, and the growth of its formal workforce. Section III gives a brief review of the literature on measuring overeducation and its wage penalties. This is followed by a description of the data used in the analysis in section IV and the methodology in section V. Section VI presents the empirical results, followed by a discussion and conclusion in section VII. II.
Thailand's Rising Educational Attainment, Structural Change, and Formalization of Work As is the case with many developing countries, Thailand has prioritized the expansion of education as a means to achieve economic development. Since the 1970s, Thailand has increased compulsory levels of schooling from 4 years to 9 years and initiated a large expansion of secondary and tertiary education. 1 The 1980 National Primary Education Act mandated that all villages should be equipped with schools. One of the major changes in the Thai education system was the increase in government-mandated compulsory education from 4 years to 6 years in the 1970s and from 6 years to 9 years implemented in 2002. Consequently, the share of workers who have completed upper secondary school went up from 17% in 1990 to 25% over the following 2 decades (Aemkulwat 2010). Over the same period, the number of workers with vocational qualifications increased from 1.8 million to 3 million (Aemkulwat 2010). Thailand also saw a significant increase in the number of educational institutions at all levels, especially at the secondary and tertiary levels. For example, the number of higher education institutions rose from a handful in 1970 to 185 institutions in 2014 (Paweenawat and Vechbanyongratana 2015). The expansion of schools combined with changes in the compulsory education laws led to a steady increase in primary, secondary, and tertiary gross enrollment rates from 1971 to 2013, as shown in Figure 1. Primary education enrollment became universal in the 1980s, while secondary enrollment increased from 18% to 82%, and tertiary education from 3% to 50%, since 1970. In the past, Thailand's economy was based primarily on agriculture. Thailand has undergone a significant economic transformation that started in the 1970s. It experienced a rapid demographic transition, encouraged investment to develop its manufacturing sector, and saw people move out of rural agriculture and into work in urban areas (Baker and Phongpaichit 2009). Following the world's oil crisis in 1973 and other external factors, Thailand shifted toward export-oriented manufacturing in the 1980s, increasing exports of primarily labor-intensive products by approximately 24% per year during 1984-1989 (Baker and Phongpaichit 2009). From the 1990s onward, the tourism and service sectors experienced growth in part due to the government's promotion of Thailand as a tourist destination (Kaosa-ard 2002). Figure 2 shows the contributions of each sector to total employment in Thailand between 1991 and 2018. Figure 2 demonstrates that the agriculture sector saw a rapid decline in its contribution to employment, dropping from 60% to 32% over the 28-year period. At the same time, employment in the service sector rose rapidly from 22% to 45%, while the share of workers in manufacturing continued to rise during this period, albeit more slowly, from 18% to 23%. Thailand's smallholder agricultural past means that most employment was traditionally considered informal. Since the 1990s, a significant number of workers moved from a work status of "unpaid family worker" to "employees of private companies" (Aemkulwat 2010). However, even though Thailand experienced a major transformation of its economy over the past 4 decades, the country largely did not experience concurrent formalization of employment. The Thai government defines formal workers as employees who are covered by employer-provided social insurance (such as the Civil Servants' Welfare Scheme or protection under the Social Security Act B.E.
2533 [1990]) and protection under the labor law. 2 The growth of formal private firm employment through the expansion of social security has been slow, but it has picked up in recent years. The number of private firm workers covered by Section 33 of the Social Security Act has grown from 8.6 million workers in 2008 to 10.8 million workers in 2017, which represents an increase from 23% to 29% of the total workforce. Despite efforts to expand formal employment, Thailand's informal workers continue to make significant contributions to the country's economy, with official figures putting the share of informal workers in the total workforce at 55% in 2018 (National Statistical Office 2018). Informal employment is not distributed evenly across all occupations. Figure 3 shows the distribution of formal workers (government workers and formal private firm employees) and informal workers (informal private firm employees and own-account workers) who receive cash remuneration across occupational categories based on the 1-digit 2008 International Standard Classification of Occupations (ISCO-08). Occupations requiring the highest levels of education and skill are located toward the left side of Figure 3, including managers, professionals, and technicians and associate professionals. These categories largely encompass civil servants and highly skilled workers in larger private firms, and thus workers employed in these occupations are generally formal. The occupational categories that require the least education and skills are located toward the right side of the figure, including craft and related trades workers, plant and machine operators and assemblers, and elementary occupations. Informal workers are disproportionately represented in the occupational groups located on the right side of the figure. The one exception is the high number of formal workers in occupation category 8-plant and machine operators and assemblers-which encompasses lower skilled factory work. The government's push to develop the manufacturing sector during the 1980s and 1990s attracted larger firms such that the government subsequently required them to register for tax (including employment tax) purposes, which explains why workers in occupation category 8 are largely formal. Despite the government's mandate that all firms hiring one or more workers must register their employees for social security, many smaller enterprises remain unregistered, often intentionally to avoid taxation and social security contributions. Own-account workers-who are informally employed by definition-generally work in lower skill occupational categories. In fact, approximately 95% of own-account workers are classified as working in occupation categories 5 through 9, making up a significant proportion of workers in these categories. Although one finds that informal workers are disproportionately represented in occupations requiring lower skill and education, it is important to note that within the Thai context, one finds both formal and informal workers often performing the same jobs. For example, according to the 2016 Thai Labor Force Survey Informal Supplement, informal workers engaged in food, beverages, textile, and wearing apparel manufacturing constituted 38%, 32%, 32%, and 47% of the workers in these manufacturing subcategories, respectively (Vechbanyongratana et al. 2021). III. 
Related Literature on Education-Occupation Mismatch With the growth in educated workforces around the world and the unintended consequences of vertical education-occupation mismatch, several empirical studies on the incidence and implications of a mismatch between attained and required levels of education have been published in recent years. One of the challenges in studying the wage impacts of vertical mismatch is how to quantify it. Hartog (2000) summarizes three possible options as follows:
i. Job analysis. This method follows systematic evaluation by professional job analysts, such as the Dictionary of Occupational Titles published by the United States (US) Department of Labor or recommendations of minimum required degrees by Thailand's Ministry of Labor (e.g., Paweenawat and Vechbanyongratana 2015).
ii. Worker self-assessment. Mismatch is directly evaluated by workers themselves. Surveys ask workers their opinion on the minimum education needed to perform their jobs (e.g., Duncan and Hoffman 1981, Sicherman 1991, Dolton and Vignoles 2000).
iii. Realized matches. This method was introduced by Verdugo and Verdugo (1989). That study used the mean education level plus 1 standard deviation to determine the required level of education needed to perform a job. This is then compared with the actual level of education attained by each worker, which determines whether a worker has the education that matches the required education for employment. Other studies apply this method but use a modal value instead of the mean (e.g., Mendes de Oliveira, Santos, and Kiker 2000). Our paper uses the modal method described here.
Duncan and Hoffman (1981) made significant contributions to empirically measuring the impact of overeducation on wages by introducing the overeducation, required education, and undereducation (ORU) model. In this model, overeducation or undereducation is determined by the difference between attained and required education. Earnings are regressed on required years of education, years of overeducation, and years of undereducation. Using the US 1976 Panel Study of Income Dynamics, Duncan and Hoffman (1981) find that 46% of individuals are perfectly matched, while 42% of workers have higher levels of education than required for their jobs. In addition, the results show that wages are determined mainly by the required education level, and the coefficient on surplus education (overeducation) is positive and significant. This method has been used by scholars in several country contexts to estimate wage impacts of vertical mismatch, including Dolton and Vignoles (2000) using British data; Hartog (2000) on the Netherlands, Portugal, Spain, the United Kingdom, and the US; and Johansson and Katz (2007) and Korpi and Tåhlin (2009) using Swedish data. All of these studies find that returns to required levels of schooling are higher than returns to surplus education, which is consistent with the original findings by Duncan and Hoffman (1981). Several studies regress the natural log of wages on a series of dummy variables that identify workers as overeducated, undereducated, or matched educated. The expected sign on the overeducation dummy variable is negative since it is expected that workers who are overeducated for their job would earn less than a matched-educated worker (excluded category) with the same amount of education. Verdugo and Verdugo (1989) pioneered this approach and found a 13% wage penalty among workers in the US.
A study using Australian data by Mavromaras et al. (2013) shows a 21.5% penalty among male workers aged 16-64 with a university degree or equivalent. Similarly, a study using data from the United Kingdom by McGuinness and Sloane (2011) estimates a 31% to 39% wage penalty among early career university graduates. There are two recent studies on overeducation wage penalties specific to Thailand. The first, by Paweenawat and Vechbanyongratana (2015), analyzes wage penalties among male university graduates. The average wage penalty was found to be 19%, but when stratified by cohort, younger workers were found to have higher overeducation wage penalties, which can be explained by an increasing supply of young university graduates and a dearth of commensurate jobs in the market. Another study by Pholphirul (2017) estimates both vertical and horizontal mismatch (i.e., a mismatch between job and field of study) using Thailand's 2008 Labor Force Survey. For vertical mismatches, the author uses the modal value method to determine education-occupation matches for each worker. The author finds that overeducated workers who completed compulsory lower secondary education or above face on average an 18.6% wage penalty. Despite the existence of recent studies on Thailand, no study, to date, has taken into consideration potential systematic differences in the incidence and wage impacts of undereducation and overeducation across formal and informal workers. This is important to consider since a significant proportion of workers in Thailand's economy, and in developing economies more generally, are in fact informally employed and not covered by relevant labor regulations. This paper adds to the literature by determining the incidence of undereducation and overeducation and estimating wage premiums and penalties associated with vertical education-occupation mismatch between formal and informal workers. Furthermore, this study considers the incidence of vertical mismatch and the associated penalties and premiums across four cohorts of workers who were exposed to different education policies and early career labor market opportunities in Thailand's rapidly changing economy. IV. Data This study uses the Thailand SES, a nationally representative household survey collected by the National Statistical Office, for the years 2011, 2013, and 2015 (National Statistical Office 2011, 2013, 2015). We define formal employees as government and private firm workers who are covered by the Civil Service Welfare Scheme, Section 33 under the Social Security Act (1990), or another employer-provided welfare program. 3 Informal workers are defined as those in private firm employment without employer-provided social welfare, as well as those engaged in own-account work. 4 (Those without employer-provided social security coverage are most likely registered for one of the voluntary social security schemes, Section 39 or 40; the coding does not impact the results.) The workers are coded into five education classifications that are harmonized with the ISCO-08 skill level classifications (International Labour Organization 2012). Table 1 shows the National Statistical Office's harmonization of Thai education levels with the ISCO-08 skill level classifications. The classification of overeducation, undereducation, and matched education for each individual is based on realized matches as suggested by Verdugo and Verdugo (1989) and Mendes de Oliveira, Santos, and Kiker (2000). Following Mendes de Oliveira, Santos, and Kiker (2000), the modal educational category (0-4) within each occupation is used to determine "required education." After finding the modal educational category within each ISCO-08 occupation code at the 3-digit level, each worker's education level is then compared to the modal education level for their occupation to determine whether the worker is overeducated, undereducated, or matched educated. 6 (If there is more than one modal value, the smaller value is selected. The estimations are not sensitive to the method of constructing the vertical mismatch dummy variables; using the median level of education in each occupational category yields qualitatively similar results to the modal method.) For example, if a worker completed an upper secondary diploma (category 3) but works in a job that primarily employs workers with primary education (category 1), this worker would be considered overeducated for their current job.
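A minimal pandas sketch of this realized-matches classification; the column names and records are hypothetical, but the logic (modal education category within each 3-digit ISCO-08 code, ties broken toward the smaller category, then a three-way comparison) follows the procedure described above.

```python
import pandas as pd

# Hypothetical worker records: education category (0-4) and 3-digit ISCO-08 code
df = pd.DataFrame({
    "worker_id": [1, 2, 3, 4, 5, 6],
    "educ_cat":  [3, 1, 1, 4, 2, 1],
    "isco3":     [522, 522, 522, 241, 241, 522],
})

def required_education(s: pd.Series) -> int:
    # pandas returns all modal values sorted ascending, so .iloc[0] implements
    # the tie-breaking rule of taking the smaller value
    return int(s.mode().iloc[0])

df["required_educ"] = df.groupby("isco3")["educ_cat"].transform(required_education)
df["match"] = "matched"
df.loc[df["educ_cat"] > df["required_educ"], "match"] = "overeducated"
df.loc[df["educ_cat"] < df["required_educ"], "match"] = "undereducated"
print(df)
```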
The incidence of undereducation and overeducation for the entire sample stands at 27.4% and 22%, respectively, but differs across birth cohorts and employment sectors, as illustrated in Figures 6 and 7. The proportion of undereducated workers has declined over successive birth cohorts for every work status, particularly for formal private firm employees, own-account workers, and informal private firm employees. This pattern is consistent with increasing educational attainment among the younger cohorts due to more compulsory education and increased opportunities to complete secondary and tertiary education. The proportion of overeducated formal government workers is similar across cohorts. However, the incidence of overeducated formal and informal private firm employees and own-account workers has increased over successive cohorts, which is consistent with increasing levels of education. Although the youngest cohort is the least likely to be engaged in own-account work, the incidence of overeducation among those in this group is high at 50%. Likewise, among the 30% of the youngest cohort employed informally by private firms, the incidence of overeducation is 37%. V. Methodology We use an augmented Mincerian wage regression model to estimate the overeducation wage penalties and undereducation wage premiums. We run an ordinary least squares model that includes dummy variables for overeducation and undereducation, with matched education as the excluded category: ln w_i = β_0 + δ_O OverEd_i + δ_U UnderEd_i + X_i′γ + ε_i. (1) The dependent variable, ln w_i, is the natural log of real monthly earnings; X_i is a vector of individual characteristics, including potential work experience (age − years of schooling − 6) and potential work experience squared, and dummy variables for level of education completed (primary, lower secondary, upper secondary, and tertiary), married, female, urban area, region (central, north, northeast, and south), and survey year. OverEd_i is a dummy variable indicating that an individual's educational attainment is greater than the modal value of education found in their occupation, and UnderEd_i is a dummy variable indicating that an individual's level of education is lower than the modal value for their occupation. We first run regression (1) using the pooled sample from 2011, 2013, and 2015, and then we run it separately by employment sector. We then repeat the analysis stratified by gender to see whether there are any gendered differences in overeducation wage penalties and undereducation wage premiums. The final analysis is stratified by birth cohort and employment sector to see if the overeducation wage penalties and undereducation wage premiums diverge for individuals facing different compulsory education policies, educational access, and early career labor markets. VI. Empirical Findings The empirical results for the baseline pooled regression and the regressions stratified by sector of employment are reported in Table 3. The average overeducation wage penalty and undereducation wage premium are 20.9% and 10.2%, respectively. The 20.9% wage penalty is comparable to the previous estimate of 18.6% in the study by Pholphirul (2017) using the 2008 Labor Force Survey. The overeducation wage penalties differ across employment sectors. The largest overeducation wage penalty is in the formal government sector at 28.2%. The high penalty may reflect the rigidity of the Thai civil service system, in which remuneration is strictly tied to occupation and experience.
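As an illustration of how regression (1) might be estimated, the following sketch applies Python's statsmodels formula interface to the classified data frame from the previous snippet. All column names (real_monthly_earnings, age, years_schooling, edu_level, married, female, urban, region, year) are hypothetical stand-ins for the SES variables, and the snippet is a minimal sketch under those assumptions rather than the authors' code:

import numpy as np
import statsmodels.formula.api as smf

df["ln_wage"] = np.log(df["real_monthly_earnings"])
df["exper"] = df["age"] - df["years_schooling"] - 6   # potential experience
df["exper2"] = df["exper"] ** 2

model = smf.ols(
    "ln_wage ~ overeducated + undereducated + exper + exper2"
    " + C(edu_level) + married + female + urban + C(region) + C(year)",
    data=df,
).fit(cov_type="HC1")   # heteroskedasticity-robust standard errors
print(model.summary())

Because the outcome is in logs, the implied overeducation wage penalty is approximately 100 × (exp(coefficient) − 1) percent, which is how dummy-variable coefficients of this magnitude are usually read.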
A government worker with high levels of education would be paid similarly to a government worker with lower academic credentials working in the same position. At 21.8%, informal private firm workers have higher overeducation wage penalties than formal private firm workers (17.9%). Interestingly, own-account workers have the lowest overeducation wage penalties at 3.9%. This may reflect the nature of own-account work, in which workers are their "own bosses," allowing them the flexibility to work according to their own productivity regardless of occupation. Table 3 indicates that on average, after controlling for a full set of covariates, women earn 19.2% less than men. The results stratified by employment sector show that the gender wage differentials are smaller within formal work (15.7%-17.5%) compared to informal work (22.1%-22.2%). Given that women appear to be at a wage disadvantage compared to men, it is of interest to know whether women and men experience different overeducation wage penalties and undereducation wage premiums. Table 4 reports the regression results stratified by gender. Despite the fact that women have a wage disadvantage when controlling for personal characteristics, women experience wage penalties and premiums similar to those of men. Overall, the wage penalty for men is 19.7% compared to 21.9% for women, while the undereducation wage premiums are 12.5% and 8.1% for men and women, respectively. The wage penalties are also similar across all four employment sectors. The similarities in overeducation wage penalties may be due in part to the fact that men and women in the Thai labor market have similar worker characteristics, including labor force participation and educational attainment. As mentioned previously, many of the oldest workers were required to complete only 4 years of compulsory schooling and entered the labor market when Thailand was just beginning its structural transformation and was still primarily an agricultural economy. In contrast, the youngest cohort in the sample was required to complete 6-9 years of compulsory education and had access to free education through secondary school and expanded tertiary education opportunities. Moreover, younger workers entered the job market in an economy that was much more diversified, with a broader range of occupations requiring various skill levels. Because the oldest and youngest workers faced very different education policies and labor market conditions, which resulted in lower incidences of undereducation and higher incidences of overeducation in younger cohorts, it is of interest to see whether older and younger workers face different undereducation wage premiums and overeducation wage penalties. Table 5 reports regression results across employment sectors and birth cohorts. Columns (1) and (2) in Table 5 show results across four birth cohorts in formal government and formal private firm employment, while columns (3) and (4) show the results for informal workers in private firms and own-account work. The results show that along with the decrease in the incidence of undereducation, the undereducation wage premium is lower for formally employed workers in younger cohorts. Similar to workers in formal employment, informally employed private firm workers and own-account workers generally have decreasing undereducation wage premiums across successive birth cohorts.
The youngest generation of workers, born in the 1980s, for whom undereducation is rare, has no undereducation wage premium, with the exception of a small premium in informal private firm work. Despite the increase in the incidence of overeducation over successive birth cohorts, the overeducation wage penalty is lower for younger workers in formal government employment, formal private firm employment, and informal private firm employment. Since the survey data used for the analysis were collected between 2011 and 2015, we observe wages for each of the cohorts at different points within their careers. The high overeducation wage penalties in the oldest cohort and relatively low wage penalties in the youngest cohort likely reflect different earnings trajectories for overeducated versus matched-educated workers. For example, a university graduate who spends their career in restaurant service (overeducated) will likely have a shallower earnings trajectory than a university graduate who works as an accountant (matched educated) throughout their career. This scenario would result in larger overeducation wage penalties later in one's career. For the youngest cohort of formal workers, the overeducation wage penalty is relatively modest at around 15%. However, the wage penalties within each cohort are higher for informally employed private firm workers than for formally employed private firm workers. This is an important observation considering that informal work in private firms continues to absorb a large number of younger workers (see Figure 5) who are more likely to be overeducated than in previous generations (see Figure 7). As for informal own-account work, there is no clear pattern across generations. Most own-account workers are employed in services and crafts and related trades (ISCO-08 occupational categories 5 and 7). Although the overeducation wage penalty is 14.5% among the oldest cohort born in the 1950s, the cohorts born in the 1960s and 1970s face no overeducation wage penalties. Although only 11% of the youngest cohort is employed as own-account workers, 50% are overeducated and face a wage penalty of 9.3%. We acknowledge that workers are not randomly assigned to be overeducated, matched educated, or undereducated for their jobs, which could bias the coefficient estimates. There are relevant unobservable factors, such as low ability or degree completion from low-quality institutions, that cannot be corrected for using the existing data, potentially leading to overestimated wage penalties for overeducated persons who in fact work at their correct level of productivity. Although we cannot directly solve the ability bias in this present study, previous work on overeducation wage penalties shows that even when taking into account unobserved individual heterogeneity, the negative impact of overeducation on wages generally does not disappear. For example, Korpi and Tåhlin (2009) employ a fixed-effects approach using panel data from Sweden. Their results suggest that even after accounting for unobservable personal characteristics, returns to years of education beyond what is required for the job are positive and significant, indicating that the ordinary least squares estimates are not merely capturing differences in unobserved ability. A study by Mavromaras et al. (2013) applies fixed effects and random effects models to panel data and finds that unobservable individual heterogeneity cannot explain all of the negative impact of overeducation and overskilling among working-age Australian men.
Papers by McGuinness and Bennett (2007) and Paweenawat and Vechbanyongratana (2015) use a quantile approach to show that overeducation occurs at all points along the wage and ability distribution, which suggests that overeducation is not synonymous with low ability in Northern Ireland and Thailand, respectively. Specifically in the case of Thailand, overeducated male university graduates born between 1966 and 1985 face large overeducation wage penalties at all points along the ability distribution, which is consistent with an imbalance between the number of university graduates and the jobs available in the economy (Paweenawat and Vechbanyongratana 2015). Results from previous related studies give us some level of confidence that our estimated coefficients on the undereducation and overeducation variables are not entirely driven by ability bias and do in fact capture, in part, the relationship between vertical education-occupation mismatch and wages in formal and informal employment. VII. Discussion and Conclusions Since the 1970s, Thailand has enacted a variety of policies to pursue economic development. These policies include increasing compulsory education from 4 years to 9 years, providing free education through upper secondary school, and expanding higher education opportunities. The government also worked to change the structure of the economy, transforming it from a largely informal agriculture-based economy to a formalized industrial and service-based economy. While the former has resulted in dramatic increases in the average educational attainment of the populace, the latter, while diversifying job opportunities, has failed to fully formalize work, leaving the majority of Thailand's workers still engaged in informal employment. This paper estimates the incidence of vertical education-occupation mismatch and its associated wage premiums and penalties across formal and informal employment over four cohorts of workers. It adds to the existing literature by considering the consequences of vertical mismatch in a developing country context where the labor force is largely informal. The paper also extends Pholphirul's (2017) earlier work on Thailand by going beyond the mean impact of vertical mismatch on wages, taking into consideration informality and generational differences in education and early career labor market conditions. Informal workers continue to make large contributions to the Thai economy; thus, understanding the interaction of vertical mismatch and its consequences within formal and informal employment is important for pinpointing potential inefficiencies in education and labor market policies and helping to develop potential solutions. This paper has shown that the Thai government's education and economic policies have led to an increase in the incidence of overeducation among younger cohorts of workers, which is especially pronounced among informal workers. This implies that employment opportunities in Thailand do not match its increasingly educated populace. Although the youngest cohort born between 1981 and 1990 is more likely to be formally employed than previous generations, 40% of this cohort is still absorbed into informal employment, of which 41% are classified as overeducated. Overeducated informal workers in private firms face the highest overeducation wage penalties within the youngest birth cohort. Dissonance between formal job development and government education policies is an issue that policy makers in developing economies need to heed.
Thailand's current approach to education, which encourages students to complete high levels of general education without the promise of formal employment commensurate with their educational qualifications, imposes costs on both individuals (i.e., time costs, wage penalties, and potentially forced entry into informal employment) and society (i.e., inefficient education spending and potential losses of tax revenues from unregistered employees). The government may want to consider better aligning its curriculum and degree offerings with formal job development. At present, the Thai government is focused on increasing high-skilled job opportunities. Thailand has introduced the "Thailand 4.0" policy, which is aimed at advancing the development of the country through innovation (Royal Thai Embassy 2018). As part of its strategy, the government has identified 10 target industries for development. 7 One of the government's current target industries, for example, is automobile manufacturing. The development of vocational education aimed at filling formal technical jobs within automobile manufacturing would (i) better target the amount of education an individual needs to complete, thus minimizing the time and monetary costs of education, and (ii) channel young workers into well-matched formal employment. If the government is successful in moving Thailand 4.0 forward and creating more high-skilled, formal employment that is commensurate with academic credentials, vertical education-occupation mismatch and its penalties would be expected to decline. Time will tell whether this or other government policies to develop more formal sector high-skill jobs will help alleviate the high incidence of informality among younger workers and allow them to earn at their potential. Finally, we acknowledge the limitations of the above analysis given the use of cross-sectional data. However, given the results from previous related research, particularly the research by Paweenawat and Vechbanyongratana (2015) that shows overeducation occurs across the entire ability distribution in Thailand, we believe our results are not entirely driven by ability bias. In the future, we hope to extend this work and better control for individual heterogeneity by using panel data. Future work will also include an analysis by level of education, particularly differences in penalties between vocational and general education.
2021-05-05T00:08:35.169Z
2021-03-22T00:00:00.000
{ "year": 2021, "sha1": "99c5c6abce618ab5d9563f71f8b93bd01ea33fd6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1162/adev_a_00160", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "518ef20177a13328af9586ba00dea33804f4f8dc", "s2fieldsofstudy": [ "Economics", "Education", "Sociology" ], "extfieldsofstudy": [ "Economics" ] }
258153763
pes2o/s2orc
v3-fos-license
The analysis of ramipril/ramiprilat concentration in human serum with liquid chromatography-tandem mass spectrometry – interpretation of high concentrations for the purposes of forensic toxicology Ramipril is a popular angiotensin-converting enzyme inhibitor applied in the treatment of hypertension. Its therapeutic effect is assessed on the basis of the concentration of the active metabolite ramiprilat. Information about toxic drug levels is missing in the literature. Therefore, the aim of this work was an indication of possible toxic ranges based on the analysis of real samples with high ramiprilat concentrations. For these purposes, an appropriate analytical LC-MS/MS method was developed, validated according to forensic guidelines, and applied in routine casework. Most real samples targeted for ramipril/ramiprilat were associated with the typical therapeutic drug range of 1–40 ng/mL described in the literature. However, higher drug levels with ramiprilat concentrations above 100 ng/mL could also be observed infrequently in cases of driving under the influence of drugs or attempted suicides. To the best of the author's knowledge, this is the first time antemortem ramipril and ramiprilat concentrations associated with driving under the influence of drugs and suicide attempts have been discussed from a forensic point of view. The collected data enabled an indication of a ramiprilat toxic concentration range from about 600 ng/mL to at least 3500 ng/mL. The toxic concentration range discussed can be applied in forensic practice as a reference for future cases. Introduction Ramipril is a well-known angiotensin-converting enzyme (ACE) inhibitor adapted for clinical application in the treatment of hypertension since the late 1980s [1]. Since the ramipril metabolite ramiprilat is responsible for the major activity of the parent drug, both drug concentrations are important and of interest in different issues of forensic toxicology, such as driving under the influence of drugs or postmortem cases. Blood concentrations of the drugs discussed were analysed in different studies. In detail, Mendes et al. reported an average ramipril/ramiprilat concentration of 13/9.8 ng/mL at 0.5/3.4 h after a single oral dose of 5 mg [2]. Based on Shionori et al., a single oral dose of 10 mg ramipril yielded an average ramipril/ramiprilat concentration of 18/4.7 ng/mL at 1.2/3.2 h [3]. Meyer et al. revealed that a dose of 20 mg ramipril resulted in an average ramipril/ramiprilat concentration of 52/34 ng/mL at 0.7/2.1 h [4]. The therapeutic effect of ramipril is estimated on the basis of the ramiprilat concentration, expected in the range of 1-40 ng/mL [5]. No toxic/lethal ranges have been defined for the ACE inhibitor discussed. In general, ramipril poisoning is rare. However, a monointoxication after an intended suicide with 20 ramipril tablets was described recently [6]. Additionally, a similar case with 100 tablets containing metoprolol and 20 tablets containing ramipril was described earlier by Wagner et al. [7]. No ramipril concentration values were published in the context of the described cases [6,7]. Furthermore, the postmortem concentration distribution was studied by Theofel et al. [8]. Ramiprilat concentrations determined in femoral blood from 33 cases were comparable to therapeutic values expected for antemortem samples.
Ramipril/ramiprilat analyses are predominantly performed with conventional liquid chromatography-tandem mass spectrometry (LC-MS/MS) analytical methods. In this context, different sample preparation strategies based on protein precipitation, liquid-liquid extraction or solid-phase extraction have been described in the literature [9-12]. Both the positive and negative electrospray ionisation (ESI) modes can be applied for these purposes. Based on the literature, it can be stated that there is not enough information about the toxic/lethal concentration range of ramiprilat. Therefore, the aim of this work was (a) the development and validation of a simple protein precipitation-based LC-MS/MS analytical method for the analysis of ramipril/ramiprilat in human serum and (b) the analysis of real samples with an indication of possible toxic ranges. Chemicals and equipment Chemicals and solvents used for the analyses were of analytical/LC-MS grade. Blank human serum was purchased from the blood bank of the Hannover Medical School, and ramipril, ramipril-D3 and ramiprilat from Toronto Research Chemicals. Other materials used were purchased from J.T. Baker (methanol), Honeywell Riedel-de Haën (water) and Merck (acetic acid, ammonium acetate). Data acquisition was performed with a Sciex QTRAP 4500 LC-MS/MS system coupled with an ExionLC UHPLC system. Chromatographic separation was performed with an analytical column purchased from Phenomenex (Luna 5 μm C18(2), 100 Å, 150 mm × 2 mm). The analytical instruments were operated with the Analyst 1.7.2 software. Additionally, all calculations necessary for method validation were performed with the Valistat 2.0 software from Arvecon GmbH. Conditions Two mobile phases with 10 mM ammonium acetate and 0.1% acetic acid (v/v) were applied for chromatographic separation and defined as A (H2O/methanol = 95/5, v/v) and B (H2O/methanol = 3/97, v/v). The UHPLC was operated with a total flow rate of 0.5 mL/min, an oven temperature of 60 °C, a sample cooler temperature of 15 °C and the following elution programme: (1) starting with 20% B, (2) ramping to 100% B between 0.00 and 2.50 min, (3) holding 100% B from 2.50 to 4.80 min, (4) reducing to 20% B between 4.80 and 5.00 min and finally (5) holding 20% B from 5.00 to 8.00 min. After 8 min, the system was equilibrated for the next run. The injection volume was 25 µL. The LC-MS/MS system was operated in the multiple reaction monitoring (MRM) scan mode with the following source/gas parameters for positive electrospray ionisation (ESI): curtain gas (N2), 35 psi; collision gas, medium; ion spray voltage, 5500 V; temperature, 500 °C; ion source gas 1 (N2), 40 psi; ion source gas 2 (N2), 50 psi. Compound-dependent parameters are summarised in Table 1. Extraction procedure Protein precipitation was performed in a microcentrifuge tube. For these purposes, 10 µL of the internal standard (a methanolic solution with 100 ng/mL ramipril-D3) was mixed with 100 µL human serum and 300 µL methanol. After a shaking time of 15 min, the samples were centrifuged and the supernatant evaporated to dryness. The extract was reconstituted in a mobile phase mixture consisting of 80% A/20% B (100 µL). Finally, extract filtration was followed by injection into the LC-MS/MS system.
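For illustration, the stepwise elution programme above can be expressed as a simple piecewise-linear function of run time. The helper below is a hypothetical convenience function (not part of the instrument software) that returns the percentage of mobile phase B at any point of the 8 min run:

def percent_b(t: float) -> float:
    # breakpoints of the stated programme: (time in min, % mobile phase B)
    points = [(0.0, 20.0), (2.5, 100.0), (4.8, 100.0), (5.0, 20.0), (8.0, 20.0)]
    for (t0, b0), (t1, b1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            # linear ramp (or hold) between consecutive breakpoints
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-8 min programme")

For example, percent_b(1.25) evaluates to 60.0, the midpoint of the initial ramp from 20% to 100% B.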
Method validation The analytical method was validated according to the guidelines of the German Society of Toxicological and Forensic Chemistry (GTFCh) [13]. Thus, the linearity of the calibration range was investigated by the analysis of the following calibration points: 1, 5, 10, 25, 50, 75 and 100 ng/mL for ramipril and ramiprilat. Each level was analysed six times. Additionally, the limits of detection (LOD) and quantification (LOQ) were calculated on the basis of calibration curves prepared in a narrower concentration range (1-12.5 ng/mL). The selectivity of the method was investigated by the analysis of different lots of blank matrix, performed six times with and twice without the internal standard. The matrix effect and recovery were evaluated using the strategy described by Matuszewski et al. [14]. For these purposes, a low (QC-L, with 2.5 ng/mL ramipril/ramiprilat) and a high (QC-H, with 75 ng/mL ramipril/ramiprilat) quality control sample, with corresponding concentrations in the starting eluent and in extracted blank matrix, were applied. The processed sample stability was investigated by the analysis of QC-L and QC-H samples performed at constant intervals over 3 h. Finally, both QC samples were applied for the calculation of method precision (intra- and inter-day precision) and accuracy. Accordingly, each QC sample was analysed twice a day over a period of 8 days. Method application The method developed was applied in the analysis of blood serum samples targeted for ramipril/ramiprilat provided by the police. Blood was collected in cases of both driving under the influence of drugs and attempted suicide. Method validation The results of the validation experiments are summarised in Table 2. Linearity of the calibration curve could be confirmed for both ramipril and ramiprilat in the range of 1-100 ng/mL. No interferences could be registered during the selectivity experiments or the analysis of real samples. Typical chromatograms representing a blank human serum without internal standard and a positive real sample are presented in Fig. 1. The LOD and LOQ values enabled analyte quantification down to the subtherapeutic concentration range and were comparable to other LC-MS/MS-based methods focused on simultaneous ramipril/ramiprilat quantification [15,16]. Additionally, method precision, accuracy and processed sample stability were within accepted ranges. As expected for a simple protein precipitation-based sample preparation, a good recovery could be achieved. The matrix effect observed for the drugs investigated could be defined as negligible and was in the range of 96-109% for ramipril (enhancement) and 93-94% for ramiprilat (suppression). In general, the validation experiments demonstrated that the method developed can be applied for ramipril/ramiprilat analysis in human serum.
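As a reference for how the comparison of Matuszewski et al. [14] works, the following minimal Python sketch computes the matrix effect, recovery, and overall process efficiency from mean peak areas. The three area sets (A: neat standard in the starting eluent; B: blank-matrix extract spiked after extraction; C: matrix spiked before extraction) follow the cited strategy; any numbers passed to these helpers would be illustrative, not data from this study:

def matrix_effect(area_b: float, area_a: float) -> float:
    # <100% indicates ion suppression, >100% indicates enhancement
    return 100.0 * area_b / area_a

def recovery(area_c: float, area_b: float) -> float:
    # extraction recovery of the sample preparation step
    return 100.0 * area_c / area_b

def process_efficiency(area_c: float, area_a: float) -> float:
    # combined effect of extraction recovery and matrix effect
    return 100.0 * area_c / area_a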
Method application Since ramipril can be defined as a very popular drug applied for the treatment of hypertension, many targeted quantifications are performed each year in our laboratory. The positive ramipril/ramiprilat cases summarised in Table 3 are selected examples representing the different analyte concentrations observed/expected in real samples. Case 1 represents a subtherapeutic concentration of ramiprilat. Accordingly, the concentration calculated below the calibration range should be seen as an approximate value. Cases 2-7 reflect typical ramiprilat concentrations observed in the majority of positive samples. They are characterised by a negative or low ramipril concentration and can be associated with a clinically intended therapeutic effect of the drug. Occasionally, atypically high ramiprilat concentrations could be observed (cases 8-11), with a moderate (cases 8 and 9) or strong concentration increase (cases 10 and 11). Discussion In the analysis of postmortem ramiprilat concentrations, Theofel et al. already observed a concentration increase in a small number of cases [8]. However, these concentration increases were discussed as an effect of potential postmortem drug redistribution and were not further investigated. In the evaluation of high ramiprilat concentrations, it is also important to note that in a study published by Heintz et al., practically no ramiprilat accumulation could be observed in human serum after 2 weeks [17]. On the basis of this information, high ramiprilat concentrations should be associated either with drug abuse or with inappropriate drug dosage. Given the effect of ramipril, intentional drug abuse would be counterproductive, since an overdose can result in severe hypotension, usually manifesting within 6 h after ingestion [18]. On the other hand, a potential explanation for ramipril misuse could be a neurological disease such as dementia. High ramiprilat concentrations can also be explained by certain health disorders and advanced age. In general, according to the further literature, increased ramiprilat concentrations can be expected in the elderly and in patients with renal impairment and heart failure [19]. In detail, even a ten times greater mean trough concentration was observed for patients with renal failure in comparison to healthy subjects [20]. Furthermore, two separate studies demonstrated that the maximal mean ramiprilat concentration in the elderly (mean age ≥ 71 years) with normal renal function can be increased by 20% and 200% when compared to healthy young volunteers [19,21,22]. Although no negative effect on ramipril elimination could be observed for the patient groups mentioned, concentrations of the parent drug can also be elevated in patients with hepatic impairment [19]. Since there are different explanations for increased ramiprilat concentrations, it seems probable that high drug levels observed in postmortem material can also be explained partly in this way [8]. Additionally, this aspect should be taken into account in the interpretation of ramiprilat concentrations in antemortem samples when possible intoxications are considered. The high ramiprilat concentrations in cases 8 and 9 have an unknown genesis. However, an underlying disease can usually be suspected after the detection of different drugs in human serum.
Fig. 1 Chromatogram of a blank human serum and a real sample with 1.6 ng/mL ramipril and 36 ng/mL ramiprilat (case 7 in Table 3); T, target; Q, qualifier
In case 8, involving a 71-year-old person, bisoprolol, hydrochlorothiazide and blood alcohol (2.27‰) were detected in addition to ramipril/ramiprilat, whereas in case 9 (a 50-year-old person), pregabalin, metformin, tilidine, ibuprofen and blood alcohol (2.30‰) were detected as well. In cases 8 and 9, only the combination of a high ramiprilat concentration with a relevant blood alcohol concentration could be defined as remarkable. Under the assumption that the car drivers took their prescribed drug doses, the high ramiprilat levels could be explained only by age and/or underlying diseases [19-22]. Without this assumption, the high ramiprilat concentrations could also be discussed as the result of an intake of an excessive drug dose; incorrect drug dosing can occur, for example, under the effect of a high blood alcohol concentration. Cases 10 and 11 are associated with the detection of ramipril/ramiprilat only. Case 10 is a confirmed suicide attempt with approximately 20 tablets of 5 mg ramipril; the person was described as heavily dazed, unsteady on his feet and with a wound on the wrist. Case 11 can be defined as a suspected suicide attempt with the drug (the available information is strongly limited). Since case 10 can be defined as a ramipril/ramiprilat monointoxication with obvious adverse effects of the drug, the quantified ramiprilat concentration of 562 ng/mL represents a toxic drug concentration. Based on cases 10 and 11 (Table 3), a wide ramiprilat toxic concentration range can be assumed. A lethal ramiprilat concentration would be expected if problematic hypotension resulting from a drug overdose were not treated in an appropriate way. To date, no lethal ramipril/ramiprilat intoxication could be identified in our routine work. The presented cases are associated with driving under the influence of drugs or attempted suicides. Therefore, the information was restricted to protocols provided by the police, and no clinical data were available. Thus, the high ramiprilat concentrations were interpreted on the basis of previous findings [19-22] and the assumption that therapeutic drug doses were taken [8,9]. This can be seen as a limitation of the data evaluation presented, since we do not consider it likely, but also cannot fully exclude, that these high ramiprilat blood serum concentrations could be the result of inappropriate dosing. It should also be pointed out that reference drug ranges are very important in forensic expertise. Since no toxic levels have been defined for ramiprilat, there is a lack of information about this popular drug [5]. Therefore, the toxic concentration range indicated in this paper, from about 600 ng/mL to at least 3500 ng/mL, can be regarded as useful for the evaluation/interpretation of forensic cases in the future. Conclusions An analytical LC-MS/MS-based method was developed for the parallel analysis of ramipril and its active metabolite ramiprilat after a simple and fast sample preparation strategy based on protein precipitation. Validation data confirmed its applicability for forensic toxicological quantifications. The method could be applied successfully for the targeted analysis of routine samples.
To the best of our knowledge, this is the first time antemortem ramipril and ramiprilat concentrations have been discussed from the forensic point of view in the context of high and possibly toxic levels. Until now, no toxic ramiprilat concentrations were specified in the literature for a forensic interpretation of an adverse drug effect. The presented data enable an indication of the toxic concentration range from about 600 ng/mL to at least 3500 ng/mL.
Key points
1. Since no toxic concentration levels have been defined for ramipril/ramiprilat, the aim of this work was the analysis of real samples with an indication of possible toxic ranges. For these purposes, an LC-MS/MS analytical method based on a fast protein precipitation sample preparation strategy was developed and validated according to forensic guidelines.
2. The validation experiments and the comparison with other published methods demonstrated the applicability of the developed LC-MS/MS quantification for the analysis of real samples.
Table 1 Compound-dependent parameters; DP, declustering potential; EP, entrance potential; CE, collision energy; CXP, collision cell exit potential; T, target; Q, qualifier
Table 2 Results of the validation experiments for ramipril and ramiprilat
Table 3 Examples of positive ramipril/ramiprilat quantifications performed in real samples; * below the calibration range (approx. value); ** above the calibration range (sample diluted)
2023-04-16T06:18:07.491Z
2023-04-15T00:00:00.000
{ "year": 2023, "sha1": "16288705834d9c323ee50e4015d102f7f1e93e54", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12024-023-00621-6.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9e67aafb256a6d706343f92ec65464957772e523", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266326262
pes2o/s2orc
v3-fos-license
From Turing to Transformers: A Comprehensive Review and Tutorial on the Evolution and Applications of Generative Transformer Models: In recent years, generative transformers have become increasingly prevalent in the field of artificial intelligence, especially within the scope of natural language processing. This paper provides a comprehensive overview of these models, beginning with the foundational theories introduced by Alan Turing and extending to contemporary generative transformer architectures. The manuscript serves as a review, historical account, and tutorial, aiming to offer a thorough understanding of the models' importance, underlying principles, and wide-ranging applications. The tutorial section includes a practical guide for constructing a basic generative transformer model. Additionally, the paper addresses the challenges, ethical implications, and future directions in the study of generative models. Introduction 1. Background and Significance of Generative Models in AI Generative models serve as an essential building block in the realm of artificial intelligence (AI). At their core, these models are designed to generate new data samples that are similar to the input data they have been trained on. This capability has profound implications, enabling machines to create, imagine, and replicate complex patterns observed in the real world. The inception of generative models can be traced back to the early days of AI, when the foundational work of Alan Turing laid the groundwork for the evolution of generative models and the broader field of AI. Following Turing's pioneering contributions, the field witnessed the emergence of simple algorithms designed to mimic and reproduce sequential data. Exemplars of this era are the Hidden Markov Models (HMMs) proposed by Leonard Baum in a series of seminal papers published in the late 1960s [1-3]. These models were groundbreaking for their time, providing a probabilistic framework to understand and predict sequences. The most notable application of HMMs was in the realm of speech recognition [4], where they became a foundational component, enabling systems to decode and understand human speech with increasing accuracy. The introduction of Recurrent Neural Networks (RNNs) in 1982 by John Hopfield [5] and Long Short-Term Memory (LSTM) networks in 1997 by Hochreiter and Schmidhuber [6] marked significant advancements in the field. RNNs brought the ability to remember previous inputs in handling sequential data, while LSTMs addressed the challenges of long-term dependencies, making them pivotal for tasks such as time series prediction, speech recognition, and natural language processing. Together, they set foundational standards for modern generative AI models handling sequences. However, with the advent of deep learning and the proliferation of neural networks, the potential and capabilities of generative models have expanded exponentially. Neural-based generative models, such as Variational Autoencoders (VAEs) [7,8] introduced in 2013 and Generative Adversarial Networks (GANs) [9,10] introduced in the following year, have showcased the ability to generate high-fidelity new data samples based on training data, ranging from images to text and even music.
The significance of generative models in AI is multifaceted. Firstly, they play a pivotal role in unsupervised learning, where labeled data is scarce or unavailable. By learning the underlying distribution of the data, generative models can produce new samples, aiding in tasks such as data augmentation [11,12], anomaly detection [13], and image denoising [14,15]. Secondly, the creative potential of these models has been harnessed in various domains, from image [16-19], video, and music generation to drug discovery [20,21] and virtual reality [22-24]. The ability of machines to generate novel and coherent content has opened up avenues previously deemed exclusive to human creativity. Furthermore, generative models serve as powerful tools for understanding and interpreting complex data distributions. They provide insights into the structure and relationships within the data, enabling researchers and practitioners to uncover hidden patterns, correlations, and features [25]. This interpretative power is especially valuable in domains such as biology [26], finance [27], and climate science [28], where understanding data intricacies can lead to groundbreaking discoveries. Generative models stand as a testament to the advancements and possibilities within AI. Their ability to create, interpret, and innovate has not only broadened the horizons of machine learning but has also reshaped our understanding of intelligence and creativity. The Rise of Transformer Architectures While Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have significantly advanced the field of generative AI, another monumental shift in the deep learning landscape emerged with the introduction of the transformer architecture. Presented in the seminal paper "Attention is All You Need" by a team of Google researchers led by Vaswani in 2017 [29], transformers have redefined the benchmarks in a multitude of tasks, particularly in natural language processing (NLP). The transformer's innovation lies in its self-attention mechanism, which allows it to weigh the significance of different parts of an input sequence, be it words in a sentence or pixels in an image. This mechanism enables the model to capture long-range dependencies and intricate relationships in the data, overcoming the limitations of previous architectures such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks. RNNs and LSTMs, while effective in handling sequential data, often struggled with long sequences due to issues such as vanishing and exploding gradients [30]. Transformers, with their parallel processing capabilities and attention mechanisms, alleviated these challenges. The success of the transformer architecture was not immediate but became evident with the introduction of large language models such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). BERT, developed by researchers at Google, demonstrated the power of transformers in understanding the context of words in a sentence by considering both left and right contexts in all layers [31]. This bidirectional approach led to state-of-the-art results in several NLP tasks, from question answering to sentiment analysis [32]. On the other hand, OpenAI's GPT showcased the generative capabilities of transformers [33], producing human-like text and achieving remarkable performance in tasks such as machine translation [34] and text summarization [35] without task-specific training data.
The transformer's versatility extends beyond NLP. Vision Transformer (ViT) [36], an adaptation of the architecture for image classification tasks, has shown that transformers can rival, if not surpass, the performance of traditional convolutional neural networks (CNNs) in computer vision tasks [37,38]. This cross-domain applicability underscores the transformer's potential and its foundational role in modern AI. Another driving factor behind the rise of transformers is the ever-growing computational power and the availability of large-scale datasets. Training transformer models, especially large ones, requires significant computational resources. The feasibility of training such models has been made possible by advancements in GPU and TPU technologies [39], coupled with the availability of vast amounts of data to train on. The combination of innovative architecture and computational prowess has led to the development of models with billions or even trillions of parameters, pushing the boundaries of what machines can generate to new heights. Generative AI models have undergone significant transformations since their inception, with each milestone contributing to the capabilities we see today. From the foundational Turing machines to the latest GPT-4 and LLaMA models, the journey of generative AI has been marked by groundbreaking advancements. A detailed timeline capturing these key milestones is presented to offer a comprehensive overview of the field's evolution (Figure 1). Purpose and Structure of the Paper The rapid growth of artificial intelligence, especially with recent technologies such as generative models and transformers, highlights the need for a comprehensive study that spans both their historical development and current applications. The primary objective of this paper is to provide readers with a holistic understanding of the evolution, significance, architecture, and capabilities of generative transformers, contextualized within the broader landscape of AI. Our motivation for this paper is informed by the existing body of work on transformer-based models and generative AI. While there are several comprehensive reviews, each focuses on specific aspects of the topic. For example, Gozalo-Brizuela and Garrido-Merchan [40] concentrate on the taxonomy and industrial implications of large generative models, providing a compilation of popular generative models organized into various categories such as text-to-text, text-to-image, and text-to-audio. Lin et al. [41] present an exhaustive review of various transformer variants, their architectural modifications, and applications. Additionally, there are survey papers that focus on the use of transformers for specific tasks such as natural language processing [42,43], computer vision [44-47], and time series analysis and forecasting [48,49], among others. These existing reviews are invaluable, but our paper aims to provide a more comprehensive overview that bridges these specialized areas.
While these papers offer valuable insights, there is a gap in the literature for a resource that combines a historical review, a hands-on tutorial, and a forward-looking perspective on generative transformer models. Our paper aims to fill this void, serving as a comprehensive guide for newcomers and seasoned researchers alike. The historical review section helps readers understand how generative AI has developed and progressed in the wider context of AI. Meanwhile, our practical tutorial guides readers through the foundational concepts and practical implementations, equipping them to build their own generative transformer models. We offer a unique blend of theoretical understanding and practical know-how, setting our work apart from existing reviews. Additionally, we strive to provide a unique balance between explaining the historical evolution, technical aspects, and applications of transformers. This makes our paper a go-to source for researchers and professionals seeking a wholesome understanding and knowledge of transformers.
[Figure 1 timeline excerpts: Turing machines, a theoretical framework for understanding computation and algorithmic processes; Turing test, the first practical measure for machine intelligence; ELIZA, the first chatbot that simulates conversations with a human.]
The structure of the paper, which is designed to guide the reader through a logical progression, is as follows:
• Historical Evolution: We embark on a journey tracing the roots of computational theory, starting with the foundational concepts introduced by Alan Turing. This section provides a backdrop, setting the stage for the emergence of neural networks, the challenges they faced, and the eventual rise of transformer architectures.
• Tutorial on Generative Transformers: Transitioning from theory to practice, this section offers a practical approach to understanding the intricacies of generative transformers. Readers will gain insights into the architecture, training methodologies, and best practices, supplemented with code snippets and practical examples.
• Applications and Challenges: Building upon the foundational knowledge, we delve into the myriad applications of generative transformers, highlighting their impact across various domains. Concurrently, we address the challenges and ethical considerations associated with their use, fostering a balanced perspective.
• Conclusion and Future Directions: The paper concludes with a reflection on the current state of generative transformers, their potential trajectory, and the exciting possibilities they hold for the future of AI.
In essence, this paper endeavors to be more than just a review or a tutorial; it aspires to be a comprehensive guide, weaving together history, theory, practice, and prospects, providing readers with a panoramic view of the world of generative transformers. Historical Evolution The development of computational theory and artificial intelligence has been shaped by pioneering figures, innovative ideas, and transformative discoveries. Central to this narrative is Alan Turing, whose unparalleled contributions laid the foundations for modern computation and the subsequent emergence of AI. This section delves deeper into Turing's groundbreaking work and the lasting legacy that continues to shape the digital age.
Turing Machines and the Foundations of Computation One of Turing's major contributions was the idea of the Turing machine, proposed in his 1936 paper titled "On Computable Numbers, with an Application to the Entscheidungsproblem" [50]. This abstract machine was a simple but powerful theoretical construct designed to perform computations by manipulating symbols on an infinite tape based on a set of rules. The infinite tape is divided into discrete cells, each cell can contain a symbol from a finite alphabet, and the machine itself has a "head" that can read and write symbols on the tape and move left or right. The machine's behavior is dictated by a set of transition rules, which determine its actions based on the current state and the symbol being read. In essence, the Turing machine is a rule-based system that manipulates symbols on a tape, embodying the fundamental operations of reading, writing, and transitioning between states. While the concept might seem rudimentary, the implications of the Turing machine are profound. Turing demonstrated that this simple device, with its set of rules and operations, could compute any function that is computable, given enough time and tape. This assertion, known as the Church-Turing thesis [51] (independently proposed by Alonzo Church in his paper titled "An Unsolvable Problem of Elementary Number Theory", also published in 1936 [52]), posits that any function computable by an algorithm can be computed by a Turing machine. This thesis, although not proven, has stood the test of time, with no evidence to the contrary. It serves as a foundational pillar in computer science, defining the boundaries of what is computable. World War II saw Turing's theoretical concept manifest in tangible, real-world applications. Stationed at Bletchley Park, Britain's cryptographic hub, Turing played a key role in deciphering the Enigma code used by the German military. Turing helped develop a machine called the Bombe, which expedited the decryption process of Enigma-encrypted messages [53]. This secret work was crucial for the Allies' success and showed how computer science could have a major impact on real-world events. After World War II, Turing turned his attention to the development of electronic computers. He was instrumental in the design of the Automatic Computing Engine (ACE) [54], one of the earliest computer models capable of storing programs. This showed Turing's forward-thinking approach to the digital age. Beyond computing, he also delved into the nature of intelligence and how it could be replicated in machines. The Turing machine's significance transcended its immediate mathematical implications. The true brilliance of Turing's insight, however, lies in the concept of universal computation. Turing's subsequent proposition of a Universal Turing Machine (UTM), a machine capable of simulating any other Turing machine given the right input and rules, was a revolutionary idea [50]. Given a description of a Turing machine and its input encoded on the tape, the UTM could replicate the behavior of that machine. This meta-level of computation was groundbreaking. It suggested that a single, general-purpose machine could be designed to perform any computational task, eliminating the need for task-specific machines. The UTM was a harbinger of modern computers, devices that can be reprogrammed to execute a wide array of tasks.
The implications of universal computation extend beyond mere hardware. They challenge our understanding of intelligence and consciousness. If the human brain, with its intricate neural networks and synaptic connections, operates on computational principles, then could it be simulated by a Turing machine? This question, which blurs the lines between philosophy, neuroscience, and computer science, remains one of the most intriguing and debated topics in the field of artificial intelligence. Turing's Impact on Artificial Intelligence and Machine Learning Alan Turing's influence on the fields of artificial intelligence (AI) and machine learning (ML) is both profound and pervasive. While Turing is often lauded for his foundational contributions to computational theory, his vision and insights into the realm of machine intelligence have played a pivotal role in shaping the trajectory of AI and ML. His 1950 paper, "Computing Machinery and Intelligence" [55], introduced the famous Turing Test as a practical measure of machine intelligence. Alan Turing introduced the Turing Test within the context of an "Imitation Game", involving a man, a woman, and a judge as players. They communicate electronically from separate rooms, and the goal of the judge is to identify who is the woman. The man aims to deceive the judge into thinking he is the woman, while the woman assists the judge. Turing then adapts this game into his famous test by replacing the man with a machine, aiming to deceive the questioner in the same way. Although the original game focused on gender identification, this aspect is often overlooked in later discussions of the Turing Test. In this work, Turing posed the provocative question: "Can machines think?" Rather than delving into the philosophical intricacies of defining "thinking", Turing proposed a pragmatic criterion for machine intelligence: if a machine could engage in a conversation with a human, indistinguishably from another human, it would be deemed intelligent. This criterion, while straightforward, sparked widespread debate and research, laying the foundation for the field of artificial intelligence. The Turing Test, in many ways, encapsulated the essence of AI: the quest to create machines that can mimic, replicate, or even surpass human cognitive abilities. It set a benchmark, a gold standard for machine intelligence, challenging researchers and scientists to build systems that could "think" and "reason" like humans. While the test itself has been critiqued and refined over the years, its underlying philosophy remains central to AI: the aspiration to understand and emulate human intelligence. Beyond the Turing Test, Turing's insights into neural networks and the potential of machine learning were visionary. In a lesser-known report written in 1948, titled "Intelligent Machinery" [56], Turing delved into the idea of machines learning from experience. He envisioned a scenario where machines could be trained, much like a human child, through a process of education. Turing postulated the use of what he termed "B-type unorganized machines", which bear a striking resemblance to modern neural networks. These machines, as Turing described, would be trained, rather than explicitly programmed, to perform tasks. Although in its infancy at the time, this idea signaled the rise of machine learning, where algorithms learn from data rather than being explicitly programmed.
Turing's exploration of morphogenesis, the biological process that causes organisms to develop their shape, further showcased his interdisciplinary genius [57]. In his work on reaction-diffusion systems, Turing demonstrated how simple mathematical models could give rise to complex patterns observed in nature. This work, while primarily biological in its focus, has profound implications for AI and ML. It underscores the potential of simple algorithms to generate complex, emergent behavior, a principle central to neural networks and deep learning. Alan Turing's impact on artificial intelligence and machine learning is immeasurable. His vision of machine intelligence, his pioneering insights into neural networks, and his interdisciplinary approach to problem-solving have left an indelible mark on the field. As we navigate the intricate landscape of modern AI, with its deep neural networks, generative models, and transformers, it is imperative to recognize and honor Turing's legacy. His work serves as a beacon, illuminating the path forward, reminding us of the possibilities, challenges, and the profound potential of machines that can "think". From Turing's Foundations to Generative Transformers The journey from Alan Turing's foundational concepts to the sophisticated realm of generative transformers is a testament to the evolution of computational theory and its application in artificial intelligence. While at first glance Turing's work and generative transformers might seem worlds apart, a closer examination reveals a direct lineage and influence. Alan Turing's conceptualization of the Turing machine provided the bedrock for understanding computation. His idea of a machine that could simulate any algorithm, given the right set of instructions, laid the groundwork for the concept of universal computation. This idea, that a single machine could be reprogrammed to perform a myriad of tasks, is the precursor to the modern notion of general-purpose computing systems. Fast forward to the advent of neural networks, which Turing had touched upon in his lesser-known works. These networks, inspired by the human brain's interconnected neurons, were designed to learn from data. The foundational idea was that, rather than being explicitly programmed to perform a task, these networks would "learn" by adjusting their internal parameters based on the data they were exposed to. Turing's vision of machines learning from experience resonates deeply with the principles of neural networks. Generative transformers, a cutting-edge development in the AI landscape, are an extension of these neural networks. Transformers, with their self-attention mechanisms, are designed to weigh the significance of different parts of an input sequence, capturing intricate relationships within the data. The "generative" aspect of these models allows them to produce new, previously unseen data samples based on their training. Drawing a direct link, Turing's Universal Turing Machine can be seen as an early, abstract representation of what generative transformers aim to achieve in a more specialized domain. Just as the Universal Turing Machine could simulate any other Turing machine, given the right input and set of rules, generative transformers aim to generate any plausible data sample, given the right training and context. The universality of Turing's machine finds its parallel in the versatility of generative transformers.
Furthermore, Turing's exploration into machine learning, the idea of machines learning from data rather than explicit programming, is the very essence of generative transformers. These models are trained on vast datasets, learning patterns, structures, and nuances, which they then use to generate new content. The bridge between Turing's early insights into machine learning and the capabilities of generative transformers is a direct one, showcasing the evolution of a concept from its theoretical inception to its practical application. While Alan Turing might not have directly worked on generative transformers, his foundational concepts, vision of machine learning, and the principles he laid down have directly influenced and shaped their development. The journey from Turing machines to generative transformers is a testament to the enduring legacy of Turing's genius and the continual evolution of artificial intelligence. Early Neural Networks and Language Models The realm of artificial intelligence has witnessed a plethora of innovations and advancements, with neural networks standing at the forefront of this revolution. These computational models, inspired by the intricate web of neurons in the human brain, have paved the way for sophisticated language models that can understand, generate, and manipulate human language with unprecedented accuracy. Introduction to Neural Networks Neural networks [58,59], at their core, are a set of algorithms designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling, and clustering of raw input. These algorithms loosely mirror the way a human brain operates, thus the nomenclature "neural networks". A basic neural network consists of layers of interconnected nodes or "neurons". Each connection between neurons has an associated weight, which is adjusted during training. The fundamental equation governing the output y of a neuron is given by:

y = f(∑_i w_i x_i + b),

where x_i are the input values, w_i are the weights, b is a bias term, and f is an activation function. The activation function introduces non-linearity into the model, allowing it to learn from error and make adjustments, which is essential for learning complex patterns. One of the commonly used activation functions is the sigmoid function, defined as:

f(x) = 1 / (1 + e^{-x}).

Neural networks typically consist of an input layer, one or more hidden layers, and an output layer. The depth and complexity of a network, often referred to as its "architecture", determine its capacity to learn from data. Evolution of Recurrent Neural Networks (RNNs) While traditional neural networks have proven effective for a wide range of tasks, they possess inherent limitations when dealing with sequential data. This is where Recurrent Neural Networks (RNNs) come into play. RNNs are designed to recognize patterns in sequences of data, such as time series or natural language. The fundamental difference between RNNs and traditional neural networks lies in the former's ability to retain memory of previous inputs in its internal state. This is achieved by introducing loops in the network, allowing information to persist. The output of an RNN at time t, denoted h_t, is computed as:

h_t = f(W_hh h_{t-1} + W_xh x_t),

where W_hh and W_xh are weight matrices, x_t is the input at time t, and h_{t-1} is the output from the previous timestep. While RNNs are powerful, they suffer from challenges such as the vanishing and exploding gradient problems, especially when dealing with long sequences [30]. This makes them less effective in capturing long-term dependencies in the data.
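To make the two formulas above concrete, the following short sketch implements a single neuron with a sigmoid activation and one step of a vanilla RNN in NumPy. The weight values and input sizes are arbitrary illustrative choices, not taken from any particular model.

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)), squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def neuron(x, w, b):
    # y = f(sum_i w_i * x_i + b)
    return sigmoid(np.dot(w, x) + b)

def rnn_step(x_t, h_prev, W_hh, W_xh):
    # h_t = f(W_hh @ h_{t-1} + W_xh @ x_t), with tanh as the activation f
    return np.tanh(W_hh @ h_prev + W_xh @ x_t)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                       # three input features
print(neuron(x, w=np.ones(3), b=0.0))        # single neuron output

h = np.zeros(4)                              # hidden state of size 4
W_hh, W_xh = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))
for t in range(5):                           # unroll over a 5-step sequence
    h = rnn_step(rng.normal(size=3), h, W_hh, W_xh)
print(h)                                     # final hidden state
```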
Long Short-Term Memory (LSTM) Networks To address the vanishing gradient problem of RNNs, Long Short-Term Memory (LSTM) networks were introduced. LSTMs, a special kind of RNN, are designed to remember information for extended periods [60]. The core idea behind LSTMs is the cell state, a horizontal line running through the entire chain of repeating modules in the LSTM. The cell state can carry information from earlier time steps to later ones, mitigating the memory issues faced by traditional RNNs. LSTMs introduce three gates: 1. Forget Gate: It decides what information from the cell state should be thrown away or kept. Mathematically, the forget gate f_t is given by:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f).

2. Input Gate: It updates the cell state with new information. The input gate i_t and the candidate values C̃_t are computed as:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i), C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C),

and the cell state is updated as C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t. 3. Output Gate: It determines the output based on the cell state and the input. The output h_t is given by:

h_t = o_t ⊙ tanh(C_t),

where o_t is the output gate, defined as:

o_t = σ(W_o · [h_{t-1}, x_t] + b_o).

LSTMs, with their ability to capture long-term dependencies and mitigate the challenges faced by traditional RNNs, have paved the way for advancements in sequence modeling, particularly in the domain of natural language processing. The Advent of Transformers In the ever-evolving landscape of artificial intelligence and machine learning, the transformer architecture stands out as a significant leap forward, especially in the domain of natural language processing. Introduced in the seminal paper "Attention Is All You Need" by Vaswani et al. [29], transformers have revolutionized the way we approach sequence-to-sequence tasks. This section aims to demystify the transformer architecture, breaking it down into its core components and principles. Introduction to the Transformer Architecture At a high level, the transformer is a type of neural network architecture designed to handle sequential data, making it particularly well-suited for tasks such as language translation, text generation, and more. Unlike its predecessors, such as RNNs and LSTMs, which process data in order, transformers leverage a mechanism called "attention" to draw global dependencies between input and output. The heart of the transformer architecture is the attention mechanism. In essence, attention allows the model to focus on different parts of the input sequence when producing an output sequence, much like how humans pay attention to specific words when understanding a sentence. Mathematically, the attention score for a given query q and key k is computed as:

e = score(q, k),

where score is a function that calculates the relevance of the key k to the query q. The output of the attention mechanism is a weighted sum of values, where the weights are the attention scores. The transformer model consists of an encoder and a decoder. Each of these is composed of multiple layers of attention and feed-forward neural networks. The encoder takes in a sequence of embeddings (representations of input tokens) and processes them through its layers. The decoder then generates the output sequence, leveraging both its internal layers and the encoder's output. One of the distinguishing features of transformers is the use of "multi-head attention", which allows the model to focus on different parts of the input simultaneously, capturing various aspects of the information.
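Before examining those components further, a minimal NumPy sketch ties together the LSTM gate equations above. The single weight matrix acting on the concatenated [h_{t-1}, x_t] vector is one common layout, assumed here for brevity; it is not the only possible parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W maps the concatenated [h_{t-1}, x_t] to all four gate pre-activations.
    z = W @ np.concatenate([h_prev, x_t]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget, input, output gates
    c_tilde = np.tanh(g)                           # candidate cell values
    c_t = f * c_prev + i * c_tilde                 # cell state update
    h_t = o * np.tanh(c_t)                         # hidden state / output
    return h_t, c_t

rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W = rng.normal(scale=0.1, size=(4 * hidden, hidden + inputs))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(6):                                 # run over a short sequence
    h, c = lstm_step(rng.normal(size=inputs), h, c, W, b)
print(h)
```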
Advantages of Transformers Transformers have brought significant advancements in the processing of sequential data, characterized by several key advantages. One notable feature of transformers is parallelization. Unlike RNNs, which process sequences step-by-step, transformers can process all tokens in parallel, leading to faster training times. Transformers are also known for their adeptness at handling long-range dependencies. The attention mechanism enables transformers to capture relationships between tokens, regardless of their distance in the sequence. This capability is particularly beneficial for complex tasks where context and relationships between distant elements are crucial for accurate interpretation and response. Scalability is another advantage of transformer models. Transformers are highly scalable, making them well-suited for dealing with large datasets and intricate tasks. This scalability ensures that transformers remain effective and efficient even as the size and complexity of the data or the task increase. Attention Mechanism: The Heart of Transformers The attention mechanism, a pivotal innovation in the realm of deep learning, has transformed the way we approach sequence-to-sequence tasks in natural language processing. Serving as the cornerstone of the transformer architecture, attention allows models to dynamically focus on different parts of the input data, capturing intricate relationships and dependencies. This section aims to elucidate the principles and mathematics behind the attention mechanism, shedding light on its significance in the transformer architecture. Conceptual Overview of Attention In traditional sequence-to-sequence models, such as RNNs and LSTMs, information from the entire input sequence is compressed into a fixed-size context vector, which is then used to generate the output sequence. This approach, while effective for short sequences, struggles with longer sequences as the context vector becomes a bottleneck, unable to capture all the nuances of the input data. The attention mechanism addresses this challenge by allowing the model to "attend" to different parts of the input sequence dynamically, based on the current context. Instead of relying on a single context vector, the model computes a weighted sum of all input vectors, where the weights represent the "attention scores". Mathematics of Attention The core of the attention mechanism is the computation of attention scores. Given a query q and a set of key-value pairs (k, v), the attention score for a specific key k_i is computed as:

e_i = score(q, k_i).

The attention weights, which determine how much focus should be given to each key-value pair, are computed using a softmax function:

α_i = exp(e_i) / ∑_j exp(e_j).

The output of the attention mechanism is a weighted sum of the values:

output = ∑_i α_i v_i.

As depicted in Figure 2, the attention mechanism computes scores based on the query and keys, derives attention weights, and produces an output based on a weighted sum of values. Significance in Transformers In the transformer architecture, attention is not just a supplementary feature; it is the core component. Transformers employ a variant called "multi-head attention", which runs multiple attention mechanisms in parallel, capturing different types of relationships in the data. The attention mechanism's ability to focus on different parts of the input sequence, irrespective of their position, empowers transformers to handle long-range dependencies, making them particularly effective for tasks like language translation, text summarization, and more.
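The following sketch implements these equations in their scaled dot-product form, using the dot product (divided by √d_k) as the score function; the shapes are small illustrative choices, not taken from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # relevance of each key to each query
    weights = softmax(scores)          # attention weights via softmax
    return weights @ V, weights        # output is a weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))    # 2 queries of dimension d_k = 8
K = rng.normal(size=(5, 8))    # 5 keys
V = rng.normal(size=(5, 16))   # 5 values of dimension 16
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.sum(axis=-1))   # (2, 16); each row of weights sums to 1
```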
Furthermore, the self-attention mechanism, a special case where the query, key, and value are all derived from the same input, enables transformers to weigh the significance of different parts of the input relative to a specific position. This is crucial for understanding context and semantics in natural language processing tasks. Generative Transformers and Their Significance Generative transformers have emerged as a groundbreaking advancement in the domain of artificial intelligence, particularly in natural language processing and generation. These models, characterized by their ability to generate coherent and contextually relevant sequences of text, have set new benchmarks in various tasks, from text completion to story generation. This section introduces the notable generative models available, including the GPT series and other significant contributions in this domain. GPT (Generative Pre-Trained Transformer) Series The GPT series, developed by OpenAI, fully demonstrates the power and potential of generative transformers. Built upon the transformer architecture, the GPT models leverage the attention mechanism to understand and generate human-like text. The GPT series has seen rapid evolution, with each iteration bringing enhanced capabilities and performance. GPT-1. The first in the series, GPT-1 [61], was released in 2018. It laid the foundation for subsequent models. With 117 million parameters, it showcased the potential of transformers in generating coherent paragraphs of text. GPT-2. Released in 2019, GPT-2 [62] increased its parameters to 1.5 billion. Its ability to generate entire articles, answer questions, and even write poetry garnered significant attention from the research community and the public alike. GPT-3. GPT-3 [63] has 175 billion parameters. Its capabilities extend beyond mere text generation; it can translate languages, write essays, create poetry, and even generate code. GPT-4. The most recent model from OpenAI, GPT-4 [64], consists of a staggering 1.76 trillion parameters, positioning it among the most advanced language models currently available. Leveraging advanced deep learning methodologies, it surpasses the capabilities of its forerunner, GPT-3. Remarkably, GPT-4 can handle up to 25,000 words simultaneously, a capacity 8-fold greater than that of GPT-3. Furthermore, GPT-4 is versatile in accepting both text and image prompts, allowing users to define tasks across vision and language domains. A notable improvement in GPT-4 is its reduced propensity for hallucinations compared to earlier versions. Other Notable Generative Transformer Models Beyond the GPT series, the landscape of generative transformers is rich and diverse, with several models making significant contributions to the field. BERT (Bidirectional Encoder Representations from Transformers). Developed by Google, BERT [31] revolutionized the way we approach natural language understanding tasks. Unlike GPT, which is generative, BERT is discriminative, designed to predict missing words in a sentence. Its bidirectional nature allows it to capture context from both the left and the right of a word, leading to superior performance in tasks like question-answering and sentiment analysis. LLaMA. LLaMA [65] is an auto-regressive language model built on the transformer architecture, introduced by Meta. In February 2023, Meta unveiled the initial version of LLaMA, boasting 65 billion parameters and adept at numerous generative AI functions. By July 2023, LLaMA 2 was launched with 3 distinct model sizes: 7, 13, and 70 billion parameters.
LaMDA. LaMDA [66] is a specialized family of transformer-based neural language models for dialog applications developed by Google in 2022. With up to 137 billion parameters and pre-training on 1.56 trillion words of public dialog and web text, LaMDA aims to address two key challenges: safety and factual grounding. The model incorporates fine-tuning and external knowledge consultation to improve its safety metrics, ensuring responses align with human values and avoid harmful or biased suggestions. For factual grounding, LaMDA employs external knowledge sources like information retrieval systems and calculators to generate responses that are not just plausible but also factually accurate. The model shows promise in various domains, including education and content recommendations, offering a balanced blend of quality, safety, and factual integrity. Tutorial on Generative Transformers In this section, we delve into a hands-on tutorial on generative transformers, guiding readers through the foundational concepts and practical implementations. By the end of this tutorial, readers should have a clear understanding of the transformer architecture and be equipped to build their own generative transformer models. Basics of the Transformer Architecture The transformer architecture, introduced by Vaswani et al. in their seminal paper "Attention Is All You Need" [29], has become the backbone of many state-of-the-art models in natural language processing. We will now break down its core components. Overview As depicted in Figure 3, the transformer consists of an encoder and a decoder. The encoder processes the input sequence, and the decoder generates the output sequence. Both the encoder and decoder are composed of multiple layers of attention mechanisms and feed-forward neural networks. Attention Mechanism As previously discussed, the attention mechanism allows the model to focus on different parts of the input sequence when producing an output. The mechanism computes attention scores based on queries, keys, and values. Mathematical Representation: Given a query q, key k, and value v, the attention output is computed as:

Attention(q, k, v) = softmax(q kᵀ / √d_k) v,

where d_k is the dimension of the key. Multi-Head Attention Instead of using a single set of attention weights, the transformer uses multiple sets, allowing it to focus on different parts of the input simultaneously. This is known as multi-head attention. Self-Attention Mechanism The self-attention mechanism is a variant of the attention mechanism where the input sequence itself serves as the queries, keys, and values. This allows the transformer to weigh the significance of different parts of the input relative to a specific position, crucial for understanding context and semantics. Mathematical Representation: Given an input sequence X, the queries Q, keys K, and values V are derived as:

Q = X W_Q, K = X W_K, V = X W_V,

where W_Q, W_K, and W_V are weight matrices. The self-attention output is then computed using the attention formula:

SelfAttention(X) = softmax(Q Kᵀ / √d_k) V.

Positional Encoding Transformers, by design, do not have a built-in notion of sequence order. To provide the model with positional information, we inject positional encodings into the input embeddings. These encodings are added to the embeddings to ensure the model can make use of the sequence's order. Mathematical Representation: The positional encodings are computed using sine and cosine functions:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), (16)

where pos is the position and i is the dimension.
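As an illustration, the sinusoidal encodings of Equation (16) can be computed in a few lines of NumPy; the sequence length and d_model below are arbitrary illustrative values (d_model is assumed even).

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions
    return pe

pe = positional_encoding(seq_len=50, d_model=64)
print(pe.shape)   # (50, 64); this matrix is added to the token embeddings
```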
Multi-Head Attention Multi-head attention is an extension of the attention mechanism, allowing the model to focus on different parts of the input simultaneously. By running multiple attention mechanisms in parallel, the model can capture various types of relationships in the data. Mathematical Representation: Given queries Q, keys K, and values V, the multi-head attention output is computed as:

MultiHead(Q, K, V) = Concat(head_1, …, head_h) W_O, (17)

where each head is computed as:

head_i = Attention(Q W_Qi, K W_Ki, V W_Vi), (18)

and W_Qi, W_Ki, W_Vi, and W_O are weight matrices. Figure 4 showcases the multi-head attention mechanism, where multiple attention heads operate in parallel, and their outputs are concatenated and passed through a dense layer to produce the final output. Understanding the intricacies of the transformer architecture, from the self-attention mechanism to multi-head attention, is crucial for harnessing its full potential. By delving into the mathematical foundations and practical implementations, one can build powerful models capable of handling a wide range of tasks in natural language processing. Encoder and Decoder Modules The Transformer architecture consists of an encoder and a decoder, each made up of multiple layers. Here, we'll walk through the implementation of these modules. Encoder Module. The encoder module consists of multiple encoder layers, each containing multi-head attention and feed-forward neural networks. In an implementation, 'MultiHeadAttention' and 'PointWiseFeedForwardNetwork' would be custom classes that you define based on your specific needs for multi-head attention and point-wise feed-forward networks, respectively. Building a Simple Generative Transformer Building a generative transformer from scratch involves several steps, from data preprocessing to model training and text generation. In this section, we'll walk through each of these steps, providing a comprehensive guide to constructing your own generative transformer. Data Preprocessing and Tokenization Before feeding data into the model, it is essential to preprocess and tokenize it. Tokenization involves converting raw text into a sequence of tokens, which can be words, subwords, or characters. Defining the Transformer Model Assuming one has already defined the EncoderLayer and DecoderLayer classes, one can assemble the complete Transformer model by stacking an encoder, a decoder, and a final output projection layer. Building a generative transformer, while complex, is made accessible with modern libraries and tools. By understanding the steps involved, from data preprocessing to model training and generation, one can harness the power of transformers for a wide range of applications. Advanced Techniques and Best Practices While the foundational concepts and basic implementations provide a solid starting point, mastering generative transformers requires a deeper understanding of advanced techniques and best practices. This section offers insights into improving generation quality, handling long sequences, memory issues, and leveraging fine-tuning and transfer learning [67]. Techniques for Improving Generation Quality Achieving high-quality text generation necessitates a combination of model architecture tweaks, training strategies, and post-processing methods. Temperature Sampling. By adjusting the temperature during sampling, one can control the randomness of the generated text [68]. A lower temperature makes the output more deterministic, while a higher value introduces randomness:

p_i = exp(z_i / T) / ∑_j exp(z_j / T),

where p_i is the adjusted probability, z_i is the original logit, and T is the temperature.
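A minimal PyTorch sketch of temperature sampling from a vector of logits follows; the vocabulary size and logit values are illustrative placeholders.

```python
import torch

def sample_with_temperature(logits, temperature=1.0):
    # p_i = exp(z_i / T) / sum_j exp(z_j / T)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])  # illustrative unnormalized scores
for T in (0.5, 1.0, 2.0):
    counts = torch.zeros(4)
    for _ in range(1000):
        counts[sample_with_temperature(logits, T)] += 1
    # Lower T concentrates probability mass on the highest-scoring token.
    print(f"T={T}: {counts / counts.sum()}")
```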
Top-k and Top-p Sampling. Instead of sampling from the entire distribution, one can restrict the sampling pool to the top-k tokens or to those tokens that have a cumulative probability greater than a threshold p [69]. Gradient Clipping. To prevent exploding gradients during training, gradient clipping can be employed, ensuring that the gradients remain within a defined range [70]. Gradient clipping can be implemented in PyTorch as follows: torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) Handling Long Sequences and Memory Issues Transformers, by design, have quadratic complexity with respect to sequence length. This can lead to memory issues for long sequences. Gradient Accumulation. Instead of updating the model weights after every batch, gradients can be accumulated over multiple batches, effectively simulating a larger batch size without the memory overhead [71]. Model Parallelism. For models with billions of parameters, distributing the model across multiple GPUs can alleviate memory constraints [72]. Gradient Checkpointing. This technique involves discarding most intermediate activations during the forward pass and recomputing them during the backward pass, reducing memory usage at the cost of increased computation. Fine-Tuning and Transfer Learning Transfer learning, the practice of leveraging pre-trained models on new tasks, has proven highly effective in the NLP domain. Fine-tuning. Once a model is pre-trained on a large corpus, it can be fine-tuned on a smaller, task-specific dataset. This approach often yields superior results compared to training from scratch [73,74]. Adapters. Instead of fine-tuning the entire model, adapters allow for training only a small portion of the model, introducing task-specific parameters without altering the pre-trained weights [75]. Mastering generative transformers goes beyond understanding the basics. By incorporating advanced techniques and best practices, one can achieve state-of-the-art performance, handle large models and sequences efficiently, and adapt pre-trained models to new tasks with ease. As the field of NLP continues to evolve, staying abreast of these practices ensures robust and high-quality model deployments. Applications and Use Cases Generative transformers, with their unparalleled capability to understand and generate human-like text, have found applications across a myriad of domains [40]. This section provides an in-depth exploration of some of the most prominent applications, shedding light on the transformative impact of these models on various industries. Text Generation for Creative Writing The realm of creative writing, traditionally seen as the bastion of human creativity, has witnessed significant advancements with the advent of generative transformers [76]. These models, trained on vast corpora of literature, can produce text that mirrors the style, tone, and complexity of human authors. Novel and Short Story Generation. AI-powered applications based on GPT-3 and other large language models have been employed to generate entire novels or assist authors by suggesting plot twists, character developments, and dialogues [77]. The generated content, while sometimes requiring human oversight, exhibits creativity and coherence.
Poetry and Song Lyrics. The nuanced and abstract nature of poetry and song lyrics poses a significant challenge for traditional models. However, the advent of generative transformers has enabled these models to produce verses that resonate with human emotions and experiences. A recent study demonstrated that AI-generated poems were often indistinguishable from those written by humans [78], showcasing the success of these algorithms in replicating human-like poetic expressions. Chatbots and Conversational Agents The rise of digital communication has spurred the demand for intelligent chatbots and conversational agents. Generative transformers, with their ability to generate contextually relevant and coherent responses, stand at the forefront of this revolution. One of the most prominent examples of a conversational agent built on generative transformer architecture is ChatGPT, developed by OpenAI. ChatGPT reached 100 million monthly active users just 2 months after launching, making it the fastest-growing application in history. Customer Support. Businesses employ transformer-based chatbots to handle customer queries, complaints, and feedback [79,80]. These chatbots can understand the context, provide accurate information, and even escalate issues when necessary. Personal Assistants. Digital personal assistants, such as Siri and Alexa, are integrating transformer models to enhance their conversational capabilities, making interactions more natural and context-aware. Code Generation and Programming Assistance Software development is undergoing a significant transformation with the introduction of transformer models capable of understanding and generating code. One such model, which translates natural language instructions into code, is the Codex model developed by OpenAI [81]. These models assist developers by suggesting code snippets, detecting bugs, and even generating entire functions or modules. Code Completion. Integrated Development Environments (IDEs) are incorporating transformers to provide real-time code completion suggestions, enhancing developer productivity. Bug Detection and Fixing. Transformers can be trained to detect anomalies in code and suggest potential fixes, reducing debugging time and ensuring more robust software. Other Notable Applications Beyond the aforementioned domains, generative transformers have found applications in diverse areas: Translation. While traditional machine translation models have limitations, transformers can produce translations that consider the broader context, resulting in more accurate and idiomatic outputs [34]. Summarization. Generative transformers can read lengthy articles or documents and produce concise summaries, retaining the core information and intent [35]. Gaming. In the gaming industry, transformers are used to generate dialogues, plotlines, and even assist in game design by suggesting scenarios or character backstories [82]. The applications of generative transformers are vast and continually expanding. As research progresses and models become more sophisticated, it is anticipated that their integration into various domains will become even more profound. Challenges and Limitations While generative transformers have showcased remarkable capabilities, they are not devoid of challenges and limitations. This section delves into some of the most pressing concerns surrounding these models, from interpretability issues to ethical dilemmas and computational constraints.
Model Interpretability Deep learning models, especially those with millions or billions of parameters such as generative transformers, are often criticized for being "black boxes". Understanding why a model made a particular decision can be elusive [83]. Attention Maps. One approach to interpretability is visualizing attention maps [29,84]. These maps show which parts of the input the model focused on when producing an output. Attention maps are generated by the attention mechanism, which computes a set of attention scores that can be visualized as a heatmap. Attention maps serve as a tool for interpreting transformer models in NLP by providing insights into various aspects of text processing. They help in analyzing the roles of words in sentences, identifying key topics, evaluating text quality, and detecting errors or biases. However, while attention maps provide insights, they do not offer a complete understanding of the model's decision-making process. Mathematical Analysis. Efforts are being made to develop mathematical tools and frameworks to dissect the inner workings of transformers [85,86]. Yet, a comprehensive understanding remains a research frontier. Hallucination in Text Generation Generative transformers are sometimes susceptible to generating text that, while coherent and grammatically correct, is factually incorrect or nonsensical. This phenomenon is commonly referred to as hallucination. Ji et al. conducted a comprehensive survey of the issue of hallucination in natural language generation (NLG) [87]. The causes of hallucination are multifaceted and can vary. They may include inadequate training data, which limits the model's understanding of the subject matter. Overfitting to the training set is another common issue, where the model learns the noise in the data rather than the actual pattern. Additionally, high model complexity leading to over-parameterization can also contribute to hallucination. Addressing the issue of hallucination involves multiple strategies. One approach is to fine-tune the model on a more specific dataset that is closely aligned with the task at hand. Another strategy involves incorporating external knowledge bases that can fact-check the generated text in real time. Ensemble methods, which combine the outputs of multiple models, can also be used to validate the generated text and reduce the likelihood of hallucination. Efforts are underway to quantify the degree of hallucination in generated text. Although a standard measure has yet to be established, one simplistic way to quantify it is through the Hallucination Score, defined as the ratio of the number of hallucinated tokens to the total number of generated tokens, as shown in Equation (21):

Hallucination Score = (Number of hallucinated tokens) / (Total number of generated tokens). (21)

Ethical Considerations in Text Generation Generative transformers, with their ability to produce human-like text, raise several ethical concerns [88]. Misinformation and Fake News. There is potential for these models to generate misleading or false information, which can be weaponized to spread misinformation. Bias and Fairness. Transformers, being trained on vast internet datasets, can inherit and perpetuate biases present in the data [89]. Addressing this requires careful dataset curation and post-hoc bias mitigation techniques. One simple way to quantify such bias is as an average deviation of the model's predictions from the true distribution, e.g.,

Bias = (1/n) ∑_{i=1}^{n} |P_model(x_i) − P_true(x_i)|,

where P_model is the model's prediction, P_true is the true distribution, and n is the number of samples.
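As a small illustration of Equation (21), assuming token-level hallucination labels are available (for example, from human annotation), the score reduces to a one-line computation; the labels below are hypothetical.

```python
def hallucination_score(hallucinated_flags):
    # Ratio of hallucinated tokens to total generated tokens, Equation (21).
    return sum(hallucinated_flags) / len(hallucinated_flags)

# Hypothetical per-token labels for a 10-token generation (1 = hallucinated).
flags = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
print(hallucination_score(flags))  # 0.3
```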
Computational Requirements and Environmental Impact Training a large language model demands significant computational resources. For example, the GPT-3 model, which has 175 billion parameters, would require 3.14 × 10²³ FLOPs for training, translating to 355 GPU-years and a cost of USD 4.6 million on a V100 GPU [90]. Memory is another bottleneck; the model's 175 billion parameters would need 700 GB of memory, far exceeding the capacity of a single GPU. To manage these challenges, OpenAI used model parallelism techniques and trained the models on a high-bandwidth cluster. As language models grow in size, model parallelism is becoming increasingly essential for research. Energy Consumption. The energy required to train state-of-the-art models can be equivalent to the carbon footprint of multiple car lifetimes. This raises environmental concerns. Exclusivity. The computational demands mean that only well-funded organizations can train the most advanced models, leading to concerns about the democratization of AI. While generative transformers offer immense potential, it is crucial to address their challenges and limitations. Balancing the pursuit of state-of-the-art performance with ethical, environmental, and computational considerations is paramount for the sustainable and responsible advancement of the field. The Future of Generative Transformers Generative transformers, evolving from early models such as Recurrent Neural Networks (RNNs) to the sophisticated Generative Adversarial Networks (GANs) and now the powerful transformers, have revolutionized numerous domains. With advancements in model architectures, training techniques, and hardware capabilities, we can anticipate models that not only understand and generate human-like text but also exhibit enhanced creativity, reasoning, and a form of artificial consciousness. The way forward is full of opportunities for exploration and innovation. As the field of generative transformers continues to evolve, there are numerous avenues for research and development that remain unexplored or underexplored. The evolution from rules-based systems to advanced LLMs has dramatically improved performance and training efficiency. These improvements are not confined to text and language processing but extend to computer vision and other modalities, creating avenues for interdisciplinary research. Multimodal Models The future will likely see generative models that seamlessly integrate multiple modalities (text, image, sound, video, and more), offering a holistic understanding of the world and generating content that overcomes the limitations of current models. Recent advancements have already led to transformers capable of generating not just text, but also images, audio, and video [91]. These multimodal models are expected to evolve into sophisticated systems capable of processing and understanding inputs from various modalities simultaneously. In the future, we anticipate the emergence of single applications and more advanced multimodal models. These systems would not only understand inputs from different sensory channels, such as visual, auditory, and textual, but also generate outputs in various forms, moving well beyond mere text generation. The integration of these modalities in a single model offers a more comprehensive approach to understanding complex real-world scenarios and creating more nuanced and contextually relevant outputs.
Domain-Specific Models The development of domain-specific GPT models is becoming increasingly crucial across various applications [92]. While current large language models are adept at understanding natural language and generating content, their effectiveness and accuracy can vary significantly when applied to specialized domains such as medicine, law, and finance [93]. A major challenge in tailoring these models to a specific domain lies in the acquisition of high-quality, domain-specific data. Another significant challenge is the fine-tuning process, which involves adapting the model to the unique characteristics and vocabulary of the domain. Despite these obstacles, there has been progress in the development and implementation of domain-specific GPT models. The emergence of these models points toward a future of more tailored AI solutions. Companies with unique large datasets stand to gain competitive advantages by training their own bespoke models. This trend is exemplified by Bloomberg's development of a specialized LLM for financial tasks [94]. Other companies, such as Hugging Face and Databricks, are also playing pivotal roles in providing the necessary resources and platforms for developing and fine-tuning these customized models. In the future, we can expect these domain-specific GPT models to offer enhanced efficiency, improved interpretability, and better domain generalizability compared to existing large language models. However, the development of these models must also focus on optimizing energy consumption and addressing the challenges of knowledge retention during the fine-tuning process. Model Efficiency The growing size of models necessitates research in computational efficiency and energy consumption. This includes efforts to develop more sustainable AI infrastructure and predictive infrastructure, essential for the data-intensive nature of enterprise AI applications. Ethical AI With the widespread implementation of generative AI across various sectors, ensuring ethical use becomes paramount. This involves research into bias mitigation, fairness, transparency, and the development of guidelines for responsible AI usage [95], especially as AI begins to automate complex tasks such as legal work, drug design, and medical diagnosis. Interdisciplinary Integration The future of generative AI involves its fusion with other fields such as neuroscience and cognitive science. This integration could lead to breakthroughs in understanding both artificial and natural intelligence, with generative AI applications expanding beyond technical fields to impact popular culture and everyday life, such as in the creation of high-resolution images and user-friendly AI applications for enhancing productivity. Conclusions As we reflect upon the evolution of generative transformers, from their foundational roots with Alan Turing to their current state-of-the-art capabilities, it becomes clear that we are at a turning point in the development of artificial intelligence. In the words of Alan Turing, "We can only see a short distance ahead, but we can see plenty there that needs to be done".
This foresight aptly describes the current state of AI. The advancements in generative transformers have not only redefined what machines are capable of doing but also opened up a myriad of possibilities for future exploration and innovation. As we advance and develop new technologies, it is crucial to navigate the ethical implications and the environmental and societal impacts of these technologies. The goal is not just to push the boundaries of what AI can achieve but to do so responsibly, ensuring that these advancements benefit society at large. Figure 2. Schematic representation of the attention mechanism. Figure 3. Expanded schematic representation of the transformer architecture with a smaller Features block. The decoder module is similar to the encoder but has an additional multi-head attention layer to attend to the encoder's output.
2023-12-17T16:08:00.122Z
2023-12-15T00:00:00.000
{ "year": 2023, "sha1": "4016fe2e7916b349f58d53f2e8a756bccb9c147a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2413-4155/5/4/46/pdf?version=1702628551", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f1a5f97eef977cfb5a98f44e596a95a1be399333", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
35385224
pes2o/s2orc
v3-fos-license
Spontaneous coronary artery dissection: Case series from two institutions with literature review Spontaneous coronary artery dissection (SCAD) is a rare cause of acute coronary syndrome (ACS). Consequently, its presentation and optimal treatment are yet to be clearly defined. In the current literature, all case series report less than 50 patients, most of whom are either young peripartum women or women who have used oral contraceptives over long periods. All information in this study was compiled by the database services of two hospitals, the first one between 2003 and 2012 and the second one between 2007 and 2012, to include the clinical characteristics, angiography, and treatment approaches in the study population. The study population consisted of four women (50%) and four men (50%) whose ages ranged between 28 and 57 years. Two women had a history of oral contraceptive use and three women presented during peripartum. None of the patients had traditional cardiovascular risk factors or previous heart disease. In 88% of the cases, the principal diagnoses were non-ST segment elevation myocardial infarction and unstable angina. All patients underwent emergency coronary angiography and percutaneous coronary intervention (PCI). Half of them were treated with drug-eluting stents and the other half with bare metal stents. The most frequent type of dissection was NHLBI type E, and the right coronary artery was the most frequently compromised. SCAD is a rare cause of ACS; however, its identification has improved due to the availability of angiography and new complementary techniques. Regarding treatment, PCI seems effective with adequate long-term results. Introduction Spontaneous coronary artery dissection (SCAD) is a rare cause of acute coronary syndrome. As such, the incidence, pathogenesis, and treatment have yet to be clearly defined. The first reports were based on post-mortem studies of fatal cases. In the current literature, publications contain less than 50 patients, most of whom are either young peripartum women or women who have used oral contraceptives over long periods. In this study, we evaluated the characteristics of eight patients admitted to our institutions with a diagnosis of SCAD by virtue of the classification of the dissection, associated risk factors, clinical presentation, treatment performed, and long-term prognosis. In addition, we compiled a literature review to assess the fundamental aspects of this condition. Study methodology We evaluated 19,625 coronary angiograms between 2003 and 2012 from patients treated at Clinica Cardio VID and Hospital Pablo Tobon Uribe, two interventional cardiology centers. We collected the clinical characteristics, angiographic features, and treatment strategies, and followed up at one year. We found a total of eight patients who were angiographically diagnosed with SCAD. In accordance with the National Heart, Lung and Blood Institute (NHLBI) scale, we defined a coronary dissection as a double lumen in an artery with a radiolucent flap (1). We conducted a demographic descriptive analysis of patients with SCAD. We determined the type of acute coronary syndrome with which the patient presented and documented the treatment received. Follow-up at one year was achieved in four of the eight patients (50%). Results Table 1 summarizes the clinical characteristics and treatment. Four of the patients were women (50%) and six of the patients were Caucasian (75%). The age range was between 28 and 57 years, with a mean age of 43.5 years (50 for men, 37 for women).
Among women, the most prevalent risk factor was the postpartum state, which was present in 75% of the women. Prior use of contraception was reported in two of the women. No patients had traditional cardiovascular risk factors or previous heart disease, and none had any history of secondary coronary dissection either. The reasons for admission were non-STEMI (63%) and unstable angina (37%). All patients underwent emergency coronary angiography and received immediate percutaneous treatment. Four patients were treated with bare metal stents (50%) and four with drug-eluting stents (50%), all of which resulted in TIMI 3 flow. Table 2 indicates the angiographic characteristics. The right coronary artery was most frequently compromised (63%), and the majority of dissections were NHLBI type E (88%). Only one patient had severely impaired left ventricular systolic function, with an ejection fraction of 10%. No patients died during hospitalization. Event-free survival and angina events were gathered from only four patients because the other four were lost to follow-up. Literature review SCAD is an uncommon cause of acute coronary syndrome and sudden death that is poorly understood and classically affects an otherwise healthy and young population (2)(3)(4)(5). It is a separation between the layers of the coronary artery (intima or media), which creates a flap with free communication between the true and false lumen, or an intramural hematoma causing blood flow obstruction. Furthermore, the rupture of the vasa vasorum may also generate vessel wall hemorrhage without communication with the lumen (2, 3). However, it is not clear whether the dissection or the hematoma happens first. Although often presenting as an ACS, the pathophysiology is completely different in the sense that SCAD does not involve atherosclerosis or plaque rupture, but is rather the result of the aforementioned mechanisms. This is corroborated by the fact that it is more common in young women without risk factors for atherosclerosis. The true prevalence is unknown; however, advances in imaging techniques have led to new understanding of diagnosis and management, particularly with the use of intravascular ultrasound (IVUS), CT angiography, and optical coherence tomography (OCT) (2-4, 6, 7). The average age of incidence is 42 years, with about 80% of patients being female; 20-25% of the cases occur in the peripartum period and are more frequently diagnosed in the left coronary artery (4,8). It is also associated with collagen diseases. Interestingly, most cases are diagnosed at autopsy (5,7). In terms of classification, coronary dissection can be primary or secondary. Primary dissections occur spontaneously, whereas secondary dissections can be caused by an extension of an aortic root dissection, percutaneous coronary intervention, cardiac surgery, or thoracic trauma (5). The NHLBI classifies dissections based on angiographic appearance. Type A dissections are characterized by the presence of radiolucent areas within the vessel lumen during contrast injection, with little or no persistence once the dye has cleared. Type B dissections present parallel tracts or a double lumen separated by a radiolucent area during the contrast injection, with little or no persistence once the dye has cleared. Type C dissections have persistent contrast extravasation. Type D dissections are spiral luminal filling defects on vessels with complete but slow distal flow. Type E dissections are persistent filling defects with slow anterograde flow.
Type F dissections result in total occlusion of the vessel (1, 5). The first reported case was described in 1931 in the autopsy of a 42-year-old woman (5). To date, no single publication has presented more than 50 cases. Although the first reports were from autopsy studies, the advent and availability of coronary angiography has enabled early diagnosis and evaluation of the possible pathophysiological mechanisms. The incidence of SCAD by angiography is highly variable, typically ranging from 0.07% to 1.1% of patients referred for cardiac catheterization (9, 10). Our study found a prevalence of 0.035% in a sample population of more than 19,000 patients. Risk factors for SCAD include African descent, age over 35 years, multiparity, hypertension, thrombophilia, diabetes mellitus, smoking, and pre-eclampsia. Our results had similar mean ages for men and women (38 and 48 years, respectively). Women comprised 57% of our sample population, which is a smaller proportion than in previous studies (9,10). However, the proportion of our cases associated with peripartum was 70%, which is consistent with the literature. In our population, the most common clinical presentation was non-STEMI. Contrary to previous series, our study found that the circumflex artery was most commonly affected among women, and the RCA was most compromised among men. Additionally, two risk factors that occur in our registry with a higher frequency than in the previous literature are Caucasian descent and hypertension (5). The pathogenic mechanisms of SCAD are still under investigation, and many of the hypotheses described are based on speculation or causal associations. The results of pathologic descriptions have come from autopsies of patients who had presented with sudden death due to cardiac arrest, and are now being complemented with the use of the aforesaid intravascular imaging techniques. Contrary to aortic dissections, SCAD is typically circumferential and located within the external third of the tunica media or between the tunica media and tunica adventitia, which creates a false lumen, all of this in the absence of traumatic or iatrogenic causes. Furthermore, this lumen expands because of blood or clot accumulation, leading to the distal propagation of the dissection, compressing the real lumen, and resulting in myocardial ischemia. The most common conditions associated with SCAD are atherosclerosis and vascular changes during the peripartum period. One third of all SCAD cases occur in women in the peripartum period, possibly as a result of hormonal and hemodynamic changes that ensue during pregnancy and early after delivery. The peak incidence is within the first 2 weeks after delivery, and it may take up to six months to return to the prepregnancy baseline (5,11,12). Pregnancy increases cardiac output and plasma volume, which in turn increase ejection fraction and blood pressure. These increases lead to a higher shear force on the artery walls that can lead to intimal tears. In addition, hormonal and biochemical changes (decreased collagen synthesis and increased production of progesterone) weaken the middle layer of the vessel, thereby increasing the likelihood of SCAD during peripartum. After a patient has developed SCAD, pregnancy is not recommended (2,13). Other causes include systemic inflammatory conditions, such as periarteritis nodosa, lupus erythematosus, and eosinophilia.
In the latter, the proposed mechanism is related to adventitial infiltration by eosinophils. This infiltration is described in up to 43% of SCAD cases and suggests a relationship between the eosinophilic infiltration of the cervix during pregnancy and eosinophil infiltration of the coronary artery adventitia (18). Cytotoxin release results in the lysis of the tunica media and damage to the arterial collagen that predisposes to dissection. Some studies propose that eosinophils are also part of the mechanism of peripartum cardiomyopathy; however, more studies are needed to support this hypothesis (14,15). Our study reported the peripartum period as an important risk factor, present in 42% of the cases in our registry; however, this could be due to selection bias because all subjects were referred to a tertiary reference center. A smaller percentage of coronary dissections is due to spontaneous ruptures of atherosclerotic plaques, although the diagnoses of this type of SCAD have been increasing with the availability of IVUS (5). This atherosclerotic inflammatory process could lead to intramedial hemorrhage, which could further extend the dissection. Other conditions associated with SCAD are Marfan syndrome, Ehlers-Danlos syndrome, cystic medial degeneration, vasculitic processes, substances such as cocaine (by inducing systemic hypertension or coronary vasospasm), and other idiopathic cases in which no underlying condition can be detected (5). The clinical presentation of SCAD is similar to that of acute coronary syndromes (4,12). Symptoms may present as unstable angina, acute myocardial infarction, ventricular arrhythmias, or sudden death. SCAD should be considered in the differential diagnosis for a young patient with no risk factors, a postpartum woman with acute coronary syndrome, or a presentation of sudden death. Coronary angiography is recommended to rule out SCAD (5,12,16). The presentation of our patients who had SCAD was either non-STEMI (63%) or unstable angina (37%). Coronary angiography remains the most common method for clinical diagnosis; however, it requires a high degree of suspicion (2,4). Particular care must be taken when performing coronary angiography in a patient with suspected SCAD because the procedure itself is a risk factor for extending the dissection. A fluoroscopic diagnosis requires the presence of a thin longitudinal radiolucency representing the dissection flap, creating two or more lumens. This is exceptionally important because adequate differentiation of ACS due to SCAD from that due to atherosclerosis is vital, as the treatment approaches are completely different (2). Although angiography is the gold standard, it cannot be used to visualize the coronary wall; hence, its accuracy for diagnosis is limited (3). In this sense, complementary imaging techniques can aid in identifying the lesion (3). IVUS, for instance, can aid in identifying intramural hematomas without a dissection flap, particularly those caused by atherosclerosis. It has been described as diagnostically helpful for navigating the guidewire into the true lumen, even when angiography yields a good appearance (7). Optical coherence tomography (OCT) is a new tool that provides a higher resolution to visualize the intimal tear and evaluate the length of hematomas, providing valuable insights on atherosclerosis and the results of coronary interventions.
In a prospective study of OCT performed in 17 consecutive patients out of 5,002 undergoing coronary angiography, OCT was able to rule out the diagnosis of SCAD in 6 patients and confirmed the presence of SCAD in the rest (3). Other lower-resolution tools, such as magnetic resonance imaging (MRI) and cardiac computed tomography angiography (CCTA), have been used more for monitoring purposes (17,18). Angiography allowed us to only document the dissection flap with the visualization of the true and false lumen (Fig. 1). Therefore, we may have underestimated the true incidence of SCAD because we did not have IVUS or OCT as diagnostic tools for most of our patients (Fig. 2). Currently, there are few treatment management guidelines for patients with SCAD. Treatment options include medical therapy, percutaneous coronary intervention (PCI), or coronary artery bypass surgery (CABS). Therapeutic decisions are based on the individual evaluation of each case. There are reports of cases in which non-interventional treatment adequately resolved SCAD. As can be seen in the algorithm proposed by Vrints et al. (5), CCTA or MRI can be used as imaging methods for monitoring SCAD (Fig. 3). Medical therapy is similar to that for acute coronary syndromes, including antithrombotic therapy with unfractionated or low molecular weight heparin, aspirin, clopidogrel, and GP IIb/IIIa inhibitors, anti-ischemic therapy with beta-blockers, and nitrates (5,19). However, other studies warn about the use of antithrombotic therapy. Although antithrombotic therapy can help reduce the thrombus in the false lumen, allowing normal flow in the true lumen, doing so could increase the risk of bleeding, which expands the intramural hematoma and can cause the collapse of the true lumen. We believe that the use of anticoagulants is not recommended because fibrinolysis has been associated with extension of the coronary dissection and increased mortality. In cases where the dissection causes significantly compromised blood flow, percutaneous intervention or surgery should be considered. In our series, all patients were treated with percutaneous intervention, four with drug-eluting stents and four with bare metal stents. Mortality varies greatly between studies, ranging from 48% to 82%. Recurrent dissections may occur in the months following the initial event. About 50% of patients develop a second episode of SCAD within 2 months (20). Those who survive the initial event have a survival rate of 80% at 25 to 30 months, and men (93%) have better survival rates than women (73%). This disparity can be attributed to the low rates of comorbidity found during peripartum. In our series, event-free survival was available for four of the eight patients, all of whom survived (100%). Conclusion SCAD is a rare cause of acute coronary syndrome and sudden death. Currently, its identification has improved due to the availability of coronary angiography and new complementary techniques, such as IVUS and OCT. This condition should always be considered in the differential diagnoses for young or peripartum women who present with acute coronary syndrome. Percutaneous stenting is an effective treatment with satisfactory long-term results.
2018-04-03T03:56:00.661Z
2015-05-01T00:00:00.000
{ "year": 2015, "sha1": "d001d4d6bd7b4be9038e78299bff9ddc20203639", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.5152/akd.2015.5851", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d001d4d6bd7b4be9038e78299bff9ddc20203639", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
105751117
pes2o/s2orc
v3-fos-license
Volumetric Studies of Some Amino Acids in Aqueous 1,4-Dioxane Solution at 308.15 K Density measurements were carried out for glycine, L-alanine, L-valine, L-leucine, and L-phenylalanine in 10% aqueous 1,4-dioxane solution at 308.15 K. The values of the apparent molar volume and the limiting apparent molar volume have been evaluated from the density data. These values are used for calculating the number of water molecules (nH) hydrated to the amino acids. Transfer volumes at infinite dilution from water to aqueous 1,4-dioxane solution have also been calculated. Group contributions to the partial molar volumes have been determined for the amino acids. The transfer parameters have been interpreted in terms of solute-cosolute interactions on the basis of the co-sphere overlap model. All these parameters are related to the type and extent of intermolecular interactions in binary liquid mixtures. All the results were interpreted in the light of ion-ion and ion-solvent interactions and of the structural effects of the solutes in solution. INTRODUCTION There is a lack of volumetric data over a high concentration range for amino acids and electrolytes, and very few data are available for amino acids in 1,4-dioxane systems 8 . There is a need to examine the effect of amino acids on the properties of electrolytes, and it is preferable to study the properties of model compounds such as amino acids rather than complex biomolecules. In order to understand the effects of ionic species on amino acids in general, various properties of amino acids in aqueous 1,4-dioxane solutions are studied. MATERIAL AND METHODS Five amino acids, namely glycine, L-alanine, L-valine, L-leucine, and L-phenylalanine, of the highest purity were obtained from Sigma Chemical Co. The amino acids were dried in a vacuum oven for 24 h and kept over P2O5 in a vacuum desiccator. 1,4-Dioxane was refluxed and then distilled over sodium metal using a fractionating glass column. The middle fraction distilling at 373 K was collected for use. All the solutions were prepared on the molarity basis. The samples were weighed on a Mettler balance with an accuracy of 0.01 mg. The water used to prepare the solutions was obtained by distilling deionized water over alkaline KMnO4, and it was thoroughly degassed prior to use. The specific conductance of the water used was less than 0.055 × 10⁻⁶ S cm⁻¹. The densities of the solutions were measured using a single-capillary pycnometer made of Borosil glass, with a bulb of total volume 8 cm³ and a capillary of internal diameter 0.1 cm [9][10][11][12][13][14][15]. The details pertaining to the calibration, experimental setup, and operational procedure have been described previously [9][10][11][12][13][14][15]. An average of triplicate measurements was taken into account. The reproducibility of the density measurements was ±3 × 10⁻⁵ g cm⁻³. The temperature was kept constant by a controlled-temperature water bath (Gemini Scientific Instruments, Madras) with an accuracy of ±0.01 °C. In the present study, the densities of these solutions increase with increasing concentration of amino acid, and the plots of density against amino acid concentration in 10% aqueous 1,4-dioxane are found to be linear in all cases. The apparent molar volumes (Vφ) were calculated from the density data using the well-known expression 16,17,18 Vφ = 1000(ρ₀ − ρ)/(Cρ₀) + M/ρ₀ ... (1) where ρ and ρ₀ are the densities of the solution and the solvent, respectively, C is the molarity, and M is the molar mass of the solute.
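As a numerical illustration of Eq. (1), the following minimal Python sketch evaluates Vφ from a density pair; the density and concentration values are invented placeholders, not the measured data of this study.

```python
# Apparent molar volume from Eq. (1):
#   V_phi = 1000*(rho0 - rho)/(C*rho0) + M/rho0   [cm^3 mol^-1]
# rho  = density of the solution (g cm^-3)
# rho0 = density of the solvent, here 10% aqueous 1,4-dioxane (g cm^-3)
# C    = molarity (mol L^-1), M = molar mass of the solute (g mol^-1)

def apparent_molar_volume(rho: float, rho0: float, C: float, M: float) -> float:
    """Apparent molar volume V_phi in cm^3 mol^-1 (Eq. 1)."""
    return 1000.0 * (rho0 - rho) / (C * rho0) + M / rho0

# Illustrative (hypothetical) numbers for glycine, M = 75.07 g/mol:
print(apparent_molar_volume(rho=1.0045, rho0=1.0020, C=0.10, M=75.07))
# -> ~50 cm^3/mol with these made-up inputs; the solution being denser
#    than the solvent makes the first (electrostriction) term negative.
```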
The resulting values of the apparent molar volume (Vφ) as a function of the molar concentration (C) of the amino acids in 10% aqueous 1,4-dioxane at 308.15 K are reported in Table 1. Comparison with earlier results 19 shows that the values of Vφ increase with increasing temperature in aqueous 1,4-dioxane. It is also found that Vφ increases linearly with the size of the alkyl side chain of the amino acids in aqueous 1,4-dioxane. This indicates that the solute-solvent interactions increase with the size of the alkyl side chain of the amino acids and with the concentration of the amino acids, in accordance with earlier results 19 . The variation of the apparent molar volume with the square root of the molar concentration can be represented by Masson's equation 20 Vφ = Vφ⁰ + Sv√C ... (2) where Vφ⁰ is the limiting value of the apparent molar volume (equal to the partial molar volume at infinite dilution) and Sv is the experimental slope. The values of Vφ⁰ and Sv obtained by least-squares fitting of the Vφ values to Equation 2 for the various amino acids in aqueous 1,4-dioxane are reported in Table 1. The limiting apparent molar volume (Vφ⁰) and the experimental slope have been employed to indicate the type of interaction. The effect of temperature on Vφ⁰ has been interpreted in terms of the solute-solvent interactions, while that on the experimental slope has been interpreted in terms of solute-solute and solute-solvent interactions 21,22 . A positive value of Sv suggests that the added electrolyte behaves as a structure maker in the solvent, while a negative value of Sv suggests structure-breaking capacity in the solvent. All the amino acids studied have positive Vφ⁰ values in the binary aqueous solution of 1,4-dioxane. It is also found that Vφ⁰ increases linearly with the size of the alkyl side chain of the amino acids and increases with increasing temperature. This indicates that the cosolute-solvent interactions increase both with increasing temperature and with the size of the alkyl side chain of the amino acids 23 . Since Sv is related to solute-solute interactions, it is evident from Table 1 that the values of the slope Sv for all the amino acids in aqueous 1,4-dioxane are negative, suggesting weak solute-solute interactions in the system. The experimental slope Sv increases with increasing temperature, suggesting that more and more solute is accommodated in the void space left in the packing of the large associated solvent molecules, thus enhancing the structure of the solvent. However, Sv is found to decrease with increasing temperature for L-valine in aqueous 1,4-dioxane, suggesting a decrease in solute-solute interactions with the rise in temperature and indicating a structure-breaking effect of L-valine in aqueous 1,4-dioxane. The negative transfer volumes at infinite dilution (ΔtVφ⁰) in the case of L-valine, L-leucine, and L-phenylalanine indicate the effect of the hydrophobic parts; that is, the interactions between 1,4-dioxane and the zwitterionic centers of the amino acids increase with increasing temperature, and for L-valine the interactions between its non-polar groups and 1,4-dioxane are predominant. The overall effect is that the charged end groups of glycine and L-alanine electrostatically influence the surrounding water molecules, the so-called electrostriction 23 . In other words, the hydration co-spheres of the NH3+ and COO− groups, which are more hydrated in water than in aqueous 1,4-dioxane, will be affected to a greater extent than the latter.
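The least-squares fit behind Eq. (2) can be sketched in a few lines; again, the concentration-volume pairs below are illustrative placeholders rather than the values of Table 1.

```python
import math

def masson_fit(C, Vphi):
    """Ordinary least-squares fit of V_phi = V_phi0 + Sv*sqrt(C) (Eq. 2).
    Returns the pair (V_phi0, Sv)."""
    x = [math.sqrt(c) for c in C]
    n = len(x)
    xm, ym = sum(x) / n, sum(Vphi) / n
    Sv = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, Vphi)) / sum(
        (xi - xm) ** 2 for xi in x
    )
    return ym - Sv * xm, Sv

# Hypothetical glycine-like data in 10% aqueous 1,4-dioxane:
C = [0.02, 0.04, 0.06, 0.08, 0.10]        # mol L^-1
Vphi = [45.2, 44.9, 44.7, 44.5, 44.4]     # cm^3 mol^-1
Vphi0, Sv = masson_fit(C, Vphi)
print(Vphi0, Sv)  # positive intercept V_phi0 and negative slope Sv,
                  # i.e., weak solute-solute interaction, as discussed above
```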
The positive V  0 values results that the dehydration of solute and co-solute occurs more in case of glycine and L-alanine 23 . S. Li. et al., 24 also reported positive V  0 values for different amino acids from water to aqueous glucose solutions. Now to explain partial molar volume data, different models have been used. Franks et. al., 25 have shown that partial molar volume at infinite dilution of a non electrolyte is a combination of two factors by the following equation. ... (4) Where is V int the intrinsic molar volume of the non-hydrated solute V s is the contribution due to the interaction of the solute with water. Some workers 26,27 have suggested that the V int is made of the following type of contribution. ... (5) Where 1 V V is the Van der walls volume 28,29 and V void is the volume associated with void or empty space. For electrolyte zwitter ionic solutes, this equation was modified by Shahidi et. al . 26 to find contribution of one molecule to partial molar volume of a hydrophobic solutes as ... (6) Where V shrinkage is the volume due to shrinkage this is due to interaction of hydrogen bonding sites with water molecules. Assuming that V v,w and V void have the same magnitude in water and aqueous 1,4dioxane positive 0 V   values of glycine and L-alanine might arise from the decrease in V shrinkage in aqueous 1,4-dioxane. The interaction 1,4-dioxane with the zwitter ionic center of amino acids (glycine and Lalanine) reduces the effect electrostriction of water, thereby causing a decrease in V shrinkage . In other words some water molecule may be released as bulk water in presence of 1,4-dioxane. It brings about the increase in volume of the solvent 30 thereby the reducing the strong interactions between amino acids and water. This results in positive volume of transfer from water to aqueous 1,4-dioxane solution observed in case of glycine and L-alanine. Thus a positive 0 V   for glycine and L-alanine results from Values can further be rationalized by co-sphere overlap model developed by Gurney 31 and Frank and Evans 32 . According this property of water molecules in the hydration co sphere depend on the nature of solute molecule. When two solute particles come close enough such that their co-sphere over lap. Some of the co sphere material is displaced and this is accompanied by changes in the thermodynamic parameters. The interaction between aqueous 1,4dioxane and amino acids can be classified as follows 1) hydrophilic-ionic interaction occurring between zwitter ionic centers of amino acids and dipolar parts of 1,4-dioxane 2) Hydrophilichydrophobic interaction occurring between non polar parts of amino acids and hydrophobic parts of 1,4dioxane. According to the co sphere model in terms of solute-co solute interactions, hydrophilic-ionic group interaction contributes positively, whereas hydrophilic-hydrophobic group interaction contributes negatively to the 0 V   values. In case of glycine and L-alinine, the former type of interactions is predominant over the latter and for L-valine, Lleucine, L-phenyl alanine hydrophilic-hydrophobic group interaction are dominating over the hydrophilic-ionic group interaction. It may noted from A. Pal and S. Kumar's 33 findings that values of L alanine are less than those of glycine and L-valine in solutions. 
This is in line with the earlier conclusion, drawn on the basis of the volume of shrinkage, that various solute-cosolute interactions occur in these systems and contribute to different extents depending on the particular amino acid solution. The overall effect is that the solute-cosolute interactions predominate over the solute-solvent interactions, as found for glycine and L-alanine. The hydration of a solute molecule in water is explained on the basis of the Frank and Wen 34 model of solute-solvent interaction, which pictures three different solvent structure regions in the neighborhood of the solute. Just outside the molecule, there is a layer of immobilized and compressed water resulting from the electrostrictive and other attractive forces exerted by the solute. This is surrounded by a slightly less compressed, or "structure-broken", region of water molecules more distantly affected by these forces. The outermost layer is bulk water, which possesses the typical tetracoordinated hydrogen-bonded structure unaffected by any of the above forces. Compressibility measurements indicate the changes in the first two layers of solvent around the solute molecule. In the case of carbohydrate molecules, the water structure is slightly disturbed by the hydrogen-bonded network around the solute; this holds the water around the solute firmly, making the hydration layer even less compressible. The number of water molecules (nH) hydrated to the amino acids was calculated using the method given in refs 35,36,37 nH = Vφ⁰(elect)/(V⁰E − V⁰B) ... (7) where Vφ⁰(elect) is the electrostriction partial molar volume and V⁰E and V⁰B are the molar volumes of electrostricted water and bulk water, respectively. From the computed values of nH, it is found that at all concentrations each amino acid molecule in aqueous 1,4-dioxane is closely bound and forms a complex in a cluster organization with a fixed number of water molecules. Conclusion In summary, we have obtained ion-amino acid interaction parameters from the volumetric properties of glycine, L-alanine, L-valine, L-leucine, and L-phenylalanine in 10% aqueous 1,4-dioxane at 308.15 K. The partial molar volumes of transfer (ΔtVφ⁰) from water to aqueous 1,4-dioxane have been calculated from the measured quantities. The more positive values of ΔtVφ⁰ for glycine and L-alanine indicate the dominance of the charged groups NH3+ and COO−, while the negative ΔtVφ⁰ values in the case of L-valine, L-leucine, and L-phenylalanine indicate the effect of the hydrophobic parts; that is, the interactions between 1,4-dioxane and the zwitterionic centers of the amino acids increase with increasing temperature. For L-valine, the interactions between its non-polar groups and 1,4-dioxane are predominant. The overall effect is that the charged end groups of glycine and L-alanine electrostatically influence the surrounding water molecules, the so-called electrostriction. Also, from the computed values of nH, it is found that at all concentrations each amino acid molecule in aqueous 1,4-dioxane is closely bound and forms a complex in a cluster organization with a fixed number of water molecules.
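As a rough numerical illustration of Eq. (7), the sketch below assumes a made-up electrostriction volume and takes (V⁰E − V⁰B) as the frequently quoted room-temperature value of about −3.3 cm³ mol⁻¹; the constants actually used in this study at 308.15 K are not reproduced here.

```python
def hydration_number(V0_elect: float, dV_EB: float = -3.3) -> float:
    """Eq. (7): n_H = V0_phi(elect) / (V0_E - V0_B).
    V0_elect : electrostriction partial molar volume (cm^3 mol^-1)
    dV_EB    : molar volume of electrostricted minus bulk water,
               assumed here to be -3.3 cm^3 mol^-1 for illustration."""
    return V0_elect / dV_EB

print(hydration_number(-9.9))  # -> 3.0 hydrated water molecules (illustrative)
```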
2019-04-10T13:12:25.027Z
2007-12-28T00:00:00.000
{ "year": 2007, "sha1": "b6cea8a6fa5efe3fb8631ab43b6dfec85529cf1f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.13005/msri/040226", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9ebe0f51d9884a29758011f638fe7d2337ab6e2d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
53159332
pes2o/s2orc
v3-fos-license
Mixed-Ligand Metal–Organic Frameworks and Heteroleptic Coordination Cages as Multifunctional Scaffolds—A Comparison Conspectus Porous nanostructures and materials based on metal-mediated self-assembly have developed into a vibrantly studied subdiscipline of supramolecular chemistry during the past decades. In principle, two branches of such coordination compounds can be distinguished: Metal–organic frameworks (MOFs) on the one side represent infinite porous networks of metals or metal clusters that are connected via organic ligands to give solid-state materials. On the other hand, metal–organic cages (MOCs) are discrete and soluble systems with only a limited number of pores. Formation of a particular structure type is achieved by carefully balancing the donor site angles within the ligands as well as the nature and coordination geometry of the metal component. Years of research on MOFs and MOCs have yielded numerous types of well-defined porous crystals and complex supramolecular architectures. Since various synthetic routes and postsynthetic modification methods have been established, the focus of recent developments has moved toward the preparation of multifunctional systems that are able to mimic the structural and functional complexity of natural enzymes. This Account compares different strategies to prepare multifunctional MOFs and heteroleptic MOCs and gives a perspective on where to move forward. While the preparative toolbox for multifunctional MOFs is already quite mature, pore accessibility and substrate diffusion within the crystal have been identified as major challenges yet to be overcome. Only recently has a set of different strategies for the assembly of heteroleptic MOCs been developed. Such multifunctional cages can be formed from either partially protected or "naked" metal cations. Controlled assembly, producing single products rather than statistical mixtures, leans on assembly-dependent approaches making use of either steric effects or shape complementarity between the ligands. Further strategies include coordination-site engineering and hierarchical assembly of preformed components. The main challenge with heteroleptic, functional MOCs is to find a balance between the required dynamic assembly fidelity and the stability of the resulting system under operating conditions. If these limitations can be overcome in the future, chemists will be able to design multifunctional systems of similar activity and complexity as nature's enzymes from simple and easily accessible synthetic building blocks. Major impacts on chemical sensing, small-molecule recognition and sequestration, drug delivery, and catalysis will be achieved by these materials. INTRODUCTION In recent years, the development of new materials and supramolecular architectures based on biology's principles of hierarchical assembly, combining covalent and non-covalent interactions and embedding a multitude of orthogonal functionality, has experienced increasing attention. 1−3 Natural evolution has tuned proteins to perform highly specific tasks such as molecular recognition, triggered signal transduction, and catalysis with high selectivity and turnover. Proteins constitute complex architectures with discrete pockets for the binding of substrates, chemical signals, and fuels. Enzyme pockets are typically asymmetric and highly functionalized with amino acid residues, giving rise to environments of specific shape, charge, and polarity.
These discrete binding sites allow, for example, the selective recognition of small biomolecular signals, triggering consecutive processes. Specific redox and pH conditions significantly deviating from those of the surrounding medium are often established. Enzymatic catalysis is driven by factors such as substrate preorganization, proximity of embedded catalytic sites or cofactors and substrates, and directing effects by the surrounding protein structure. Metalloproteins contain mono- or multinuclear metal centers, usually featuring heteroleptic coordination environments, in a catalytic compartment that is protected by the protein shell. 4 Synthetic chemists have been inspired by the structural and functional complexity of biocatalytic systems ever since their molecular features have been stepwise unraveled. Mimicking their capabilities by artificial constructs is regarded as a challenging aim. Thus, in parallel to the breathtaking progress in bioengineering and synthetic biology, fully human-made structures with bioinspired function and dynamics have been developed in the last decades, many of them belonging to the realm of bottom-up supramolecular chemistry (e.g., switchable rotaxanes and catenanes, unidirectional rotors, and molecular machines). 5 Modular and dynamic self-assembly, often based on metal cations and organic ligands, has been used extensively in the preparation of these and related structures with relatively moderate synthetic efforts. While approaches toward the preparation of monofunctionalized architectures (containing only one type of bridging organic element) are highly advanced, strategies toward the controlled implementation of multiple functionalities, thus representing a further level of complexity, are still in their infancy. This Account picks up selected examples of two subdisciplines of metallo-supramolecular systems, namely, metal–organic frameworks (MOFs) and metal–organic cages (MOCs, also termed coordination cages), in a comparative manner with a focus on rational assembly strategies toward multifunctional structures and future application potential. MIXED-LIGAND METAL–ORGANIC FRAMEWORKS In the past decade, the potential of MOFs as enzyme mimics has been investigated in detail. 6 MOFs are highly porous, heterogeneous solid-state materials providing channels and pores of a specific size that are available for the uptake of guests such as gases or small soluble molecules. They are usually built up from organic ligands, often with carboxylate or nitrogen donors, and multinuclear metal clusters. Typically, MOFs are prepared via solvothermal synthesis, where the organic building blocks and metal precursors are heated in a polar solvent such as DMF (Figure 1a). When complex metal clusters, such as Zr6O4(OH)4 in UiO-66 (UiO = University in Oslo; Zr6O4(OH)4(BDC)6, BDC = 1,4-benzenedicarboxylate), are used as building blocks, a modulator (e.g., benzoic acid) that assists in preassembly of the cluster is added to the reaction mixture. After formation of the cluster, the modulator is exchanged with the main ligand, and the MOF crystal grows step by step. 7 Homoleptic MOFs have already shown high potential for applications such as gas storage, 8 chemical sensing, 9 drug delivery, 10 and catalysis. 11−13 In the context of enzyme mimicry, 6 the preparation of mixed-ligand MOFs with multiple functions lining the cavities is emerging as a highly promising approach toward the implementation of fine-tuned reactivity.
Different strategies have been used to introduce multiple functional ligands. Most commonly, mixed-ligand MOFs have been prepared by solvothermal synthesis from ligand mixtures (Figure 1a). Yaghi and co-workers demonstrated that up to eight BDC ligands with different side functions can be introduced into MOF-5 (Zn4O(BDC)3; Figure 1b). 14 All of the functions were distributed statistically over the whole crystal. Another strategy, leading to more ordered structures, involves mixing ligands with different topologies that are incorporated into specific positions. UMCM-1 (UMCM = University of Michigan Crystalline Material; (Zn4O)9(BDC)6(BTB)5, BTB = 1,3,5-benzene-tri-4-carboxyphenyl), for instance, is constructed from tritopic BTB and ditopic BDC that are assembled in a predetermined relation (Figure 2a). 15 Further examples have recently been summarized in a review by Yaghi and co-workers. 16 A further strategy is the sequential installation of linkers into a preformed framework, as demonstrated with PCN-700 (Figure 2c). 17 First, the MOF is constructed from Zr6O4(OH)8(H2O)4 clusters, each connected to eight Me2-BPDC ligands. The sterically bulky methyl groups force the two phenyl rings to adopt a perpendicular position relative to each other, resulting in a different structure than in the closely related UiO-67 (Zr6O4(OH)4(BPDC)6, BPDC = biphenyl-4,4′-dicarboxylate), 7 in which 12 BPDC ligands are connected to each Zr6 cluster (Figure 2b). In PCN-700, two open pockets are formed, which can be postsynthetically filled with BDC and terphenyldicarboxylate (TPDC), respectively. Other approaches include the initial preparation of two-dimensional MOF sheets that are subsequently connected by a second ligand into three-dimensional bulk compounds (Figure 3a). This strategy is similar to the layer-by-layer method, in which a MOF is sequentially grown on a substrate by alternating treatment with the metal precursor and ligands. 18 Both strategies allow the controlled introduction of different ligands that in principle can carry different functionalities. While small and robust functionalities such as amine groups can be introduced directly during solvothermal synthesis, more complex and labile functions are added by milder methods such as postsynthetic ligand exchange (PSE) or modification (PSM). 19 In both, the balance between diffusion and reaction rate determines the outcome (Figure 3b). Matzger and co-workers tested PSE on three commonly used MOFs that are all based on BDC linkers: MOF-5, UiO-66, and UMCM-8 (Zn4O(BDC)1.5(naphthalene-2,6-dicarboxylate)1.5). 20 The authors used the deuterated analogue BDC-d4 for PSE and investigated the resulting samples by Raman spectroscopy. In all three cases, core–shell structures were formed, where the ligand exchange happened at the surface of the particle. Matzger concluded that diffusion of the carboxylic acid is very slow, directing ligand exchange to occur at the outer shell of MOF crystals. In contrast to this, Ott and Primetzhofer used Rutherford backscattering spectrometry to investigate the exchange of BDC-I (I = iodine) within UiO-66. 21 They found a homogeneous distribution over the whole crystal even after very short PSE times, indicating fast diffusion of the ligand and comparably slow exchange. The difference in the two observations is attributed to steric and electronic effects of iodine on the ligand exchange. In recent years, MOFs have been considered as enzyme mimics, as they possess defined pores and channels similar to those of proteins.
Pullen et al. 22 utilized PSE to functionalize UiO-66 with [FeFe](dcbdt)(CO)6 (dcbdt = 1,4-dicarboxylbenzene-2,3-dithiolate), a member of the family of [FeFe]-hydrogenase active-site mimics that are proton reduction catalysts (Figure 4a). About 14% of the ligands were exchanged in the parent framework, indicating dispersion of the complex over the whole crystal. Incorporation of the catalyst yielded improved performance in photochemical hydrogen production in aqueous buffer solution with Ru(bpy)3Cl as a photosensitizer compared with homogeneous [FeFe](dcbdt)(CO)6 in solution, which was attributed to stabilization of the active catalyst species by the surrounding MOF. In a second study, an analogous complex, [FeFe](mcbdt)(CO)6 (mcbdt = 1-monocarboxylbenzene-2,3-dithiolate), was introduced to MIL-101(Cr)-NH2 (MIL = Matériaux de l'Institut Lavoisier; Cr3F(H2O)2O(BDC-NH2)3) via amide coupling at the BDC-NH2 ligands (Figure 4b). 23 Improved performance in hydrogen production was observed in this system also. The main difference between UiO-66 and MIL-101 is the pore size (9 vs 29−34 Å, respectively). A direct comparison led to the conclusion that in MIL-101-[FeFe], all of the catalysts are in principle accessible and thus actively participate in hydrogen production, while in UiO-66-[FeFe] only the catalysts on the outer shell were available for reduction by Ru(bpy)3Cl. This study is prominent evidence that pore accessibility plays an important role in the application of MOFs. Accessibility strongly depends on substrate diffusion within the crystal as well as on the pore (window) size. Diffusion pathways increase with grain size, resulting in increasing discrimination of pores that are further inside. Even in mixed-ligand MOFs, functional sites are nonidentical in relation to their position within the crystal (Figure 5a). It should be noted, however, that MOFs are often not perfect crystals and contain defects or cracks, which might influence the pore accessibility. Based on this, a recently developed strategy for improving substrate diffusion within MOFs is the construction of hierarchically porous MOFs, for example, through ligand labilization or use of a modulator. 24 Furthermore, a major challenge yet to be overcome is the difficulty of predicting the activity and selectivity of such systems. It is crucial to be able to study and understand the individual steps of these processes. A clear drawback of MOFs in this respect is their insolubility, which complicates the use of traditional solution-based methods such as NMR or advanced (transient) absorption spectroscopy. Both of these shortcomings may be tackled with small-size, soluble coordination cages, which are discussed in the next section. DESIGN PRINCIPLES FOR ASSEMBLY OF HETEROLEPTIC METAL–ORGANIC CAGES Metal–organic coordination cages represent the smallest possible MOF-like assemblies featuring a limited number of pores. 25 Metal-mediated assembly of homoleptic MOCs has already reached a high level of maturity, and structural characterization by NMR methods and single-crystal X-ray diffraction is straightforward. The preparation of such systems usually proceeds in the following manner: metal precursor and ligands are dissolved and heated until the desired cages have assembled as the thermodynamically most favorable products. Square-planar, diamagnetic palladium(II) has been used extensively, allowing cage assembly to be followed by NMR spectroscopy.
In the case of most Pd-mediated assemblies, cage formation with nitrogen-donor ligands is finished after 1−24 h. 26 Heteroleptic coordination cages represent a new class of MOCs offering high potential for application in guest recognition, chemical sensing, and catalysis: the combination of a guest binding site with a second function such as chirality, a photosensitizing unit, proton or electron relays, or a catalyst may lead to complexity similar to that present in proteins. All of the components can be brought together in a modular, nonstatistical approach, allowing quick and easy tuning of the chemical environment in the cavity. Such systems not only allow the rational design and detailed examination of an outer coordination sphere around a functionality but also serve as model systems for larger MOFs and, merged with the latter concept, may in the future facilitate exploitation of advantages of both MOF and MOC chemistry. For these reasons, it is highly desirable to advance the methodology for the preparation, examination, and application of functionalized heteroleptic cages. General Aspects Numerous homoleptic cages have been prepared by means of metal-mediated self-assembly over the last decades. Within this overview, we mainly restrict the discussion to the use of banana-shaped ligands to prepare smaller M2L4 cages as well as large M12L24 spheres. 29 One successful strategy for obtaining heteroleptic cages is the hierarchical assembly of cis-protected metal centers (e.g., Pd(en) or Pt(PR3)2; en = ethylenediamine) with a suitable set of donor ligands. 30 On the other hand, "naked" metal ions such as square-planar Pd(II) also allow the rational formation of mixed-ligand Pd2L2L′2 cages when the right combination of ligands is employed. 31 When Pd(II) ions and a mixture of two different bis-monodentate ligands are mixed, three potential outcomes can be expected: (1) narcissistic assembly leading to the formation of coexisting homoleptic cages, (2) formation of statistical mixtures of heteroleptic cages, or (3) assembly of a single heteroleptic species based on rational design. While the former two require further treatment and separation, the latter leads directly to a single desired heteroleptic product. In this context, the principle of integrative self-sorting arises, which is the nonstatistical preparation of a single heteroleptic cage product from a suitable mixture of metal source and ligands (or by mixing of homoleptic cage precursors). 32 In the following, different strategies for rational cage design based on integrative self-sorting are discussed (Figure 6). Templating Effects One approach to obtain multicomponent supramolecular cages can be the addition of guest molecules as templates during cage formation. Early examples of templated heteroleptic cage synthesis were shown by Fujita. In 2000, he utilized cis-protected Pd(II) together with two tritopic pyridine ligands. Assembly into homo- or heteroleptic cages was found to be in an equilibrium that could be influenced by the addition of different guests. 33 The same group exploited guest-templated synthesis of a heteroleptic prism from cis-protected Pt(II), tris(pyridine)triazine, and pyrazine. Large aromatic guests such as a triphenylene derivative allowed the selective formation of a multicomponent prism (Figure 7a). 34 More recently, Yoshizawa demonstrated the use of fullerene C60 as a template for the formation of a heteroleptic cage (Figure 7b). 27
First, two homoleptic cages based on anthracene ligands with phenylene and naphthalene backbones, respectively, were prepared. While the larger cage could host C70 and diethyl malonate-derivatized C60, the smaller cage was unable to host these guests. Mixing the two preformed cages in the absence of guest molecules led to the formation of a statistical mixture of heteroleptic cages. Addition of fullerene led to reorganization into one single species, Pd2L2L′2 in the cis form. It was concluded that C60 shows the best host–guest interactions with the heteroleptic cage, thus yielding a large energetic contribution to its stabilization. Templating is a powerful strategy to form heteroleptic cages. As a drawback, however, the cavity is already filled with the template. Steric Effects and Ligand Interaction Hooley investigated the influence of steric bulk in the ligand backbone on the formation of heteroleptic cages with bis(pyridine) ligands (Figure 8a). 35 Three ligands with endohedral functions of increasing size were prepared and combined with the unfunctionalized derivative 8c. Unfunctionalized 8c and ligand 8d with the least sterically demanding functional group (NH2) both form homoleptic cages cleanly when Pd(II) is added. Mixing both ligands and Pd(II) gave a complex NMR spectrum, indicating a statistical mixture of heteroleptic cages. Using 8a with the bulkier trifluoroacetate in the endohedral position together with the unfunctionalized ligand allowed for the formation of a Pd2(8a)1(8c)3 cage along with homoleptic Pd2(8c)4. Homoleptic cages with 8a were not observed. Crowley examined ligand interaction as a strategy to control heteroleptic assembly. He achieved clean cis-heteroleptic Pd2(8e)2(8f)2 cages by installing amines at the 2-position of the pyridine donor ligands 8f (Figure 8b). 36 Formation of only heteroleptic cages was controlled by kinetic effects: hydrogen bonding between the amines and the α-hydrogens of the unsubstituted ligands stabilized the cis cage. Furthermore, the amines sterically hinder nucleophiles from attacking Pd(II) and thus make the heteroleptic cage kinetically most favorable. Shape Complementarity of Ligands Li and Zhou 37 demonstrated the formation of heteroleptic structures by partial ligand substitution in preformed homoleptic cages. First, homoleptic cages based on dicarboxylic acid ligands and Cu(II) paddlewheel nodes were prepared. Subsequently, the cages were exposed to a dicarboxylate ligand with a longer backbone, leading to a mixed-ligand cage. More recently, Kitagawa showed that such structures can be directly obtained when a mixture of 5-(tert-butyl)isophthalic acid and azobenzene-3,3′-dicarboxylic acid is reacted with a Cu(II) source. 38 Fujita and co-workers used ligands of different lengths to study the formation of heteroleptic icosahedral spheres. 39 The authors found that the difference in size has to be significant in order to form clean heteroleptic spheres, such as with bis(pyridyl)benzene together with the extended bis(pyridylethynylphenyl)benzene. Each ligand individually forms a homoleptic M12L24 cuboctahedral complex when it is reacted with Pd(II). Mixing the ligands 1:1:1 with Pd(II) in one pot results in the clean formation of Pd12L12L′12. Clever developed a strategy based on geometric complementarity of the ligands.
Acridone ligands (A) with inward-bent isoquinoline donors were mixed with phenanthrene-based ligands (P) bearing outward-bent pyridines and Pd(II) to form cis-Pd2L2L′2 cages (Figure 9). 40 The concept could further be expanded to carbazole ligands (C). 28 On the basis of these initial results, the Clever lab is currently expanding the ligand scope in order to demonstrate the ubiquitous application of this approach. Coordination-Site Engineering Utilization of cis-protected metal centers as building blocks for the hierarchical assembly of heteroleptic cages has been explored extensively. For example, Stang has constructed prisms through the charge separation approach between adjacent carboxylate and pyridine donors. 41 Cis-protected Pt(PEt3)2(OTf)2 was reacted with tri- or tetradentate pyridine ligands and sodium terephthalate to obtain multicomponent supramolecular prisms. The formation of heteroleptic structures was attributed to a preference to combine one negatively charged carboxylate and one pyridine at each metal center, leading to charge separation, in contrast to homoleptic assemblies (Figure 10). Interestingly, the authors also showed the transformation of preformed homoleptic supramolecular structures into the heteroleptic form upon mixing. Mukherjee investigated heteroleptic assembly based on cis-protected Pd with a mixture of imidazole and pyridine donors. 30b In 2005, Fujita demonstrated the side-chain-directed complementary assembly of a heteroleptic M6L3L′2 prism. A combination of tris(pyridine)triazine and bilutidinyl ligands was reacted with cis-protected Pd(II). 42 Sterically demanding methyl groups in proximity to the donor site in the latter ligand led to selective heteroleptic assembly (Figure 11a). A similar approach, but transferred to "naked" Pd(II), was recently utilized by Clever, who prepared acridone- and phenothiazine-based picolyl ligands from 5- or 3-ethynyl-2-picoline. 43 The respective ligands featured methyl groups pointing either inward (Ai) or outward (Ao). Using only acridone-based ligands for the formation of homoleptic cages resulted in either complex mixtures or bowl-shaped Pd2L3(CH3CN)2 structures. The formation of clean Pd2L4 cages was less favorable because of the sterically demanding methyl groups. However, a 1:1:1 mixture of Ao, Pi, and Pd(II) led to the distinct formation of one heteroleptic cage. Its identity as the cis-[Pd2(Ao)2(Pi)2] stereoisomer was determined by density functional theory calculations and the X-ray structure of a model complex. On the other hand, mixing Ai and Po resulted in a complex mixture of bowl-shaped Pd2(Ai)3 and an interpenetrated double cage from Pi ligands, containing BF4− and Cl− ions. The main difference between the acridone and phenothiazine backbones is that the former is flat while the latter has a bent geometry, the two having distinct influences on the steric preference around the metal center (Figure 11c). Hierarchical Assembly Costas and Ribas prepared A4B2 tetragonal prisms based on a hexa-aza macrocyclic Pd complex (A) and a tetra-anionic porphyrin ligand (B). Assembly is driven by the charge separation approach discussed above, using the carboxylate donors on the porphyrin to coordinate to the Pd metallacycle.
After assembly, the two porphyrins that contain Pd(II) or Zn(II) as the central atom serve as anchors for the encapsulation of functionalized guests. In the first example, Pd-centered porphyrin ligands were used to host a series of anionic π guests. 44 In a second study, Zn-centered porphyrin cages allowed the coordinative encapsulation of ligands with further open coordination sites to bind additional metals (Zn, Fe, or Cu) inside the cavity, both in solution and in the solid state. 45 Furthermore, together with Reek, Ribas and Costas encapsulated a Rh catalyst inside a molecular cage by coordination to the Zn porphyrins (Figure 12). 46 The resulting supramolecular catalyst proved to be highly active and enantioselective for the hydroformylation of styrene and its derivatives. SUMMARY AND PROSPECTUS In this Account, we have summarized a selection of strategies to access heteroleptic metal–organic systems. First, different approaches for the preparation of MOFs containing more than one type of ligand were examined. Several strategies have already been well-established, such as mixing ligands of different topology during solvothermal synthesis and the utilization of postsynthetic methods. On the other side, the self-assembly of heteroleptic MOCs has revealed a set of synthetic tools based on ligand backbone or donor-site engineering. While MOFs are infinite solid-state materials, MOCs represent finite and soluble coordination compounds. Their different nature results in distinct promises and challenges for future application. A great strength of MOFs is the combination of molecular building blocks with the properties of a solid-state material. With respect to applications in selective sequestration and catalysis, facile substrate/product separation along with possibilities for systematic molecular-level materials engineering result. This argument has been stressed in almost every recently published article on applications of MOFs. A major concern that is often overlooked in this respect is the limited accessibility of pores that are located deeper inside the crystal. This is especially relevant in catalysis, where substrate diffusion pathways are affected by the grain size. Placement of functional groups within the crystal can be achieved statistically if the ligands have the same topology. Utilizing ligands with different topologies or making use of sequential linker installation enables incorporation of various functions in a controlled fashion. However, when the material is turned into action, diffusion discriminates against pores that are deeply buried. At the same time, it is difficult to distinguish functional sites spectroscopically and to determine their exact location, accessibility, and relative activity in the crystal. Also, for other applications we should raise the question of whether all of the pores are accessible and contribute to the overall function of the material. These drawbacks are clearly invalid for MOCs, which are substantially smaller than MOFs. In MOCs, substrate exchange mostly depends on the tunable kinetics and thermodynamics of the host–guest interaction. Furthermore, most MOCs are soluble, and therefore, solution-based techniques allow their detailed investigation. The main challenge for MOCs in the future will be to find a good balance between control over assembly and stability of the cage under working conditions.
For many of the discussed strategies, dynamic assembly plays a paramount role because all of the components coordinate and rearrange until the thermodynamic minimum is reached. When the system is put into action, kinetic stability is highly desired in order to ensure that the components do not disassemble. Future research should be directed toward the development of robust, heteroleptic MOCs and detailed investigations of mechanistic aspects of the assembly and performance of these systems. Ultimately, individual molecular cages could then selectively be transformed into larger MOF-like architectures by linking them postsynthetically. Notes The authors declare no competing financial interest.
2018-11-15T08:55:05.779Z
2018-10-31T00:00:00.000
{ "year": 2018, "sha1": "aaec9fedfcc49546545bbe10de55b6bbbef821fd", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.accounts.8b00415", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "aaec9fedfcc49546545bbe10de55b6bbbef821fd", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
250593676
pes2o/s2orc
v3-fos-license
A Tailored Antithrombotic Approach for Patients with Atrial Fibrillation Presenting with Acute Coronary Syndrome and/or Undergoing PCI: A Case Series The combination of oral anticoagulants (OAC) and dual antiplatelet therapy (DAPT) is the mainstay for the treatment of patients with atrial fibrillation (AF) presenting with acute coronary syndrome (ACS) and/or undergoing PCI. However, this treatment leads to a significant increase in risk of bleeding. In most cases, according to the most recent guidelines, triple antithrombotic therapy (TAT) consisting of OAC and DAPT, typically aspirin and clopidogrel, should be limited to one week after ACS and/or PCI (default strategy). On the other hand, in patients with a high ischemic risk (i.e., stent thrombosis) and without increased risk of bleeding, TAT should be continued for up to one month. Direct oral anticoagulants (DOAC) in triple or dual antithrombotic therapy (OAC and P2Y12 inhibitor) should be favored over vitamin K antagonists (VKA) because of their favorable risk/benefit profile. The choice of the duration of TAT (one week or one month) depends on a case-by-case evaluation of a whole series of hemorrhagic or ischemic risk factors for each patient. Likewise, the specific DOAC treatment should be selected according to the clinical characteristics of each patient. We propose a series of paradigmatic clinical cases to illustrate the decision-making work-up in clinical practice. Introduction Atrial fibrillation (AF) and coronary artery disease (CAD) frequently coexist in the same patient [1]. It has been reported that CAD affects more than 20% of patients with AF [2]. Conversely, AF occurs in about 15% of patients with acute coronary syndrome (ACS), and one-third of those are newly diagnosed [3]. Remarkably, when AF and ACS are associated, or if percutaneous coronary intervention (PCI) is performed in a patient with AF, the antithrombotic strategy is more challenging. Indeed, although the combination of oral anticoagulation (OAC) and dual antiplatelet therapy (DAPT) is required to reduce the risk of both thromboembolic and ischemic events, this treatment leads to a significant increase in bleeding risk, up to four times higher compared with OAC alone [3]. A short period of triple antithrombotic therapy (TAT), consisting of OAC and DAPT, typically aspirin and clopidogrel, is recommended [4][5][6][7]. However, considering the increased mortality related to major bleeding [8], the optimal antithrombotic regimen should be carefully evaluated in order to reduce the risk of bleeding. Therefore, in this setting, the assessment of both ischemic and bleeding risk in each patient is needed to personalize a specific antithrombotic/antiplatelet regimen in terms of type, dosing, and duration in order to achieve a net clinical benefit. We propose paradigmatic clinical cases to illustrate decision-making work-ups in clinical practice. Patient 1 A 69-year-old man with a history of diabetes, hypertension, and non-valvular atrial fibrillation (NVAF) on treatment with vitamin K antagonists (VKA) was admitted with acute anterolateral myocardial infarction with ST elevation (STEMI). Pre-treatment with 300 mg of aspirin and 300 mg of clopidogrel was administered. Invasive coronary angiography (ICA) showed 90% stenosis of the first proximal obtuse marginal artery and acute occlusion of the left anterior descending artery (LAD) (Figure 1).
A two-stent strategy was necessary in order to treat the culprit lesion, with the implantation of two polymer-free drug-eluting stents (DES) (Figure 2A). Blood tests showed 13.5 g/dL of haemoglobin, 167,000/mL of platelets, 0.7 mg/dL of creatinine (creatinine clearance (CrCl) was 58 mL/min), an INR of 2.6, normal liver enzyme levels, and hs troponin I of 62 ng/mL. On day 4, the patient received staged PCI and implantation of a DES in the proximal obtuse marginal artery (Figure 2B). He had a CHA2DS2-VASc score of 4 and a HAS-BLED score of 2. OAC with 150 mg b.i.d. of dabigatran was started on a background of low-dose aspirin and clopidogrel. The patient was discharged on day seven on triple therapy considering his high ischaemic risk due to clinical presentation (ACS and, in particular, STEMI) and other anatomical/procedural characteristics (bifurcation with two stents implanted, three lesions and stents implanted, total stent length > 60 mm). The treatment strategy was TAT for one month, followed by dabigatran and clopidogrel (DAT) for 12 months. He did not present recurrent ischemic events at the one-year follow-up. Patient 2 A 68-year-old female who presented with stable angina of Canadian Cardiovascular Society (CCS) grade III and a positive stress echocardiography was referred for ICA. She had poorly controlled hypertension, hypercholesterolemia, and chronic kidney disease (CKD). She also reported a history of palpitations and, 3 years before admission, an episode of melena. Laboratory data showed 10 g/dL of haemoglobin, 155,000/mL of platelets, 1.2 mg/dL of creatinine (CrCl was 35 mL/min), and normal liver enzyme levels. Transthoracic echocardiography (TTE) demonstrated left ventricular hypertrophy (LVH) in the presence of a normal ejection fraction (EF), mild left atrial (LA) dilation, and mild mitral regurgitation (MR). She underwent ICA, which revealed a significant lesion of a large diagonal branch, treated with PCI and implantation of one DES (Figure 3). The patient was treated with aspirin and a loading dose of 600 mg of clopidogrel. On the same day, she developed palpitations, and the ECG showed AF with a high ventricular rate (Figure 4). A pharmacological cardioversion with intravenous amiodarone and subcutaneous enoxaparin for thromboembolic prevention was performed. After a few hours, there was a recovery of sinus rhythm (SR), but episodes of paroxysmal AF were observed in the following days. Both the CHA2DS2-VASc and HAS-BLED scores were 4. Aspirin was discontinued, and the patient was discharged with 75 mg/day of clopidogrel and 15 mg/day of rivaroxaban.
At the six-month follow-up, there were no ischaemic or bleeding events; clopidogrel was discontinued, whereas the oral anticoagulant was maintained. Patient 3 A 64-year-old man with hypertension, hypercholesterolemia, AF, and moderate CKD was admitted to our department with chest pain. Eighteen months prior, he had a non-STEMI (NSTEMI) treated with PCI and implantation of a DES on the LAD, ramus intermedius (RI), and right coronary artery (RCA). He had no liver disease. Medical treatment included warfarin, 100 mg of aspirin daily, 5 mg of bisoprolol daily, 5 mg of ramipril b.i.d., and 40 mg of atorvastatin daily. Physical examination revealed an irregular pulse and blood pressure (BP) of 130/60 mmHg with no signs of congestive heart failure (CHF). He had a respiratory rate of 16/min with a peripheral O2 saturation of 97% upon examination. Auscultation did not reveal any abnormal breathing sounds, rales, or rhonchi. The cardiac examination revealed an irregular rhythm with no murmurs, gallops, or rubs. There was no peripheral edema. His laboratory investigations were as follows: 13 g/dL of haemoglobin, an INR of 1.8, 1.4 mg/dL of serum creatinine (CrCl was 52 mL/min), and hs troponin I of 600 ng/mL (normal < 0.04 ng/mL). The twelve-lead electrocardiogram upon admission showed AF at 68 b.p.m. without any specific ST-T segment changes. The LVEF was 48%. Both the CHA2DS2-VASc and HAS-BLED scores were 2. The patient underwent a transradial ICA, which showed the patency of the previously implanted stents without any coronary stenosis (Figure 5). Considering the presence of ACS and atrial fibrillation, the patient was switched from warfarin plus aspirin to 5 mg of apixaban b.i.d. plus clopidogrel. During the following days, the patient remained asymptomatic and was discharged on the fourth day on DAT for at least 6 months. No ischemic recurrences were reported at the 6-month follow-up. The patient reported minor bleeding (bruising and bleeding gums). Therefore, we decided to drop clopidogrel and continue with 5 mg of apixaban b.i.d. only.
Patient 4 A 67-year-old woman with prior myocardial infarction (MI) and hypertension was admitted to our coronary care unit (CCU) with a diagnosis of STEMI and acute pulmonary edema. She had NVAF treated with VKA and a history of hypersensitivity to aspirin. TTE showed left ventricular dilatation with a severely reduced left ventricular ejection fraction (LVEF 30%) and functional moderate mitral regurgitation. The twelve-lead electrocardiogram (ECG) revealed AF and a left bundle branch block. The blood tests showed normal renal and liver function and no anemia. The ICA revealed severe multi-vessel disease with critical stenosis at the left main (LM), extending distally to the proximal LAD, and significant in-stent restenosis of the RCA. The patient declined coronary artery bypass graft surgery (CABG); hence, she underwent a successful re-PCI of the RCA, and on day 6, she had staged PCI with DES implantation on the LM to the proximal LAD. Intravascular ultrasound (IVUS) was performed to optimize the LM-LAD stent. She had a CHA2DS2-VASc score of 5 and a HAS-BLED score of 2. Given her high clinical and procedural ischaemic risk (ACS and multi-vessel disease with LM involvement) and the unfeasibility of treatment with TAT due to hypersensitivity to aspirin, the patient was discharged with 150 mg of dabigatran b.i.d. and 90 mg of ticagrelor b.i.d., a more effective P2Y12 inhibitor than clopidogrel, for 12 months. She remains well at the 12-month follow-up.
Patient 5 An 85-year-old frail man with NVAF receiving treatment with 30 mg of edoxaban was admitted to our CCU for an inferior STEMI. He underwent transradial primary PCI of the RCA with the placement of a DES. A loading dose of aspirin (250 mg i.v.) and clopidogrel (600 mg) was administered. Laboratory data showed 7.8 g/dL of haemoglobin, 80,000/mL of platelets, 1.8 mg/dL of creatinine (CrCl 30 mL/min), and normal liver enzyme levels. Stratification of both the risk of stroke and bleeding was performed (CHA2DS2-VASc score 4 and HAS-BLED score 2). According to the ARC-HBR criteria and the PRECISE-DAPT score, the patient had a high bleeding risk (HBR). Accordingly, TAT with edoxaban, aspirin, and clopidogrel was limited to the periprocedural phase, and the patient was discharged after five days of hospitalization on DAT (edoxaban plus clopidogrel) for up to 6 months. He was closely followed up thereafter. At the monthly follow-up, the hemoglobin values were stable or slightly rising. The patient remained asymptomatic. Therefore, clopidogrel was discontinued after 6 months. Discussion We report five cases of patients with AF presenting with ACS and/or treated by PCI to show the decision-making work-up in clinical practice regarding the choice of antithrombotic regimen. In this scenario, DOAC should be preferred over VKA because of their favorable risk/benefit profile, as recommended by the current guidelines. The choice of the optimal antithrombotic therapy (TAT or DAT) and the duration of TAT (one week or one month) depend on a careful evaluation of the individual patient's hemorrhagic and ischemic risk factors, as well as evaluation of the coronary anatomy profile and procedural complexity, in order to identify patients who might benefit from prolonged TAT and those who might have an excessive risk of bleeding. The choice of the specific DOAC and its dosage represents the most important challenge in these patients and should be based on clinical characteristics and hemorrhagic and ischemic risk (previous OAC therapy, frailty, renal function, presence of criteria for dose reduction, etc.). The different factors to consider when determining the optimal antithrombotic regimen for individual patients are summarized in Figure 6. Moreover, in Figure 7, we suggest a practical algorithm for the choice of antithrombotic treatment in patients with atrial fibrillation presenting with acute coronary syndrome and/or undergoing PCI.
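The CHA2DS2-VASc scores and creatinine clearances quoted throughout these cases follow standard published formulas; the minimal sketch below restates them as a teaching illustration only, not a clinical tool, and the variable names are our own.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia,
                 vascular_disease, female):
    """CHA2DS2-VASc stroke-risk score (0-9)."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure/LV dysfunction
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2/A: age points
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if prior_stroke_tia else 0   # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease (e.g., prior MI, PAD)
    score += 1 if female else 0             # Sc: sex category (female)
    return score

def cockcroft_gault(age, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example loosely mirroring Patient 1: a 69-year-old hypertensive, diabetic
# man with vascular disease -> 4, matching the score reported in the case.
print(cha2ds2_vasc(chf=False, hypertension=True, age=69, diabetes=True,
                   prior_stroke_tia=False, vascular_disease=True, female=False))
```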
Figure 6. Factors for physicians to consider when determining the optimal antithrombotic regimen for individual patients with atrial fibrillation and acute coronary syndrome or PCI.

In patients with ACS and NVAF, the safety and efficacy of DAT, consisting of a DOAC with a P2Y12 inhibitor, usually clopidogrel, have been specifically addressed in different randomized trials [9][10][11][12], which reported a significantly lower bleeding risk with DAT compared with TAT, without an increase in major adverse cardiac events (MACE). However, these trials were designed to assess the risk of bleeding and were underpowered, in terms of sample size and follow-up length, to investigate the effect on ischemic risk (i.e., stent thrombosis) [9][10][11][12]. Another major limitation of these trials was the mixture of stable CAD and ACS patients: approximately half of the patients had ACS, whereas less than 15% had STEMI. Several meta-analyses investigated the risk of ischemic events in the aforementioned trials, with different results. While two meta-analyses [13,14] reported a reduced risk of bleeding without a significantly increased risk of coronary thrombosis with DAT compared with TAT, others found a small but statistically significant increase in the risk of coronary events, such as stent thrombosis and MI [4,5,15,16]. In patients on a TAT regimen, the use of the newer and more potent P2Y12-receptor inhibitors, prasugrel and ticagrelor, has been discouraged based on safety concerns [6,17], as a greater risk of major bleeding compared with clopidogrel has been reported [18][19][20][21][22]. In the RE-DUAL PCI trial [12], ticagrelor was combined with dabigatran in 12% of patients, whereas this combination was less frequent in PIONEER-AF [9], AUGUSTUS [10], and ENTRUST-AF PCI [14].
Recent guidelines and consensus documents [6,7,17,23] recommend the use of DOACs over VKA in DOAC-eligible patients, at the dose recommended for stroke prevention (Class I) [6]. However, according to the results of PIONEER-AF PCI, a lower dose of rivaroxaban (i.e., 15 mg once daily) could be considered when used in combination with aspirin and/or clopidogrel (Class IIb) [6]. Conversely, reduced doses of apixaban (2.5 mg b.i.d.) and edoxaban (30 mg o.d.) should not be used in the absence of drug-specific criteria for dose reduction. DAT with a DOAC and single antiplatelet therapy, for up to 12 months after one week of TAT, should be used as the default strategy according to the 2020 ESC guidelines [6]. In patients at high risk of bleeding, DAT should be shortened to 6 months, whereas in patients in whom the ischemic risk (based on clinical, anatomical, or procedural characteristics) outweighs the bleeding risk, TAT should be continued for up to 1 month.
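To make the default strategy just summarized easier to follow, here is a minimal sketch of the duration logic in Python. It encodes only the rules stated above (one week of TAT by default, DAT for up to 12 months, DAT shortened to 6 months at high bleeding risk, TAT extended up to 1 month when ischemic risk dominates); the function and parameter names are ours, and this is a didactic illustration of the published algorithm, not a substitute for Figure 7 or for clinical judgment.

```python
def antithrombotic_plan(high_bleeding_risk: bool,
                        ischemic_risk_dominates: bool) -> dict:
    """Sketch of the default post-PCI strategy for AF patients.

    ischemic_risk_dominates: clinical, anatomical, or procedural features
    (e.g., LM stenting, multivessel PCI) that outweigh the bleeding risk.
    """
    tat_weeks = 1      # default: TAT limited to ~1 week (periprocedural)
    dat_months = 12    # default: DAT (DOAC + single antiplatelet) up to 12 months
    if ischemic_risk_dominates and not high_bleeding_risk:
        tat_weeks = 4  # prolong TAT for up to 1 month
    if high_bleeding_risk:
        dat_months = 6  # shorten DAT to 6 months
    return {"TAT_weeks": tat_weeks, "DAT_months": dat_months}

# Patient 5 (high bleeding risk): periprocedural TAT, then DAT for 6 months.
print(antithrombotic_plan(high_bleeding_risk=True, ischemic_risk_dominates=False))
# -> {'TAT_weeks': 1, 'DAT_months': 6}
```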
Our first case showed a patient with NVAF and ACS who underwent primary PCI, with high-risk clinical and angiographic features for ischemic coronary outcomes and a low risk of bleeding. According to the current guidelines, in this case we prescribed TAT with a DOAC and DAPT (aspirin plus clopidogrel) for 1 month following PCI. A loading dose of 300 mg of clopidogrel seems to be a reasonable choice in patients treated with VKA and with an unknown INR value, in order to reduce the risk of bleeding. Moreover, following the guideline recommendations that support the use of DOACs over VKA as combination therapy with antiplatelets, we switched from VKA to dabigatran. The 150 mg dose of dabigatran was chosen based on the patient's clinical characteristics (CHA2DS2-VASc score: 4; HAS-BLED score: 2; creatinine clearance > 50 mL/min). According to the current ESC guidelines, after an appropriate period of TAT, this patient needed to be treated with dabigatran and clopidogrel for up to 12 months, followed by OAC with dabigatran as the chronic treatment [6]. In this regard, the recent AFIRE trial demonstrated the safety of transitioning to DOAC monotherapy without any antiplatelet agent beyond 1 year after cardiac revascularization in AF patients [24].

In the second scenario, we described a patient with high hemorrhagic and lower ischemic risk. The patient had a very high bleeding risk, not only due to her HAS-BLED score > 3, but also because she was female; moreover, she had anemia and a previous history of bleeding. On the other hand, she underwent a simple one-vessel, single-stent PCI with implantation of a new-generation DES for stable CAD. Although there is no clear evidence that the periprocedural onset of AF in patients undergoing PCI carries a risk comparable to that of pre-existing AF, the current guidelines [6] recommend OAC according to the individual's thromboembolic risk (our patient's CHA2DS2-VASc score: 4). Considering all of the aforementioned characteristics, our patient was discharged with short periprocedural TAT followed by a short DAT (for up to 6 months) to balance the hemorrhagic and ischemic risks. Given her high bleeding risk (previous hemorrhage), age, and reduced renal function (eGFR 35 mL/min), we prescribed a reduced dose of rivaroxaban.

The third case described a patient with NSTEMI-ACS managed medically. According to registries, this challenging population accounts for almost one-third of the ACS population. In this regard, AUGUSTUS was the only DOAC trial that included patients with medically managed ACS (approximately 23% of enrolled patients) [25]. The risk of bleeding with apixaban was 56% lower compared with VKA. Importantly, the risk of bleeding was 49% higher with aspirin compared with placebo, without a significant difference in death and ischemic events. The results of this analysis support a DAT consisting of apixaban with clopidogrel for at least the first 6 months, which is the therapeutic option we used in our case. Moreover, differently from the other studies, the factorial design of AUGUSTUS demonstrated that the greater safety of DAT with a DOAC plus a P2Y12 inhibitor, compared with conventional TAT with VKA, a P2Y12 inhibitor, and aspirin, is attributable both to the use of a DOAC instead of VKA and to early aspirin discontinuation.

The fourth patient would ideally have been a candidate for TAT, but TAT was not applicable because of her history of aspirin hypersensitivity. In this post-ACS patient at high risk of coronary thrombosis and low risk of bleeding, a DOAC-based DAT with a more potent P2Y12 inhibitor, ticagrelor, could represent a reasonable treatment option (Class IIb in current guidelines) [23]. RE-DUAL PCI is the largest trial that assessed combination therapy with ticagrelor and a DOAC. A prespecified subgroup analysis from the RE-DUAL PCI trial showed fewer bleeding events in patients treated with dabigatran, irrespective of the dose, plus ticagrelor, compared with those treated with warfarin-based TAT [26]. According to these results, we prescribed dabigatran as combination therapy with ticagrelor.

The last case described an elderly patient with STEMI at high risk of bleeding who was already being treated with a correctly reduced dose of a DOAC.
In this setting, we followed the ESC guidelines' recommended therapy, consisting of TAT with clopidogrel limited to the in-hospital stay, followed by DAT with clopidogrel for a maximum of six months. Other strategies used to avoid bleeding complications in all of the patients were low-dose aspirin (≤100 mg) and the routine use of proton pump inhibitors for gastric protection.

Conclusions

The management of patients with AF presenting with ACS and/or undergoing PCI who need a combination of anticoagulant and antiplatelet therapy remains a common and controversial issue in clinical practice, and there is no one-size-fits-all antithrombotic treatment for these patients. We sought to describe a practical approach to implementing the current guidelines and evidence-based data in real-world clinical practice by reporting a series of paradigmatic clinical cases. As demonstrated in this case series, a patient-tailored approach is crucial for the management of antithrombotic therapy in the setting of AF and ACS or PCI.
Disability Leisure: In what kind of activities, and when and how do youths with intellectual disabilities participate?

The article examines what kind of activities youths with intellectual disabilities participate in during their leisure time, and when and how they participate. The analysis is based on qualitative interviews of ten youths with intellectual disabilities (aged 13-16 years) and their parents (N=20). The study reveals that intellectually disabled youths have the same preferences and wishes for leisure activities as their non-disabled peers. Both genders prefer sports and cultural activities. However, a closer examination reveals marginalisation of intellectually disabled youths from leisure activities organised for young people in general. In our society, the understanding that leisure activities are a private concern is based on the idea of the 'normate'. The 'normate' emerges when we explore the social processes of participation that constitute otherness and systematically marginalise groups of people, here intellectually disabled youths, from organised leisure activities.

Introduction

Leisure activities are important for their content, but even more so because they are an arena for developing peer relations and social inclusion (Kampert and Goreczny 2007). However, children with intellectual disabilities seem to participate less in leisure activities with peers than typically developing children (Solish, Perry, and Minnes 2010), and their degree of participation decreases with age (Wendelborg and Paulsen 2014). While activity is the execution of a task or action by an individual, we understand participation as involvement in life situations (World Health Organization 2001). Leisure activities are typically activities in which individuals freely choose to participate during their spare time because they find such activities enjoyable (Majnemer et al. 2008). The benefits of participation in recreational and leisure activities are well documented, and we will now point to findings that stress the importance of leisure activities as phenomena regardless of age. Taking part in recreational and leisure activities provides opportunities for social interaction and the promotion of friendships (Kampert and Goreczny 2007), and for learning and development (Øia and Fauske 2010). Furthermore, involvement in leisure activities provides opportunities to express oneself in different ways (Kolstad 2011), and to challenge one's existing identity (Devine 2004). For example, participating in cultural activities opens for presenting oneself as an artist instead of an intellectually disabled person (Høiseth 2012) or a client (Gürgens 2004). Physical activities offer the opportunity to contribute to well-being, improved physical fitness and an increased perception of self-efficacy and social competence (Hutzler and Korsensky 2010). In other words, aspects of participation in leisure activities can contribute to enhancing the quality of life of people with disabilities (Badia et al. 2013). The importance of participation in leisure activities has also been acknowledged by the United Nations, in Article 30 of the Convention on the Rights of Persons with Disabilities (2006), which highlights that persons with disabilities should be able to participate on the same terms as others in cultural life, recreation, leisure and sport.
Additionally, in accordance with Article 12 of the UN Convention on the Rights of the Child (UNCRC) (United Nations 1990), children and youths' 'voices' and participation in matters concerning them are not just a model for policy-making, but a legally binding obligation (Lundby 2007). This article examines the leisure participation of Norwegian youths with intellectual disabilities. By asking both intellectually disabled youths themselves and their proxies (parents or foster-parents) what they do in their spare time, we obtained a balanced picture of their situation. Given that intellectual disability is characterized by significant limitations in both intellectual functioning and adaptive behaviour, which cover many everyday social and practical skills (American Association on Intellectual and Developmental Disabilities 2014), we find that the UNCRC has not been adequately implemented for these children, either generally (Carpenter and McConkey 2012) or when it comes to specific leisure activities.

In 2003, Aitchison introduced a shift in the disability leisure research field with her article 'From leisure and disability to disability leisure' (Aitchison 2003), developing a more integrated understanding of disability and leisure. Before this, leisure studies belonged to the non-disabled sphere, and disability studies had paid little attention to leisure. This shift led to an increase in research on disability leisure. The dominant focus of this research has been on the environmental dimensions of availability and accommodability in mainstream leisure activities (King et al. 2013), and this was also our point of departure. However, the importance of the dimensions of accessibility, affordability and acceptability emerged in our findings, thus expanding the two above-mentioned environmental dimensions of participation. Our findings are in line with Granlund (2009) and Maxwell (2012), who highlight the frequency and intensity of participation. They relate frequency and intensity to (1) availability (is it possible to act?), (2) accessibility (can I access the context?), (3) affordability (is it worth it in terms of available resources?), (4) accommodability (can the situation be adapted to my way of functioning?) and (5) acceptability (do I experience acceptance in the situation?) (Maxwell 2012, 21).

Leisure and barriers to participation

Children and young people with intellectual disabilities participate in some activities in the community, but they attend fewer social and recreational activities than their non-disabled peers (Solish, Perry, and Minnes 2010). According to Cowart et al. (2004), these children have the same desires and benefit from the same types of activities as other children. However, reports on disabled youths' participation in leisure activities are somewhat contradictory. On the one hand, persons with intellectual disabilities take part more in passive, solitary activities compared to their non-disabled peers (Buttimer and Tierney 2005). On the other hand, they are members of voluntary organizations to a higher degree than their non-disabled peers, with the exception of sports organizations (Ødegård 2006). Another difference is that while typically developing youths will take part in leisure activities together with peers, intellectually disabled youths will participate more together with parents or other adults (Solish, Perry, and Minnes 2010). King et al.
(2013) suggest that the reason for this difference is that the intellectually disabled are more dependent on support. This support might result in fewer opportunities to develop self-determination and independence. Furthermore, the type of school and the age of the youth are found to affect leisure participation. Intellectually disabled pupils attending regular schools are found to take part in more leisure activities than pupils in special schools (Badia et al. 2013; Wendelborg and Paulsen 2014), and the leisure segregation process seems to increase the further into childhood they have progressed (Wendelborg and Paulsen 2014). Persons with intellectual disabilities seem to encounter more comprehensive barriers to leisure than persons with other types of impairments (Molden and Tøssebro 2009). These barriers include expenses, insufficient resources to accommodate a person's interests, transport challenges and attitudes in the community (Reynolds 2002). Children with intellectual disabilities not only take less part in social activities, they also seem to have fewer friends than children without disabilities (Solish, Perry, and Minnes 2010). For example, many of them have no or only a few close friends, and spend very little time with friends outside of school (Solish, Minnes, and Kupferschmidt 2003; Oates et al. 2011). Furthermore, compared with their non-disabled peers, children and youths with intellectual disabilities participate more in social activities at home (King et al. 2013) and with adults (especially their parents) (Solish, Perry, and Minnes 2010), and with family and other persons with disabilities (Dolva, Kleiven, and Kollstad 2014). Even though more integrated school systems are found to have a positive influence on the level of participation in leisure activities of children with disabilities (including children with intellectual disabilities) (Ullenhag et al. 2012), physical proximity in the community alone does not appear to ensure social inclusion in peer activities and interactions (Solish, Minnes, and Kupferschmidt 2003). When it comes to the participation in physical activities of persons with intellectual disabilities, the amount of research is limited (Ingebrigtsen and Petter Aspvik 2009). However, the research that has been conducted has found that many individuals with intellectual disabilities are highly inactive during their leisure time (Frey 2004), and not active enough to gain health benefits from the activities (Temple, Frey, and Stanish 2006). Children with intellectual disabilities seem to take part less in physical activities and more in recreational activities than children without disabilities (Umb-Carlsson 2008). King et al. (2013) suggest that this might be because recreational activities are easier to get involved in and master, and that there are fewer external barriers when it comes to recreational than physical activities.

Briefly summarized, existing research tells us that youths with intellectual disabilities participate less in leisure activities than their non-disabled peers. This article will ask the youths themselves and their parents about this and listen to their stories to examine: In what kind of activities, and when and how do youths with intellectual disabilities participate in disability leisure? As part of the study, we will compare this group's participation in leisure activities with the existing knowledge of the participation of Norwegian youth in general (Vaage 2013; NOVA 2014).
Before answering the research questions, we will briefly present the Norwegian context and the research methods used. In 1975, the Norwegian Education Act for Special and General Education was merged with the School Act to make the comprehensive Educational Act for compulsory school. This conferred on all children the legal right to pursue an education within their local school, including children in need of special education services. This shift also made it possible for intellectually disabled children to stay in their family home with parents and siblings. Moreover, all central institutions for intellectually disabled people were closed down in 1991, and the responsibility for all kinds of services, including leisure activities, was transferred from the state to the local authority level. The leisure activities were to be provided by the public sector and non-governmental organizations (NGOs) (St. meld. nr. 45, 2012-2013). However, in spite of the intentions behind integrated leisure activities, a number of studies have found that people with intellectual disabilities in Norway still participate more in segregated than in ordinary leisure activities (Kittelsaa 2008; Kolstad 2011; Söderström and Tøssebro 2011).

Participants

To answer the research questions we chose to use qualitative interviews. Ten youths with intellectual disabilities (six boys and four girls, aged 13-16) and their parents or guardians were interviewed (N = 20). For further information, see Table 1. The informants were selected through purposeful sampling, based on having an intellectual disability diagnosis (mild to moderate) and being able to use spoken language. The latter is a consequence of the fact that qualitative interviews require verbal dialogue between the interviewer and the young person, and of our lack of complete competence in augmentative and alternative communication. Due to this, we found it ethically and professionally wise not to invite youths without spoken language to participate in the study. The young people included in the study were from various parts of Norway, and were living in both rural and urban areas. The study was carried out in accordance with the guidelines of the National Ethical Committee for the Social Sciences, and was approved by the Norwegian Social Science Data Service. The 10 youths were recruited voluntarily and anonymously through educational and psychological counselling services and schools. Written consent was obtained from their parents or guardians. In addition, the youths themselves were informed about the study and invited to give their consent as well. Five of the youths gave their written consent, while the other five consented orally (via and in agreement with their parents or guardians). All the 10 youths in the article have been given pseudonyms.

Data and analysis

The first author conducted, audio-recorded and transcribed the interviews. The participants were asked to describe the youths' leisure, for example, what they did, together with whom, where they did these things, what they enjoyed the most, things they wanted to do with their leisure time and so on. The interviews lasted from 20 minutes to two hours. The youths were interviewed either at home or at their school, while their parents/guardians were interviewed at home (with one exception). Often the youths replied in few words and rather short but precise sentences. However, the interviews with the parents were more comprehensive.
They were not conducted to validate the youths' interviews, but to get a broader picture of leisure activities. These interviews filled out the information given by the youths themselves. Both authors undertook the analysis and wrote this article together. The interviews have been analysed and interpreted according to hermeneutic principles, whereby the parts can only be understood in reference to the whole, and the whole can only be understood in reference to the parts (Alvesson and Sköldberg 2008). We started by reading the transcribed interviews several times to identify meaning units. Next, we deconstructed the interviews into meaning units. Third, we put the meaning units into a dialogue encompassing the totality of the interviews and turned them into analytical categories, a reconstruction. Fourth, these categories were put together to constitute the text as a story (Kvale 1997). For example, when the youths and their parents talked about the activities they joined, these were categorized into frequency of activity, formal and informal activities, who took part in the activity with them and so on. In this dialogical process, we continuously moved back and forth between the data and the relevant literature, and between the parts and the whole (Alvesson and Sköldberg 2008). The interviews of the youths and their parents were analysed separately. We present our findings according to our research questions.

Findings

What kind of leisure activities do Norwegian youths with intellectual disabilities participate in?

According to the youths, sports and cultural activities were the most valued activities. Typical statements were 'I love to swim' (Jenny), 'I like the club best' (Karen) and 'I play games on the computer' (Peter). Within sports and cultural participation there was great variety: swimming, football, tennis, fitness centres, riding, skiing, handball, and leisure clubs and musical activities. However, when the youths and their proxies described their activities, we found that they differed from the leisure activities of their non-disabled peers (NOVA 2014; Vaage 2013). The intellectually disabled youths were mainly involved in informal activities and only to a limited extent in formal activities. While formal activities are structured, involve rules or goals and have a coach/leader/instructor, informal activities often involve little or no prior planning (King et al. 2003). Many of the informal activities the youths were involved in took place at home, like playing on the computer, listening to music, watching TV, playing drums and so on. Other examples of home-based activities were cooking (Lisa), playing with dolls (Karen) or small figures (Adam), and taking part in carpentry like building a garage (John) or repairing bikes (Adam). The informants were also involved in informal activities that took place outside of the home, for example going fishing (John), cycling (Adam, John and Mark), swimming (John and Benjamin), shopping (Anna), skiing (John, Lisa and Mark), going to the cinema (David, Adam and Jenny), bowling (Peter), and visiting the library (Jenny), cafés (Anna), the disco (Karen) and the local fire station (John). Furthermore, some of the girls (Jenny and Anna) described how they liked to dress up and put on make-up during their leisure time. Even if limited, some of the informants also participated in formal activities.
Jenny sang in a choir and took part in 4-H meetings (a youth development organization), Lisa and Karen attended Christian youth clubs, Peter played on both the local football and handball teams, and Adam, Jenny and John were members of the local swimming club. Most of the informants spent quite some time on their computers, and this was more the case for the boys than the girls. David, Peter, Benjamin, John and Adam spent a considerable amount of time playing computer games, such as World of Warcraft. The girls were not that much into gaming, and when they did play, they were interested in other games than the boys, for example The Sims (Karen). Furthermore, the girls used the computer more for other activities, such as searching for information on the internet about their favourite band or TV programme, searching for fun videos (Lisa and Anna) or looking at their own photographs (Karen).

When it came to the youths' participation in sports, a very clear pattern emerged. Participation in team sports was rare. David played football and handball, while Anna once a week assisted a relative who coached a handball team. The others participated in individual sports, such as swimming (John, Adam, Jenny, David and Benjamin), skiing (Lisa, John and Mark), cycling (John and Mark), tennis (David) or exercising at the gym (Benjamin). Some participated in their local mainstream teams (David and John), while others took part in teams organized especially for the disabled (Adam and Jenny). Four of the informants were involved in formal activities organized in sports clubs that occurred regularly and with coaching (David, Jenny, Adam and John). However, many of the informants were involved in informal sports activities that took place sporadically and mainly together with their family or their support worker (Adam, Lisa, John, Mark, David and Benjamin). The organized sports of the youths took place once a week, except for David, who attended training with different teams several times a week. In other words, the youths were more involved in recreational activities than in activities with a focus on improving skills and competitive abilities.

Thus, whereas the interests and preferences of the youths may appear to coincide with those of Norwegian young people in general (NOVA 2014; Vaage 2013), the interviews with the parents/foster-parents revealed that most of the youths' leisure activities took place rather sporadically and usually were initiated and facilitated by grown-ups (eight of ten youths). For example, Anna loved outdoor activities (like boating trips, bonfires on the beach, visiting farms and so on), but these only took place every fourth weekend, when she was in respite care. Moreover, social interaction with peers was especially scarce. According to the parents of eight of the youths, peers did not contact their daughter/son in their leisure time. Adam's mother said that: 'He can count on one hand the number of times he has had visits from "normal children" … Adam's leisure time mainly includes his siblings and a hectic family life'. Furthermore, David's mother described how they try to facilitate interaction between David and his peers: 'He's always allowed to bring somebody along when we go swimming, to the cinema, to spend the night, on trips up in the mountains and on holiday, but he very seldom gets invited back. / … / If he had not had his sports he would have been a lonely soul!' However, Karen is an exception, as she plays weekly with the girl next door.
When and how do youths with intellectual disabilities participate in disability leisure?

As mentioned above, leisure activities are typically those activities in which an individual freely chooses to participate during his or her spare time because such activities are enjoyable. However, our interviews revealed that the youths often did not get to decide what to do during their own leisure time. Even if they in principle had the opportunity to choose like everybody else, they often depended on peers or grown-ups to realize their leisure preferences: facilitating the activity, providing practical assistance, and guiding them in the process and in how to perform the activity. Even though youths in general also need transport and assistance, this is rarely as extensive as the needs of the youths in this study. As Lisa's (14 years old) mother said: 'If we don't facilitate for the activities, then there will be no activities! / … / What you as a parent can manage is of paramount importance!' Or as John's father expressed it: '/ … / What he does we must mostly do as well.'

The involvement of the parents was also required in the youths' leisure time at home. For example, when it came to having friends over, Lisa's mother described how this involved them as parents: 'They (Lisa and her friends) don't go down to her room, and things don't function by themselves. You have to be there … ' When it comes to social interaction, some parents (like Lisa's) on occasions like this actually seem to take on a role as a friend as well as a parent. For example, when Lisa struggled with how to keep up the communication and interaction with the friend who was visiting, she was dependent on someone more competent to keep the activity going. Another example of how participation sometimes depended on the presence of others is when David went to the football pitch. Here his peers sometimes did not let him participate, saying things like: 'No! There's no place for you! The team is full.' When alone, David did not know how to respond and ended up watching the others play, while, if his father came with him, the boys would let him join the game. In our study, the more competent persons who facilitated the youths' participation were mainly grown-ups: parents or a support worker. However, there was one exception from this pattern. John went swimming together with a friend who was three years younger. They sometimes accompanied each other back and forth to the swimming lessons, and often John's friend made sure that John understood important information, for example what distance to swim at competitions and so on.

The examples above show how much youths with intellectual disabilities depend on grown-ups in their leisure time, and how some parents experience that nothing in the youths' leisure time happens on its own. An important contextual factor that seemed to influence the youths' leisure time to a high degree was place of residence. Having relatives and friends of the family living nearby seemed to increase their social life. One reason for this was that they were connected to more people they could interact with. Lisa and John, for example, had aunts, uncles and cousins within walking or cycling distance whom they could visit on their own. John also played outside with his younger cousins or on the computer with the older ones. Secondly, if the family had an extensive social network, this also seemed to protect the youths from bullying.
For example, John's mother believed that there would have been greater chances of John being teased if they had not known his peers and their parents. Instead, she experienced that his peers to a certain extent took care of John, for example, by greeting and having a chat with him when they passed by. Thirdly, the geographic place of residence sometimes seemed to be of significance. For example, Karen lived in a very rural area, with only a few houses nearby, and the next neighbourhood was a drive away. According to Karen, her best friend was the girl next door (four years younger). They met and played every day during the summer holiday, about twice a week when at school, and sometimes spent the night at each other's place. The few kids living there played together across ages. Others, like Peter, lived in the middle of a town and did not know their neighbours. According to Peter's mother, he seldom went out because of teasing and bullying by the neighbour kids. Furthermore, Peter, Lisa and Benjamin rarely met schoolmates in their spare time. Going to special schools, they lived quite far away from each other and had trouble meeting each other on their own without assistance.

Another aspect that characterized the social network of some of the youths in this study was how the same persons often filled a number of different roles in the youth's life. For example, Karen had respite care at her teacher's home, and Mark at his teaching assistant's home. Furthermore, another one of Karen's teachers organized the Christian club she attended, while Karen's neighbour was employed as her support worker. Not all the youths were pleased with this type of arrangement. For example, David's teacher was also employed as his support worker. His mother originally thought this would be a perfect match, as the teacher is very much into sports, just as David is. However, David is not very happy with the arrangement. For instance, he experienced a negative response from his peers when, on the weekend, he arrived at the beach together with his teacher, while his peers came there together, without any adults.

In addition to the parents, the support workers seemed to be the most important facilitators of the youths' participation in leisure activities. Seven of the ten youths were granted support workers for 3 to 12 hours weekly, and these were often responsible for taking them to various activities. Many had experienced rather frequent replacement of support workers, which was challenging, as recruiting new ones so often was difficult. Even though support workers were employed and paid for by the local authority, sometimes this person 'seemed more like a friend' (as Peter's mother put it). For example, Peter's support worker spent much more time with Peter and his mum than he was paid for, and sometimes even went with them on trips abroad. Another characteristic (for eight of the ten youths) was that they spent some afternoons and/or one or two weekends every month in respite care. This care was provided at the private home of another family or at small public respite care homes where they stayed together with other disabled children and young people. The possibility to take part in activities when in respite care seemed to vary a great deal. While some respite care centres offered a range of activities, others stated that there was a lack of resources to provide activities since the youths had such divergent care needs.

Discussion

Just like everybody else, or not?
At first glance, the youths themselves seem to describe their leisure time in the same terms as any other Norwegian young person would (NOVA 2014; Vaage 2013). Both genders prefer sports and cultural activities, with boys spending more time at their computers and gaming, and girls taking part more in social and cultural activities. However, on closer examination we find a marginalization of intellectually disabled youths when it comes to leisure activities organized for youths in general. The UNCRC's and the UNCRPD's ambition of giving 'voice' to youths and disabled people is to a certain degree not fulfilled. There is a difference between being a frequent alpine skier in an organized skiing club and going downhill with one's parents a few times a year, and playing drums alone at home differs from being a band member with frequent and regular practice. We know that most youths aged 13-16 participate on a regular basis in organized leisure activities in Norway (NOVA 2014), while many of the intellectually challenged youths do not; their participation is more sporadic.

Maxwell's (2012, 21) framework leads us to a discussion of how the participation of youths with intellectual disabilities unfolds in everyday life. First, we will look into the availability dimension of the youths' leisure. Leisure activities were available where they lived, for example, youth clubs and swimming clubs. However, as Lisa's mother stated: 'If we do not facilitate for activities, there will be no activities!' These youths need much more facilitation than their non-disabled peers. During their childhood, most parents have to facilitate their participation. However, children's independence increases with age, and the young people in the age group we have studied want to put their parents in the 'back-seat'. The parental role changes from being a fellow participant in the activity to being more of a supporter, driver and financial backer. Former research has illustrated a transition point in childhood when the child reaches the age of about 10 years (Ytterhus, Wendelborg, and Lundeby 2008; Ytterhus 2012). At this transition stage, parents of intellectually disabled children have to continue to provide the support usually given to younger children, while parents in general can phase out their practical involvement in their children's activities. There is a lack of practical support for the parents' efforts to facilitate participation, and we have to question whether the authorities have forgotten that these youths need help in making leisure activities available to them. Furthermore, one should keep in mind that to the youths themselves it does matter who the facilitators are. Involving grown-ups, such as parents, or teachers acting as support workers, might hinder their possibility to participate on equal terms in peer activities.

In other words, the activities exist but they are not accessible. For these youths, it is not possible to gain access to the context without practical support, and this does not only refer to providing transport, but to the performance of the activity itself. The intellectually disabled youths in our study need someone who can prompt their 'doing' of the activities, a facilitator. An important nuance that has to be added here is that in rural areas where everyone knows everyone, accessibility appears to increase. For example, some of the young people living in rural areas seem to interact more with people in their neighbourhood than the youths in our study who live in more urban areas.
Even though the youths in more rural areas interact more with their neighbours, only a few of them have close friends. However, we should not underestimate the significance of these acquaintances. As pointed out by Granovetter (1973), weak ties can still be significant, for example by playing an important role in integrating people into communities and creating social cohesion. Being greeted and having small talk with their neighbours no doubt increases the youths' feeling of belonging and of being a member of the local community. Furthermore, these people might also function as gateways to new social opportunities. For example, when John had the possibility, through a friend of his father, to visit the local fire station, he got to know the local firemen and now regularly visits the fire station on his own. Consequently, the young people in our study who have a number of acquaintances have a more active leisure time than those depending on a small network primarily consisting of close family members. However, living in rural areas also appears to have disadvantages. Schoolmates often live quite far apart, which means that visits have to be arranged in advance. Moreover, the range of activities offered in rural areas is often smaller, and there are few activities especially designed for the disabled if the youths prefer to attend segregated activities. At the same time, more and more of today's social interaction takes place online (Easley and Kleinberg 2010), which could suggest that place of residence is less important. In our study, this might be the case for several of the intellectually disabled boys, as they spend a considerable amount of time gaming with others on the internet. However, as mentioned above, the intellectually disabled girls we interviewed scarcely interact online to the degree girls in their age group generally do. Bearing this in mind, place of residence might be more important to the girls than to the boys in today's online society.

The lack of accessibility leads us to the next dimension of participation: affordability. Who has the power to decide if it is worth participating in leisure activities in terms of available resources? This question has to be answered and negotiated by policy-makers and parents/guardians. Today, parents are the ones who have to address this dimension as a private concern. In Norway, where both parents are usually employed, also when they have disabled children (Lundeby 2008), the practical logistics of the afternoons become, at the overall level, a barrier to leisure activity participation for intellectually disabled youths. Disabled youths have the legal right of access to a support worker. However, this right is limited by the local authority's ability to recruit one, and if the youth wants to participate in an organized sports or music club, the support worker also needs to have some qualifications in the chosen activity to be able to facilitate the 'doing' of it. When parents do not have the time or energy, and the support worker (if recruited) does not have the skills necessary for the young person's preferred activity, informal activities with the family and at home easily might end up being the solution. Our findings do not say much about accommodability: to what extent the preferred activity can be adapted to the young person's way of functioning.
However, if you are able to go cycling with family members or to play drums in the family home, it is probably also possible to make adaptations for these kinds of 'doings' in organized clubs and orchestras as well. The greatest challenges in accommodability seem to lie with peers and with attitudes and behaviour in the local environment. That Peter did not want to go outside because of bullying points to a fundamental and serious problem that has to be solved. Bullying and excluding attitudes are unacceptable behaviour, regardless of whether or not leisure activities are involved. Furthermore, according to our findings, it might seem easier to adapt informal leisure activities to the youths' functioning than formal activities. As many of these are solitary, there is a risk that intellectually disabled youths are segregated from their peers in their leisure time.

Our findings illustrate that intellectually disabled youths have the same preferences and wishes for leisure activities as their non-disabled peers. However, the findings also illustrate how the representation of 'leisure activities' as a phenomenon is attached to the meaning of our bodies and minds. Youths with intellectual disabilities still seem to be more attached to the extraordinary aspects of their minds than to their ordinary interests and preferences as young people belonging to a contemporary cultural context. Nevertheless, our society's understanding of leisure activities as private concerns, represented and attached to the meaning of body and mind, illustrates how our society constructs us as abled or disabled. This construction corresponds with what the American professor of women's studies Rosemarie Garland-Thomson has named the 'normate' (Garland-Thomson 1997, 8-9). The 'normate' concept becomes evident when we scrutinize the social processes of participation that constitute otherness and systematically marginalize groups of people, here preventing intellectually disabled youths from taking part in organized leisure activities.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Line Melbøe is an educationalist and an associate professor in the Department of Social Education at UiT, the Arctic University of Norway. She has published on disability and participation in reviewed academic journals and books. Borgunn Ytterhus is a sociologist and a professor in Health Science at the Norwegian University of Science and Technology (NTNU). She has published on childhood and disability for years, both in reviewed academic journals and books.
A study to determine the prevalence of oxidised low-density lipoprotein in retinal venous occlusion in a population of West Bengal

Background: Oxidized low-density lipoprotein (ox-LDL) has been implicated in both coronary artery disease and retinal vein occlusion (RVO) because of atherosclerosis. However, there is no study to show the prevalence of ox-LDL in RVO till date. Aims and Objectives: This study aimed to find the prevalence of ox-LDL in RVO in a population of West Bengal. Materials and Methods: A 2-year prospective cross-sectional study of consecutive, unrelated adult patients, with a diagnosis of RVO, attending the outpatient department in a Medical College, was taken up for study. A pilot study was done to determine the expected prevalence of ox-LDL. Sample size was calculated based on the formula n = z²pq/d² (z = 1.96, d = 0.04, p = 0.196, q = 0.804, n = minimum sample size). ox-LDL was measured in a total of 512 subjects who were selected based on the inclusion and exclusion criteria. Results: In this study, 272 males (aged 50 ± 7.2 years) and 240 females (aged 46 ± 7.7 years) with RVO were screened for ox-LDL. Elevated ox-LDL levels were found in 142 patients out of the 512 participants in this study (27.7%). Moreover, 102 cases (19.9%) were found to have both raised LDL and ox-LDL, whereas 40 RVO cases (7.8%) had only elevated ox-LDL among the study participants. 71.8% of the 142 RVO cases with elevated ox-LDL levels also had raised LDL levels, whereas the remaining 28.2% had normal LDL cholesterol levels. Conclusion: It is high time to look beyond the traditional lipid parameters to markers such as ox-LDL cholesterol as a risk factor of RVO. This study proved that ox-LDL cholesterol is highly prevalent in RVO cases. Thereby, proper screening of ox-LDL is a must as a tool for risk reduction in RVO cases, especially in a population with normal LDL cholesterol levels.

INTRODUCTION

Retinal venous occlusion (RVO) and diabetic retinopathy are the two most common retinal vascular diseases. 1 Diabetes mellitus, dyslipidaemia, hypertension, hyperhomocysteinemia, and circulating antiphospholipid antibodies contribute to RVO. [2][3][4] Low-density lipoprotein (LDL) is made up of triglycerides, cholesterol ester, free cholesterol, phospholipids, and apolipoprotein B-100 protein. 5 LDL particles can be oxidized by free radicals to form oxidized LDL (ox-LDL), which is unable to bind to the LDL receptors present in the liver, adrenal cortex, etc. 6 Hence, macrophages engulf ox-LDL, thereby converting themselves into foam cells that generate arterial wall inflammation leading to atherosclerosis. 7 This oxidized LDL-induced atherosclerosis may be responsible for RVO in the absence of other risk factors, as has been shown by our previous study. 8 Still, there were not enough studies to show the exact prevalence of ox-LDL in RVO in any Indian population.

Aims and objectives

This study aimed to find the prevalence of ox-LDL in RVO in a population of West Bengal.

MATERIALS AND METHODS

A 2-year prospective cross-sectional study of consecutive, unrelated adult patients with a diagnosis of RVO, attending the outpatient department of a Medical College, was taken up for study.

Biochemical estimations

Measurement of circulating ox-LDL was done by the precipitation method. 9 [11][12][13] Elevated ox-LDL and LDL levels were considered at values of ≥47.8 mol/L 8 and 100 mg/dL, respectively.

Statistics

Data were entered into Microsoft Excel and presented as tables.

RESULTS

In this study, 272 males (aged 50 ± 7.2 years) and 240 females (aged 46 ± 7.7 years) with RVO were screened for ox-LDL. Elevated ox-LDL levels were found in 142 of the 512 participants (27.7%). Moreover, 102 cases (19.9%) had both raised LDL and ox-LDL, whereas 40 RVO cases (7.8%) had only elevated ox-LDL (Table 2).
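As a quick check of the reported numbers, the short Python snippet below reproduces the sample-size calculation and the prevalence percentages; the variable names are ours, and the computation simply restates the arithmetic given in the text.

```python
import math

# Minimum sample size: n = z^2 * p * q / d^2 with the reported inputs.
z, d, p = 1.96, 0.04, 0.196
q = 1 - p  # 0.804
n_min = (z ** 2) * p * q / (d ** 2)
print(math.ceil(n_min))  # 379, so the 512 enrolled subjects exceed the minimum

# Reported prevalences.
print(round(142 / 512 * 100, 1))  # 27.7% with elevated ox-LDL
print(round(102 / 512 * 100, 1))  # 19.9% with both raised LDL and ox-LDL
print(round(40 / 512 * 100, 1))   # 7.8% with only elevated ox-LDL
print(round(102 / 142 * 100, 1))  # 71.8% of elevated-ox-LDL cases with raised LDL
```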
DISCUSSION

The association between circulating ox-LDL and atherosclerotic cardiovascular disease is well established. 14 Trpkovic et al. have identified the role of ox-LDL as a biomarker of cardiovascular diseases. 15 ox-LDL, by virtue of arterial wall inflammation, endothelial injury, expression of adhesion molecules, leukocyte recruitment and retention, as well as thrombus formation, [16][17][18][19] is responsible for the atherosclerosis that is also the major risk factor for RVO. A retinal arteriole and its corresponding vein share a common adventitial sheath. Thickening of the arteriole appears to compress the vein. This causes secondary changes, including venous endothelial cell loss, thrombus formation, and potential occlusion. 1 Our previous study has shown that ox-LDL is a risk factor in retinal vascular disease. 8 There was no study to assess the prevalence of ox-LDL in RVO in any Indian population; hence, our study is the first to find a high prevalence of elevated ox-LDL levels (27.7%) in RVO patients. Although only 19.9% of RVO cases had both elevated ox-LDL and LDL levels, 40 RVO cases (7.8%) had only elevated ox-LDL (Table 2). Hence, screening of traditional LDL cholesterol is not enough to assess the risk of retinal vein occlusion.

Limitations of the study

The causal association of ox-LDL with different RVO etiologies (confounding factors) could not be ascertained; hence, a future study is needed to comment on that.

CONCLUSION

It is high time to look beyond the traditional lipid parameters to markers such as ox-LDL cholesterol as a risk factor of RVO. This study proved that ox-LDL cholesterol is highly prevalent in RVO cases. Thereby, proper screening of ox-LDL is a must as a tool for risk reduction in RVO cases, especially in a population with normal LDL cholesterol levels.

Table 2: The prevalence of raised LDL among 142 RVO cases with elevated ox-LDL
Parameters | Elevated ox-LDL level with raised LDL | Elevated ox-LDL level with normal LDL
n (%) | 102 (71.8%) | 40 (28.2%)
LDL: Low-density lipoprotein, ox-LDL: Oxidized low-density lipoprotein
Measures to Mitigate Sodium Valproate Use in Pregnant Women With Epilepsy

Sodium valproate is a sodium salt of valproic acid. It is often used in the medical treatment of several conditions like epilepsy, bipolar disorder, mania, and migraines. This review debates whether the usage of valproic acid is appropriate in pregnancy. It also lists the various neonatal deformities and other teratogenic effects the drug can produce through prenatal exposure, and the implications of continuing drug therapy in certain situations. We should weigh the outcomes carefully and use the drug only in conditions where its use is unavoidable. The review also covers the importance of awareness among middle-aged women with mental illness regarding the teratogenic effects of sodium valproate use, and the relevance of physicians discussing the usage of this drug with patients, given the known complications. It also explores other treatment options and modalities that can be used in place of valproic acid for epilepsy and bipolar disorder in pregnant women and women of the reproductive age group, and how we can mitigate the usage of this drug by implementing various measures drawn from guidelines in different areas of the world. In summary, this article explores the numerous teratogenic effects sodium valproate presents in pregnancy, and alternative medications and treatment options. It also enumerates conditions where valproate use is necessary, and how we can reduce and prevent its use in pregnancy by opting for pregnancy prevention programs during valproate use and various other measures.

Introduction And Background

Sodium valproate is the sodium salt of valproic acid. It is one of the most widely used antiepileptic drugs today. Apart from its extensive use in epilepsy, it is employed in numerous other clinical conditions and has recently been approved for indications such as bipolar disorder [1]. It is also indicated for neuropathic pain, where small doses of the drug along with non-steroidal anti-inflammatory drugs have shown substantial results in the treatment of cervical and lumbar pain [2]. Extended-release (ER) divalproex sodium, which is a combination of valproic acid and its sodium salt derivative, has been indicated for the prophylaxis of migraine [3,4].

Seizure is a common phenomenon and a condition of concern across all fields of medicine [5]. Epilepsy is a condition characterized by recurring, unprovoked seizures [6]. It is a neurological ailment found frequently in the population [7]. The occurrence of just one seizure does not justify the use of antiepileptic drugs [8]. However, standard treatment involves prescribing various antiepileptic drugs according to the type of epilepsy and the setting in which it presents. For example, in generalized seizures, the medication of choice is valproic acid; however, it is administered at lower doses or avoided in pregnancy [9]. Recent advances in the 21st century also suggest treatment options based on neuromodulation techniques. Ablation using lasers has reduced the frequency of seizures in a large proportion of those suffering from the condition [10]. A topic of relevance to this article is seizures observed in pregnancy. A point to note is that the frequency of seizures before and after pregnancy is lower than that observed during pregnancy [11].
Physicians must also note the dangers present in a pregnant mother with epilepsy. Data from studies imply a 10 times greater mortality risk than in mothers who do not have epilepsy [12]. The risks posed by the use of anticonvulsant drugs are less dangerous than the risks of uncontrolled seizures [13]. Therefore, antiepileptic drugs are recommended despite being teratogens, because of the hazardous nature of generalized tonic-clonic seizures to both the mother and fetus [14]. Some factors linked with seizures occurring in pregnancy are poor management and control of seizures previously, multidrug therapy with two or more drugs, and absence of treatment during pregnancy [15]. We must note that women with epilepsy should be evaluated and managed in an organized manner comprising many steps, which may require combined effort and coordination with other specialties, including neurology and neonatology [16]. Valproate is recommended only in women of the reproductive age group who are either not tolerant of, or do not respond well to, other medications prescribed for epilepsy (those who are more likely to suffer from puerperal psychosis or are more likely to relapse) [17], and only in those who thoroughly follow a proper pregnancy prevention program. However, it is indicated as a first-line drug, along with lamotrigine, in women who have tested positive for HIV and are undergoing antiretroviral therapy [18]; the chief reason is that the other drugs used for antiepileptic therapy may reduce the levels of antiretrovirals in the body by inducing the cytochrome P450 enzyme system [19]. Therefore, women with epilepsy who are not resistant to other antiepileptic drugs are recommended alternative drug monotherapy at a lower dosage after considering its potential risks [20].

Sodium valproate in pregnancy

Valproic acid is linked to a spectrum of deformities, which includes a 20-fold risk of acquiring neural tube deformities such as lumbosacral meningomyelocele (spina bifida aperta) [21] and improper development of the lip and palate (cleft) [22]. Abnormalities of the heart (atrial septal defects) are also present and are thought to be caused by valproic acid inhibiting the enzyme histone deacetylase [23]. Telecanthus is also observed (the soft tissues of the intercanthal region are widened), the upper lip is lengthened, and the philtrum is widened [24]. The drug is a matter of concern because of its apparent high risk of causing a low intelligence quotient and other neurodevelopmental conditions such as autism and attention deficit hyperactivity disorder (ADHD) [25]. According to a cohort study conducted in Denmark, there is an association between prenatal exposure to valproate and an increased chance of the neonate presenting intellectual disability, as well as of childhood milestones arising late [26]. Another study demonstrated that the offspring of mothers who used valproate showed a significant reduction in school performance compared with children who were not exposed to antiepileptic medication and children whose mothers had reported lamotrigine use [27]. Prenatal antiepileptic medication exposure causes retardation of growth in infants in comparison with neonates who are not exposed to medication [28].
Some of the probable mechanisms responsible for the causation of fetal valproate syndrome disorders include programmed cell death and degeneration of neurons seen during development in the brains of rats prenatally exposed to valproic acid, increased synaptic plasticity shown in the medial prefrontal cortex of these rats, and a reduction in folic acid levels. If antioxidant defensive mechanisms are insufficient, the oxidative stress that follows might also be liable for the damage observed in the brain, beyond the direct effect of valproic acid [29]. Recent studies show that genetic factors might have a significant link to teratogenicity. If abnormalities were seen in the family previously, the risk is enhanced. There is a high chance (about 17-36%) of a second child acquiring abnormalities in conditions where the first child had abnormalities due to antiepileptic drug use [12]. Figure 1 summarizes some common anomalies that were observed in mouse fetuses based on a histochemical study of prenatal valproic acid exposure [30].

Prevention and reduction of sodium valproate usage in pregnant patients who suffer from epilepsy and measures taken to mitigate the use of valproic acid in pregnancy

Neurologists caring for women who have epilepsy should be very cautious about, and engaged in, the four concerns listed as follows: they should anticipate whether there is a chance of the subject being pregnant, handle the risks of complications that are often present with the use of antiepileptic drugs, anticipate the risk of seizures during delivery, and manage drug use during delivery and breastfeeding [31]. There are two hypothetical conditions in which a female taking valproate will be in a quandary about continuing valproate use in pregnancy: the first being if she is planning to become pregnant while undergoing sodium valproate therapy, and the second being if she becomes pregnant during therapy [32]. We can prevent the first condition by implementing a pregnancy prevention program comprising an evaluation that assesses her chances of conceiving before and during therapy. The female subject should be given information about the dangers valproic acid poses to her unborn child and the significance of proper contraceptive methods while on therapy. She should be reviewed by a specialist compulsorily every year, and she should also fill out a form that helps her acknowledge the various risks present along with the drug's uses. Drug packaging should carry a pictorial presentation of the various dangers valproate presents in pregnancy. Pharmacists must describe the dangers it poses and provide a card that warns of the same whenever they dispense the medication to females of the reproductive age group [32]. Further guidelines to physicians regarding valproate use in pregnancy include those issued by the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency. They have advised stopping the use of sodium valproate for treatment in pregnant women and in women of the childbearing age group due to the risk of various defects and neurodevelopmental conditions in offspring. The advice included the following suggestions: valproic acid should not be considered for treating epilepsy or bipolar disorder in either of the previously mentioned groups; it should only be considered when other medications are ineffective or cannot be tolerated by the subject.
Effective contraception techniques must be implemented, and the beginning and the course of the medication must be handled by medical practitioners accustomed to treating such diseases. Doctors advising its use in the reproductive age group must provide comprehensive data about the various risks it presents, to ensure that the decision is made with an understanding of this information and its complications. Females on valproic acid therapy should not cease their medication intake without consulting their medical practitioner. It should be avoided in the treatment of migraines [25]. The second scenario may also arise due to the failure of certain oral contraceptive drugs while on therapy with certain antiepileptic drugs, which are grouped into different categories. The first category comprises the classical enzyme inducers, i.e., carbamazepine, phenytoin, and phenobarbitone, which increase the breakdown of orally administered estrogen and progesterone. Hence, various formulations such as progestin-only pills, contraceptives that consist of estrogen and progestin combinations, and subdermal progestin implants prove ineffective. The second category includes antiepileptic drugs whose enzyme-inducing capacity depends upon the dosage; this includes topiramate and perampanel, whose enzyme induction is less potent than that of the first category. However, they cause a reduction in the plasma concentration of local and oral contraceptives; hence, these contraceptives are not indicated for use along with these drugs. The third category includes enzyme-inducing antiepileptic drugs such as felbamate and rufinamide (at higher doses), both of which decrease levels of hormonal contraceptives by increasing their metabolism. By contrast, enzyme inhibitors like sodium valproate do not interfere with the metabolism of oral formulations of estrogen and progesterone, and neutral antiepileptic drugs like levetiracetam and lamotrigine do not appear to interfere with the breakdown of oral contraceptives [31]. We have to explore alternative treatments for epilepsy in pregnancy in place of sodium valproate to reduce its usage and its congenital morbidities. Other treatment strategies that can be implemented instead of sodium valproate can be evaluated by assessing the teratogenic risks of other antiepileptic drugs and opting for treatment options that pose fewer complications for the unborn baby. In the single-drug treatment of epilepsy, the highest incidence of significant congenital deformities is seen with sodium valproate, relatively high rates are seen with topiramate and phenobarbital, and an intermediate incidence is observed with carbamazepine, oxcarbazepine, and phenytoin. The lowest incidence is observed with levetiracetam and lamotrigine. According to the North American antiepileptic pregnancy registry, all the previously mentioned drugs have a higher risk than the internal control rate of 1.2%. An increased dosage of sodium valproate and topiramate is associated with an even higher incidence of significant congenital morbidities. Multidrug therapy including medications like sodium valproate and topiramate results in a higher incidence of significant morbidities [33].
Assessing the risk patterns of other antiepileptics by referring to consistent research studies, the lowest number of malformations is observed with lamotrigine and levetiracetam, with no or only slight incidence of structural deformities and no organ-specific deformities, except for an increased incidence of oral clefts in lamotrigine monotherapy, though this is not observed in all studies. Concerning the dose to be administered, it is suggested to stick to the smallest dosage appropriate for properly managing seizures. However, proper monitoring should be done with these drugs because of the high clearance rates of both. Hence, for treating epilepsy, these two are favored over the other antiepileptic drugs given the risks the latter present. Some other strategies for mitigating sodium valproate use through other medications are demonstrated in Western Cape province, where various measures were taken to prevent valproate use by increasing the availability of alternative drugs that are comparatively safer options in pregnancy; these include lamotrigine, clobazam (for a short interval of time, to cover the slow dose escalation when lamotrigine therapy is started), and levetiracetam, for which the restriction guidelines for the physician and the subject undergoing therapy are relaxed. Other strategies include district specialists with extensive clinical knowledge of the side effects of the various antiepileptic drugs instructing the registered physicians who prescribe these drugs, a titration regimen to aid females undergoing valproate therapy in transitioning to lamotrigine, and the issuing of a provincial document that gives clinical guidelines to heads of neurology and psychiatry units on the most efficient usage of drugs in women who have epilepsy and bipolar mood disorder. They also provide a risk acknowledgment form for completion by the prescribing medical practitioners and by women on valproic acid treatment, every year after the first year of initiation of treatment [18]. The guidelines from the Centre of Perinatal Excellence and the National Institute for Health and Care Excellence UK include not prescribing valproate to females of the childbearing age group, or considering the medication only if a pregnancy prevention strategy is implemented. Women planning to conceive while on valproic acid therapy should reduce valproate use over two to four weeks, in addition to a high intake of folic acid, with follow-up in the first trimester. In case of pregnancy, treatment involving valproate should be stopped, risk factors considered, and other medical alternatives for anticonvulsive therapy prescribed with great care, seeking advice from psychiatric consultants. Measures also include monitoring the infant closely and consulting a medical specialist in neonatology wherever possible. The primary reason for valproate use during pregnancy is a lack of awareness about teratogenicity among pregnant women and women of the reproductive age group, as demonstrated in Table 1 below, which is based on a study conducted in a group of 23 individuals [35]. However, irrespective of their awareness regarding sodium valproate, 20 of 23 participants knew that if they become pregnant while on valproate therapy, there are different steps to be followed, such as changing the medication and decreasing the dosage.
Therefore, this lack of awareness mandates discussion by physicians with the patient regarding the use of valproic acid. Table 2 lists the common questions posed by patients regarding valproate use in pregnancy and how the doctor can answer them [31]. In retrospect, we realize the importance of proper communication between physicians and patients regarding valproate use. It is mandatory to increase awareness and prevent the unnecessary use of valproate during pregnancy by having the patient fill out an informed consent form after weighing the pros and cons of its use. A study in five European countries compared valproate prescriptions before and after implementing risk minimization measures in outpatient settings in 2014. According to this study, valproate initiation as second-line therapy differed across the countries, implying less effectiveness on specific grounds. However, the positive effects of these measures include a reduced number of valproate prescriptions in pregnant women and a reduced incidence of pregnancies exposed to valproate. On these grounds, the European Medicines Agency introduced extra measures to enforce the previous restrictions. Further studies are underway to determine the effectiveness of the new measures and to monitor valproate use in women with childbearing potential and pregnancies exposed to it [36]. In summary, for the management of a woman with epilepsy before, during, and after pregnancy: when a woman with epilepsy is planning for pregnancy, we should choose the medication with the lowest risk of teratogenicity. We need to establish a baseline dose for each individual and titrate it to the lowest effective dose. We should prefer monotherapy over polytherapy. Medications like lamotrigine and levetiracetam are preferred over drugs like valproate. A high amount of folic acid is recommended in pregnancy, especially when using sodium valproate and enzyme-inducing antiepileptic drugs. During pregnancy, for stable seizures, a minimum of three visits is suggested; if seizures are not stable, more frequent visits are required. We should monitor antiepileptic drug serum levels and adjust the dosage if there is a decrease in the serum levels or an increased frequency of seizures. Prenatal ultrasonographic organ screening is recommended during the 19th to 21st gestational week. Vaginal delivery is generally preferred; a cesarean section is usually recommended in cases with a high risk of seizures or poor seizure control during labor. In the puerperium, drug monitoring is recommended in the first week to adjust the dosage. Sleep deprivation is expected in the postpartum period and is often correlated with increased seizures; a higher dose than in the preconception period may be advised for this reason. Breastfeeding is highly recommended [37].

Conclusions

This article summarizes the broad spectrum of uses of valproate and the various implications that valproate use and disuse present for the fetus and the mother. It also demonstrates why physicians should ensure the judicious use of antiepileptic drugs and drugs for bipolar disorder in women of the reproductive age group or those with reproductive potential by weighing the outcomes for both mother and child, and should refrain from valproate use unless necessary. Antiepileptic drugs that pose less harm in pregnancy, like lamotrigine or levetiracetam, are preferred instead.
It is also essential to consider the mother's decision to continue treatment with valproate or another medication, or to reduce the dosage. However, it is the duty of the physicians who prescribe such medication to inform the mother about the risks it poses to the child. Physicians can reduce valproate use by implementing various programs such as the pregnancy prevention program in the UK (care should be taken not to use enzyme-inducing antiepileptic drugs that promote oral contraceptive failure), measures directing lamotrigine use as per the guidelines followed in districts of Cape Town, South Africa (along with other measures), and the various guidelines issued by the PRAC. It is imperative to determine and monitor the effectiveness of these guidelines and measures in pregnant women and women with childbearing potential. The article also adds a small note regarding the management of antiepileptic drug use before, during, and after pregnancy.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-10-12T15:44:21.165Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "319546ae2052dae12ef39f0e147fa233020e9755", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/106449-measures-to-mitigate-sodium-valproate-use-in-pregnant-women-with-epilepsy.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d89e24bf5ddf470df95c5216ec09e1cf73506948", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
51917324
pes2o/s2orc
v3-fos-license
Moving patient-oriented research forward: thoughts from the next generation of knowledge translation researchers Plain English summary As knowledge translation trainee participants, we report on the discussions that took place during the 2017 Knowledge Translation Canada Summer Institute. The theme of the institute was patient-oriented research and patient engagement in research. Trying to move knowledge into health care practice can be difficult. Including patients and families as members of the research team can help to overcome some of these challenges by producing more relevant research designs and results. However, in the absence of guidelines and best practices, it can be difficult for trainees and researchers to effectively engage patients and families in designing and conducting research. We detail how trainees and early career researchers are currently engaging patients in their research, the strengths and challenges of engaging patients in research, and lessons learned. These discussions have helped us to identify important areas where future training and guidance is needed to support trainees as patient-oriented researchers. Abstract Background Moving knowledge into health care practice can present a number of challenges for researchers. Including patients and families as members of the research team can help to overcome some of these challenges by producing more relevant research designs and results. However, many trainees and researchers experience difficulty in engaging patients and families in research effectively. Main body We report on the discussions that took place at the 2017 Knowledge Translation (KT) Canada Summer Institute (KTCSI). The theme of the KTCSI was patient-oriented research and patient engagement in research. We provide an important viewpoint on how trainees and early career researchers are currently engaging patients in their research, the strengths and challenges of engaging patients in research, and lessons learned. As the target audience of the KTCSI, we provide our thoughts on what is needed to support trainees and researchers to more effectively engage patients and families in research. Conclusion While many of the participants at the KTCSI are conducting patient-oriented research, practical guidance, resources and tools are needed to ensure the effective engagement of patients in research. These discussions have helped us to identify how to move forward as patient-oriented researchers and where future work and support is needed to achieve effective engagement. Background With the understanding that research findings often fail to change clinical and health system practice, [1] knowledge translation (KT) science has become a growing field aimed at improving the relevance of research and uptake of its findings in the health care system [2]. The Canadian Institutes for Health Research (CIHR) defines KT as "a dynamic and iterative process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge to improve the health of Canadians, provide more effective health services and products, and strengthen the healthcare system [3]." In an effort to pool KT knowledge, expertise, and resources, a network of Canadian KT experts established KT Canada in 2009 to provide ongoing education, training, and support to facilitate and advance the use of evidence in health care. 
Each year, the KT Canada training committee identifies a priority topic area to explore during the KT Canada Summer Institute (KTCSI), an annual intensive workshop aimed at building KT skills and networking capacity for KT research trainees and early career investigators [4]. In response to the CIHR Strategy for Patient-Oriented Research (SPOR) [5] and the demand for capacity-building in this area, the topic area chosen for the 2017 KTCSI was patient-oriented research. Patient-oriented research refers to "a continuum of research that engages patients as partners, focuses on patient-identified priorities and improves patient outcomes. This research, conducted by multidisciplinary teams in partnership with relevant stakeholders, aims to apply the knowledge generated to improve healthcare systems and practices [5]." The tenets of patient-oriented research are in direct alignment with the principles of integrated knowledge translation (iKT), in which researchers and stakeholders engage in a collaborative model of research to enhance the relevance of their findings [6]. Attendance at the KTCSI is competitive: 45 participants submitted applications in 2017, and capacity was capped at 36 participants to facilitate interactivity in sessions. Participants represented 17 unique institutions in Canada and 3 institutions in the United States, including universities, government, hospital research institutes, and funding bodies. A range of university departments were represented, including medicine, nursing, health behaviour, health sciences, psychology, human development, biomedical sciences, and rehabilitation sciences. A total of 15 faculty and facilitators, including two patient advisors, participated in the KTCSI and brought a diverse range of expertise and experience with patient-oriented research. The two patient advisors represented independent patient advisory networks and were active in their respective provincial SPOR units. They were present throughout the KTCSI, co-presented with other faculty members and offered mentoring sessions to participants. This paper presents an overview of the 2017 KTCSI, including patient engagement activities, lessons learned, and future directions for the next generation of patient-oriented researchers. In this article, we provide our trainees' perspective of the KTCSI and its relevance to the conduct of patient-oriented research by post-graduate trainees and early career investigators. We hope this provides readers with important insight into one of the many perspectives on patient engagement in research and helps to further the dialogue regarding training capacity in this area.

Methods

As a junior facilitator at the KTCSI, AB was responsible for co-leading interactive sessions which sought real-time feedback from participants in targeted areas of patient-oriented research. This included identifying stakeholders with whom they were working, stages of research in which they had involved patients, the extent of patient engagement in their research, and their perceived strengths and challenges related to patient engagement in research. Poll Everywhere© software was used to facilitate report-back and consolidate findings. Findings were analyzed using descriptive statistics provided within the Poll Everywhere© platform, which was also used to generate word clouds of nominal data provided by participants. Word clouds provide a visualization of words reported, with words that are most frequently reported being biggest in size.
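The word-cloud step can be reproduced outside the polling platform with a few lines of code. The sketch below is not the authors' workflow (they used Poll Everywhere©); it shows the same idea with the open-source Python wordcloud package, and the list of responses is hypothetical.

```python
# Minimal sketch: building a word cloud from nominal poll responses,
# similar in spirit to the Poll Everywhere output. The responses below
# are hypothetical, purely for illustration.
from collections import Counter

from wordcloud import WordCloud  # pip install wordcloud

responses = [
    "relevance", "relevance", "recruitment", "capacity",
    "relevance", "partnership", "capacity", "trust",
]

# Frequency of each reported word; the most frequent words render largest.
freqs = Counter(responses)

wc = WordCloud(width=800, height=400, background_color="white")
wc.generate_from_frequencies(freqs)
wc.to_file("strengths_wordcloud.png")
```

Because sizing is driven purely by frequency, the figure encodes exactly the nominal report-back data and nothing more, which is why word clouds suited these sessions.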
Participants also completed final evaluations of the KTCSI where they were asked to rank each session (speaker, format, content) on a five-point scale from poor to excellent and provide additional written information regarding the most and least useful sessions, relevance of the KTCSI, and other general comments. The next section will present findings from participant report-back sessions and final evaluations.

Findings from interactive participant sessions

Who should be engaged? Participants identified a number of stakeholders that they felt were important to have on their research teams to support patient-oriented research. The majority of the discussion centered on participant confusion regarding involving patients as the targets of research versus engaging patients as members of the research team. The lines between involvement in the research and engagement in the research process were felt to be nebulous, and this discussion proved to be an important springboard over the course of the KTCSI. Not surprisingly, many participants (22%) saw opportunities to more fully engage patients during all stages of their research to become truly patient oriented. Additionally, a number of other stakeholders were seen as important to ensure that research evidence was effectively translated into patient care, including health care providers, family members, decision-makers, community groups, other researchers, health care organizations and funding agencies.

What are the strengths of engaging patients in research? Participants were asked to list and discuss what they perceived as the strengths of engaging patients in their work, which were discussed further in small groups. Participants put forth a number of strengths and opportunities they felt contributed to effective engagement (Fig. 1). Improved relevance of both the research being conducted and its findings were overwhelmingly seen as the greatest strengths of engaging patients. Similarly, participants saw the value in producing research outputs that are "designed by them, for them" and are tailored to the needs of the end-users. While other studies have identified improved recruitment and retention as an important strength of engagement, participants did not identify this as an important advantage [7]. Participants also saw the importance of improving patient-oriented research capacity, both for the researchers who are engaging patients and for the patients who are being engaged on the research team. The opportunity to share knowledge and expertise from the perspective of a researcher and as a patient was seen as beneficial to improve the overall research experience [8].

What are the challenges of engaging patients in research? Participants identified a number of potential challenges to engaging with patients as partners in research. This included challenges associated with identification and recruitment of patients as research team members (Fig. 2). Perceived challenges included finding representative patients, locating patients when the population or geographic size might be small, and targeting patients and families that are from hard-to-reach socioeconomic segments. Participants struggled with the logistics and feasibility of keeping patients engaged throughout a research project, as well as how to address issues such as overcoming stigma for vulnerable populations and potential miscommunication between researchers and patients.
Overall, the lack of guidance, resources and training on achieving effective patient engagement in research was seen as a significant challenge that needs to be addressed as we move forward with our research careers [9].

At what stages of research are you currently engaging patients? With the understanding that patient-oriented research seeks to involve patients in all aspects of the research process, from identifying research questions to interpreting and disseminating research findings, we sought to explore how participants were engaging patients in their work. There were high levels of patient engagement in selecting the research design (16%), conducting the research (32%), and interpreting the results (24%). However, very few were engaging patients in defining the research problem (12%), formulating hypotheses (8%), reporting study findings (12%) and disseminating research findings (4%). No participants had involved patients in literature review/synthesis work. These findings align with existing evidence indicating that most research engages patients during the preparation and early stages of execution of a research study (e.g., study design, recruitment), and less commonly during data collection, analysis and translation [7].

To what degree are you engaging patients in research? Finally, participants were asked to report on the extent to which they were engaging patients in their research, referring to the International Association for Public Participation (IAP2) Public Participation Spectrum [10]. This spectrum outlines levels at which patients and the public can engage in research, from low-level (e.g. keeping participants informed) to high-level (e.g. patient-initiated and patient-led research) engagement. Interestingly, levels of engagement varied, with most currently engaging patients at the "involve" level (38%), by working directly with patients throughout their research, and at the "collaborate" level (33%), through true partnership with patients to produce research findings. By comparison, the engagement literature suggests that patient and public involvement in health research and policy development is concentrated at the "consult" and "involve" levels of the spectrum [11]. A few participants identified their engagement work at the "inform" and "consult" levels, which were felt to be congruent with the types of research they were conducting. To date, none of the participants had engaged with patients at the "empower" level, whereby patients lead the research and assume responsibility for final decisions.

Lessons Learned & Moving Forward

Three key learnings emerged at the conclusion of the 2017 KTCSI, which offer a direction for future work and guidance in this field. Firstly, we noted confusion from KTCSI trainees around the concepts of patient-oriented research and patient engagement and the traditional roles that patients play as targets and participants of health research. Participants indicated on their final evaluations that discussions regarding what patient engagement encompasses and the science versus the practice of patient engagement were the most applicable and useful for their current research. The session highlighting the differences and similarities between patient-oriented research and the concept of patient engagement elicited important discussion.
Patient-oriented research, while encompassing and championing patient engagement throughout the research, also incorporates concepts related to broader stakeholder input, multidisciplinary research and application to practice [5]. Others have suggested that patient-oriented research is the intersection of patient engagement and knowledge translation [12]. By the end of the KTCSI, there remained lingering questions regarding what constitutes patient engagement. For example, does a focus group eliciting patient feedback on a proposed intervention constitute patient engagement? What if it leads to prioritizing the next research question? What is the difference between patients participating in research and patients partnering in the research process? Many approaches to research that purport to engage the community use the language of patient involvement and engagement, but do not meaningfully involve them in the research process. This could include, for example, capturing the patient voice using qualitative methods (such as focus groups), which then informs the research findings rather than subsequent study design and conduct. It is clear that this has created a great deal of confusion for trainees. As such, greater standardization of terminology in training programs to uphold a robust definition of patient engagement as partners throughout the research process, rather than participants in research, is needed. We recommend that health professional schools adopt the definition put forth by the CIHR that upholds that patients actively participate in the identification of research priorities and questions and in the design and undertaking of research projects [13]. Further, participants expressed concern that patient engagement may become a catch-all term, or buzzword, under which all patient-related research falls, regardless of the type and extent of engagement. This is partly due to the rise of granting competitions under the SPOR umbrella and internationally, which require patient engagement throughout the research process. Without clear guidance, including resources and tools to support this work, effective engagement could be compromised. Continuing these conversations is important, and while it may never be black and white, defining clear boundaries is necessary to support KT trainees and early career researchers moving forward. Secondly, challenges related to the recruitment and retention of patients and families, as well as decision-makers, as partners in research led to a broader discussion of the importance of building and sustaining relationships with stakeholders as we develop our research programs [8]. Many faculty and facilitators commented on the importance of having reciprocal relationships in place prior to funding calls and the important role relationship building has on engagement. Although navigating these relationships can be difficult, it is encouraging to see so many trainees and early investigators doing it as "business as usual." While senior researchers may view this as a new way of doing things, many of us at the beginning of our careers have been trained in environments where patient and stakeholder engagement is fundamental to the research we do. That being said, we discussed significant challenges in recruiting, establishing, and maintaining research partnerships among diverse patient and family stakeholders, and vulnerable populations, in particular. 
Although optimal engagement strategies in such populations remain unclear, acknowledgement of this concern highlights the careful consideration participants are giving to patient partner identification and the implications of the resulting research. To help address this challenge, we recommend trainees utilize established patient networks, such as Patient Advisors Network and Patient Voices Network, to engage patients before they begin their research projects. These networks follow an established process to identify and link patient partners and researchers, as well as providing important guidance on how to sustain and strengthen engagement throughout the research process. This approach not only reduces the onus on trainees to establish relationships on their own, but also contributes to the establishment of the patient engagement community throughout Canada. Many participants also recognized the tendency to engage 'professional patients' (e.g., those who may be patient advisors on a number of research projects) and identified the need for greater efforts to partner with patients and families from underrepresented populations in patient-oriented research. These may include individuals who are stigmatized by their health condition, the elderly, persons with chronic conditions, those from lower socioeconomic areas, those who speak English as a second language, and visible minorities, to name a few. When we fail to include representative patients, not only are we failing to identify important research priorities, but our research results and KT products will also fail to reach and impact these populations. It is imperative that trainees and early career researchers endeavor to recruit more inclusive voices to add to our current dialogue and to establish effective ways of inviting these populations to the table. Although the KTCSI included two patient advisors throughout the program, participants suggested that greater patient and stakeholder presence, as well as informal opportunities to interact, would be helpful for future patient engagement sessions. Finally, although SPOR has highlighted inclusiveness as a guiding tenet for conducting and evaluating patient-oriented research, we encourage trainees and researchers to evaluate and share their methods for recruiting underrepresented populations to help inform best practices in this area. Finally, trainees and early career researchers perceived a critical gap in the availability of resources and guidance on how to engage patients in research. Participant evaluation comments indicated that many are now considering new ways to include patients during all stages of the research process; however, uncertainty regarding practical ways to do this remained. For example, how does one find representative patients? How can we invest in capacity building? What type of remuneration should be offered? Is it always appropriate to engage patients throughout the research process? When is engagement a burden? Current dialogue on patient engagement in research suggests that patients should be engaged in all stages in the design and conduct of research, and in effect become co-researchers. However, this poses significant challenges for both patients and research teams, including protracted timelines, financial and human resources, and potential conflicts between desired patient and research outcomes. 
Trainees and early career researchers at the KTCSI identified many challenges with engaging patients, such as the feasibility and logistics of engagement, as well as compensation for patients' time, and these cannot be overlooked as important areas of evidence building and syntheses. A lack of best practice guidance on conducting patient-oriented research has been reported in the literature, [7] as has a lack of practical tools for engaging patients and families [9]. In the absence of such tools, we recommend following the guidance of the Institute for Patient and Family Centered Care (IPFCC) in creating open and honest communication between trainees, researchers and patients engaged in their work regarding what level of engagement is appropriate for each patient and each research project [14]. Use of the IAP2 framework can help shape these discussions and provide an objective and standardized way to define how patients were engaged [10]. Improved evaluation of patient engagement strategies and outcomes was also identified by participants as an important area for future research and to establish an important evidence base to support KT science [15].

Conclusions

The next generation of KT researchers considers engaging in patient-oriented research as a means to advance the translation of research findings into practice. While many of the participants at the KTCSI are conducting patient-oriented research, practical guidance, resources, and tools are needed to ensure the effective engagement of patients in research. Many of the issues identified by KTCSI participants were not KT specific, but rather speak to the need for more universal guidance on engaging patients in research. It is, however, important for KT trainees and early career researchers to be at the forefront of evidence creation and synthesis in this field. This will help advance the science of iKT and ensure that KT products remain relevant and responsive to the needs of patients and other end-users. The KTCSI provides an important venue for trainees, early career researchers, and KT Canada faculty members to come together and discuss important issues such as patient engagement. We feel it is important to highlight the experiences and challenges faced by trainees and early career researchers as actors in shaping the future of patient engagement in research. The opportunity afforded by the KTCSI for novice researchers to share their experiences with patient engagement and contribute to best practices supports future collaborations between patients and the research community. We encourage KT Canada faculty to continue these discussions with their trainees and uphold patient engagement as a priority area. We also encourage KT trainees to continue to share their learnings with one another through participation in events like the KTCSI and through knowledge exchange opportunities like the KT Canada Seminar Series. Finally, we encourage SPOR support units across Canada to provide ongoing training and guidance regarding patient-oriented research, and specifically patient engagement, to trainees through workshops and patient advisor mentoring, and to build capacity in this area by providing targeted trainee funding opportunities to conduct this work. Having these support structures in place will help trainees to incorporate, share, and grow engagement best practices in their current and future work.
Establishing and meeting trainee needs to conduct high-quality and rigorous patient-oriented research will, in turn, contribute to the advancement of patient engagement science.
2018-08-02T11:06:38.027Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "80ecb365b14fda14e63507a75d7a4f56ff0ad97f", "oa_license": "CCBY", "oa_url": "https://researchinvolvement.biomedcentral.com/track/pdf/10.1186/s40900-018-0110-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80ecb365b14fda14e63507a75d7a4f56ff0ad97f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
210671442
pes2o/s2orc
v3-fos-license
Numerical investigation of optimized piled raft foundation for high-rise building in Germany

Piled rafts have been used as a foundation system for high-rise buildings worldwide in different soil conditions, e.g., in soft to stiff clay as well as in medium-to-dense sand. Piled rafts are currently used not only to control the foundation settlement but also to minimize the required raft thickness to reach the most optimized foundation design. The purpose of this study is to investigate the behavior of the piled raft as a foundation system in Frankfurt over-consolidated clay based on the well-monitored Messeturm building in Germany. The numerical tool used in the analysis is the Plaxis 3D finite element software with the hardening soil material model. The piled raft foundation behavior is evaluated based on the total settlement, the differential settlement and the pile skin friction. Based on this study, it was found that the chosen foundation system, a piled raft foundation, was an optimized solution for the Messeturm building.

Introduction

In general, when constructing a low-rise building on a bearing layer of soil, shallow foundations can be used, but if the building is a high-rise building and contains a number of basements, a raft foundation is normally chosen to support the entire structure. In the case of weak subsoil, a raft-on-piles foundation system is used to transfer the load to deep bearing layers; however, for a building founded on a deeply extended non-bearing layer, it is wasteful and uneconomical to use long piles to reach the bearing layer. In this case, the piled raft foundation system is considered one of the most economic foundation systems for such projects, which lie in the zone between the raft (relatively cheap) foundation system and the raft-on-piles (very rigid and expensive) foundation system. The piled raft is a composite foundation system that combines the bearing capacity of both the raft and the piles together, and its behavior depends on the complex interaction between pile-soil, pile-raft, raft-soil and pile-pile. The piled raft foundation may be a good alternative solution; one of the main benefits of the piled raft foundation is that there is no need to satisfy the geotechnical bearing capacity of the piles; only the structural capacity is required, as mentioned by [1]. The piled raft coefficient (α_L) is defined as the ratio of the load carried by the piles to the applied total load; when this coefficient is equal to zero, the foundation is an ideal raft, and when it is equal to 1, it is a conventional raft fully carried by the piles. Moreover, we can define another important coefficient (α_S), the piled raft settlement reduction factor, which is equal to the ratio of the settlement of the piled raft to the settlement of the traditional raft; when it is equal to 1, the foundation behaves as a raft foundation, while when it is equal to zero, it behaves as a conventional raft on piles. For a piled raft foundation, both coefficients (α_L and α_S) range between 0.0 and 1.0, as shown in Fig. 1 (a short numerical sketch illustrating both coefficients is given within the literature review below).

Previous investigations

Kawabata [3] used the boundary element method (BEM) to investigate the piled raft foundation without considering the slipping behavior between the piles and the soil. Clancy et al. [4] used the finite element method (FEM) for modeling of the piled raft. The raft was represented by 4-node quadrilateral plate bending elements, and the piles were modeled as beam elements. They discussed the effect of meshing, using the reduced integration concept to improve the numerical results. Ta and Small [5] studied the piled raft in layered soil.
The piles and the soil were simulated by a numerical method called the finite layer method to represent the piles in the layered soil. The load distribution along the piles in the layered soil is affected by the relative thickness and stiffness of the layers. El-Mossallamy [6] studied numerically the piled raft in over-consolidated clay using two well-monitored buildings in Frankfurt: the Messeturm and Westend buildings. His method of analysis is a mixed technique where the raft is modeled using the FEM, while the piles are modeled by the BEM. The optimum design was achieved when the piled raft coefficient is between 0.4 and 0.6. He found that the load taken by each single pile in the piled raft system depends on the pile position, the raft stiffness, the configuration of the applied structural loads and the load level. Russo et al. [7] classified piled rafts into small and large piled rafts. The settlement problem, as well as the differential settlement, is mainly associated with large piled rafts. They also considered the nonlinearity of the piled raft system in their investigations. Prakoso and Kulhawy [8] used the Plaxis finite element program, but as a 2D plane strain model. The pile dimension-to-raft width ratio has a big effect on both the settlement and the differential settlement. De Sanctis et al. [9] stated that the piled raft problem needs a 3D model to obtain an optimum design method, while 2D plane strain cannot be accurate enough. The block bearing capacity failure is accepted only when the width of the pile group is the same as the raft width. Reul and Randolph [10] used the finite element program ABAQUS to study the behavior of the piled raft via three buildings: Messeturm, Westend and Torhaus. The piled raft reduced the maximum settlement to 51-63% of that of the unpiled raft. They found that the piled raft coefficient calculated by numerical analysis is higher than the measured values. Mendonca et al. [11] used the mixed technique in which the pile-soil-raft interaction is taken into account, and their results match well with the finite element method. Reul [12] used the finite element method to model the piled raft in over-consolidated clay. He found that a reduced settlement can be achieved by using longer piles rather than a higher number of piles. Navak et al. [13] investigated the load-settlement behavior using 3D finite elements for two well-monitored buildings: Westend in Frankfurt clay and Urawa in Japan, both founded on over-consolidated clay. They compared the FEM with the BEM; for the first building, the results are in good agreement. Balakumar [14] used the ANSYS finite element program to study the performance of a piled raft in sand. He found that the piles are fully mobilized without any failure observed, and that the piles' load share decreases as the settlement level increases. Oh et al. [15] used the finite element (Plaxis) and finite difference (FLAC) methods to study the behavior of the piled raft in sandy and clayey soil. They found that the maximum settlement depends mainly on the number of piles and the pile spacing, while the differential settlement is reduced if the raft thickness increases. Sandeep et al. [16] stated that if the piled raft coefficient is initially high, it decreases with stress increase; they also stated that the differential settlement increases when Poisson's ratio increases. Katzenbach et al. [17] presented a 3D finite element model to simulate a piled raft in layered soil; they stated that the load carried by the raft increases with increasing pile spacing.
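Before continuing the review, the two coefficients defined in the introduction can be made concrete. The short Python sketch below is illustrative only: the load and settlement figures are hypothetical and are not taken from any of the studies cited in this review.

```python
# Minimal sketch of the two piled raft coefficients defined in the
# introduction. All numbers below are hypothetical, for illustration only.

def alpha_L(pile_load: float, total_load: float) -> float:
    """Piled raft coefficient: share of the total load carried by the piles.
    0.0 = ideal raft, 1.0 = conventional raft fully carried by the piles."""
    return pile_load / total_load

def alpha_S(s_piled_raft: float, s_raft: float) -> float:
    """Settlement reduction factor: piled raft settlement over raft settlement.
    1.0 = raft foundation behavior, 0.0 = conventional raft on piles."""
    return s_piled_raft / s_raft

# Hypothetical example: piles carry 550 MN of a 1000 MN total load, and the
# piled raft settles 6 cm where an unpiled raft would settle 12 cm.
a_l = alpha_L(pile_load=550.0, total_load=1000.0)   # 0.55
a_s = alpha_S(s_piled_raft=0.06, s_raft=0.12)       # 0.50
print(f"alpha_L = {a_l:.2f}, alpha_S = {a_s:.2f}")  # both in (0, 1): piled raft
```

Values of both coefficients strictly between 0 and 1, as here, correspond to the composite piled raft regime of Fig. 1. With these definitions in hand, the review of previous investigations continues below.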
Omar El-Kadi [18] used the finite element Midas GTS software to study the performance of the piled raft using the embedded pile concept to simulate the piles. He found that the embedded pile cannot be used to obtain the failure load, and that the c-phi reduction technique cannot be used to predict the capacity of the pile group or of the piled raft precisely. Elarabi [19] adopted the finite element program Plaxis 3D Foundation to investigate the applicability of the piled raft in soft clay under undrained conditions. He found that increasing the pile spacing in a piled raft leads to a larger settlement. Amr [20] used the DIANA finite element method to investigate the piled raft in Port Said soft clay, using a soil profile consisting of fill with a thickness of about 1 m, followed by sand and silty sand with a thickness of about 12 m, underlain by the deeply extended Port Said soft clay; the foundation level is in the upper sand layer. El-Wakil [21] used the finite element method to simulate piled raft laboratory models; he concluded that increasing the pile length gives better performance than increasing the number of piles. S. Mil [22] used the Plaxis 3D finite element program to investigate the behavior of the piled raft in stiff clay. He found that increasing the pile spacing-to-diameter ratio beyond 6 will increase the settlement significantly, as discussed in [23].

Verification of the numerical model

The presented numerical model is verified against two cases; the first case is a simple single pile developed by Katzenbach [24,25] as a conceptual verification, while the other case is the Messeturm building, studied by Sommer [26] and El-Mossallamy [6]. Both cases were founded on over-consolidated Frankfurt clay, so it is important first to state the main properties of the Frankfurt subsoil formation.

Frankfurt clay

The Frankfurt subsoil consists of sand and gravel layers up to 10 m below the ground surface, underlain by over-consolidated clay to a large depth, followed by limestone, as shown in Fig. 2. Frankfurt clay was over-consolidated by land formation, with previous vertical stresses ranging from 500 kPa to about 2500 kPa [27]. The properties of Frankfurt clay are shown in Table 1. These results are based on different samples taken from the site of the Main Tower building. For Frankfurt clay, El-Mossallamy [6] reported that the difference between the short-term and long-term conditions is minor; for a single pile, the effect is about 6.0% to 15.0%, while for the piled raft case, the effect of consolidation may reach 30%.

Single pile

The first numerical task was carried out as a conceptual verification based on Katzenbach's finite element work [24]. He studied a single pile embedded in Frankfurt clay numerically and compared the effect of the raft on the pile performance. The modeled pile was a bored pile 30 m long with a diameter of 1.50 m. The circular raft was 12 m in diameter and 1 m thick (Fig. 3). The computations were carried out using the finite element program Plaxis. A 2D-axisymmetric model with 15-node triangular elements is used to examine the behavior of a free-standing pile and a piled raft in clay soil under vertical axial loading conditions. An exemplary finite element mesh is shown in Fig. 4. Close to the pile, a very fine discretization is used to ensure accurate results.
The model dimensions (a width equal to 20 times the pile diameter, or equivalently 5 times the raft width, and a depth equal to 2.5 times the pile length) were chosen to ensure that the numerical results are not affected by the boundary conditions. For the numerical modeling, a linear elastic material behavior was assumed for the concrete. To account for the nonlinear soil behavior, an elasto-plastic material behavior was considered for the soil elements. A hardening soil material model was adopted for the numerical modeling. This material model is considered an advanced model for soil simulation, where the elastic deformation is represented by three input values instead of one value as in the case of the Mohr-Coulomb model. The input moduli are the triaxial loading modulus (E_50) [Eq. (1)], the triaxial unloading modulus (E_ur) [Eq. (2)] and the oedometer modulus (E_oed). The difference between the input soil moduli is shown in Fig. 5.

E_50 = E_50^ref * [(c' cos φ' + σ'_3 sin φ') / (c' cos φ' + p^ref sin φ')]^m   (1)

E_ur = E_ur^ref * [(c' cos φ' + σ'_3 sin φ') / (c' cos φ' + p^ref sin φ')]^m   (2)

where E_50^ref = reference stiffness modulus at the reference confining pressure p^ref = 100 kPa; E_ur^ref = reference unloading stiffness modulus at p^ref = 100 kPa; c' and φ' = effective shear parameters; σ'_3 = minor principal effective (confining) stress; and m = a factor which represents the stress level dependency of the stiffness, whose value ranges between 0.4 and 1.0 according to the type of soil.

The soil stiffness in this model depends on the stress level and on the stress path (a short numerical illustration of this stress dependency follows the verification discussion below); moreover, the model takes the unloading and reloading behavior of the soil into consideration. The yield surface is not fixed but can expand due to straining; moreover, the dilatancy can be cut off, and the failure criterion is adapted according to the Mohr-Coulomb failure criterion. The model takes into account both shear and compression hardening [28,29] (it is sometimes called a double hardening model); shear hardening occurs due to primary deviatoric loading, while compression hardening occurs due to isotropic loading and primary compression. The hardening soil parameters used in the analysis are listed in Table 2. The numerical calculation is divided into several steps. In the first step, the initial stress state is generated by consideration of the soil elements only. Afterwards, the soil elements located at the pile position are removed and replaced by pile elements (wished in place), and the pile/raft own weight and contact conditions are activated. Finally, the prescribed settlements are applied to the top of the pile or to the raft. In the Katzenbach study [24], each case was investigated under three different settlements. For the free-standing pile case, the three displacement values were 0.005D (7.5 mm), 0.01D (15 mm) and 0.1D (150 mm). For the second case, Fig. 7 shows the distribution of skin friction along the pile length. Under smaller loads (up to 6 MN), the skin friction increases linearly with depth. Under the higher loads (12 MN), the peak of the skin friction is localized directly under the raft (up to 97 kN/m²) due to the high applied pressure. Figures 8 and 9 compare the numerical model developed by the authors with the numerical study by Katzenbach [24]. Figure 8 shows the comparison of the skin friction along the pile shaft for the free-standing pile, while Fig. 9 shows the skin friction distribution for the piled raft case. From Fig. 8, it is obvious that at the low stress level (up to 7.50 mm), the skin friction is almost constant along the pile length. By increasing the prescribed settlement from 7.5 mm to 15 mm, the mobilized skin friction increases and reaches its maximum value at the pile tip.
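As flagged above, the stress dependency of the hardening soil moduli in Eqs. (1) and (2) can be illustrated with a minimal sketch. The parameters below (cohesion, friction angle, exponent and reference moduli) are placeholders of a plausible magnitude for a stiff clay; they are not the calibrated Frankfurt clay values of Table 2.

```python
import math

def hs_modulus(e_ref: float, c: float, phi_deg: float,
               sigma3: float, m: float, p_ref: float = 100.0) -> float:
    """Stress-dependent stiffness of the hardening soil model, Eqs. (1)-(2).

    e_ref  : reference modulus at p_ref (kPa), e.g. E_50^ref or E_ur^ref
    c      : effective cohesion c' (kPa)
    phi_deg: effective friction angle phi' (degrees)
    sigma3 : minor principal effective stress (kPa)
    m      : stress-level exponent (0.4 to 1.0)
    """
    phi = math.radians(phi_deg)
    num = c * math.cos(phi) + sigma3 * math.sin(phi)
    den = c * math.cos(phi) + p_ref * math.sin(phi)
    return e_ref * (num / den) ** m

# Placeholder parameters (illustrative only, not the paper's Table 2 values).
c, phi, m = 20.0, 20.0, 0.8             # kPa, degrees, dimensionless
E50_ref, Eur_ref = 40_000.0, 120_000.0  # kPa

for sigma3 in (50.0, 100.0, 200.0, 400.0):  # confining stress in kPa
    e50 = hs_modulus(E50_ref, c, phi, sigma3, m)
    eur = hs_modulus(Eur_ref, c, phi, sigma3, m)
    print(f"sigma3 = {sigma3:6.1f} kPa  E50 = {e50:9.0f} kPa  Eur = {eur:9.0f} kPa")
```

At σ'_3 = p^ref the moduli reduce to their reference values; they then grow with confinement at a rate set by the exponent m, which is the stress dependency the verification model relies on.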
In the last stage, increasing the pile settlement from 15 mm to 150 mm produces sufficient relative displacement between the pile and the soil to mobilize both cohesion and friction to their maximum values. The effect of connecting the raft to the pile is presented in Fig. 9. Under the prescribed settlement, the settlement caused by the raft reduces the relative displacement between the pile and the surrounding soil, especially in the upper region near the raft. For a higher prescribed settlement, an additional deformation is produced whose value depends on the constitutive laws used, which is the reason for the large skin friction in the upper part. In the lower part, the relative displacement decreases with depth. There is very good agreement between our results and the previous numerical results reported by Katzenbach for small displacements up to 15 mm, but a relatively small deviation is observed, especially at the 150 mm settlement for the piled raft case, due to the higher contact stress under the raft compared with the free-standing pile at the same settlement. Based on these results, the numerical model has been verified and can be adopted for the further investigated cases [24,25].

Case study (Messeturm)

The foundation consists of bored piles with a diameter of 1.3 m, with pile lengths that differ according to their position in the raft. The bored piles are distributed in three rings: in the inner ring the pile length is 34.90 m, in the outer ring it is 26.90 m, and in the middle ring it is 30.90 m. The pile spacing varies from 3.5D up to 6D according to the position in the raft. The piles are concentrated mainly under the central core of the Messeturm, close to the heavy loads coming from the core, in order to reduce the straining actions in the raft and to control the differential settlement. The foundation level of the building is located 14 m below the ground surface; the total weight of the building is 1880.0 MN, with an average stress on the raft of 544.0 kPa, while the uplift force is about 276.0 MN. A simplified calculation approach was used for the preliminary design of the foundation system to determine the raft size and pile distribution. The total pile loads were assumed to depend on the mobilized skin friction and to carry 55% of the total load. The behavior of the foundation was monitored during the construction period and for more than 7 years after the end of construction by means of geodetic and geotechnical measurements, using 12 instrumented piles, 13 contact pressure cells, one pore pressure cell and three multi-point borehole extensometers, as shown in Fig. 10. The results of the field measurements indicate that the load-bearing behavior of the Messeturm piled raft foundation has been optimized. However, the design assumption that the piles would reach their ultimate bearing capacity under the settlements caused by the structural load, thereafter transferring any additional load increments to the raft, could not be confirmed by the field observations. The measured pile loads show that a much higher skin friction was mobilized than that determined for a single isolated bored pile.

Numerical model

The computations were carried out using the finite element program Plaxis 3D. A 3D numerical model with 10-node elements is developed to investigate the behavior of the piled raft foundation of the Messeturm founded on Frankfurt clay, and the results are compared with the numerical results of Sommer [26] and El-Mossallamy [6].
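Before detailing the Messeturm model, the stress dependency of the hardening soil stiffness in Eqs. (1) and (2) can be made concrete with a short script. The Python sketch below is ours and purely illustrative; the parameter values are placeholders for a stiff over-consolidated clay, not the calibrated values of Table 2.

import math

def hs_modulus(E_ref, sigma3, c=20.0, phi_deg=20.0, p_ref=100.0, m=0.8):
    """Stress-dependent stiffness of the hardening soil model, Eqs. (1)/(2).

    E_ref   : reference modulus at p_ref [kPa]
    sigma3  : minor principal effective stress [kPa]
    c, phi_deg : effective cohesion [kPa] and friction angle [deg]
    m       : stress-level exponent (0.4 to 1.0 depending on soil type)
    """
    phi = math.radians(phi_deg)
    num = c * math.cos(phi) + sigma3 * math.sin(phi)
    den = c * math.cos(phi) + p_ref * math.sin(phi)
    return E_ref * (num / den) ** m

# Illustrative reference moduli (placeholders, not Table 2 values).
E50_ref, Eur_ref = 45_000.0, 135_000.0   # kPa

for sigma3 in (50.0, 100.0, 200.0, 400.0):
    E50 = hs_modulus(E50_ref, sigma3)
    Eur = hs_modulus(Eur_ref, sigma3)
    print(f"sigma3 = {sigma3:6.0f} kPa -> E50 = {E50:9.0f} kPa, Eur = {Eur:9.0f} kPa")

Doubling the confining stress does not double the stiffness; the exponent m controls how quickly the moduli grow with stress level, which is the behavior Fig. 5 illustrates graphically.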
The finite element mesh is shown in Fig. 11. Close to the piles, a very fine discretization is used to ensure accurate results. The chosen model dimensions in x, y and z are 300.0, 300.0 and 90.0 m, respectively. These dimensions were chosen to be large enough to minimize the effect of the model boundaries on the numerical results (see Fig. 11). The model depth was taken as the maximum of 2.5 times the pile length and the depth at which the stress increase due to the building becomes smaller than 20% of the overburden pressure. The boundary conditions were applied as follows: 1. The ground surface is free in all directions.

Due to the nature of Frankfurt clay mentioned before (the clay was overloaded with average stresses ranging between 500.0 and 2500.0 kPa), the Over-Consolidation Ratio (OCR) ranges over a band of values, and it is difficult to determine the exact OCR without performing a consolidation test. There are many correlations relating OCR to other parameters, for example

$K_0^{oc} = K_0^{nc}\,(\mathrm{OCR})^{m}$

reported by Meyerhof (1976) [32], where $m = 0.5$; the OCR for Frankfurt clay in this case is 1.50. For the presented study, several values of OCR are investigated, using values above and below 1.50 (see Table 3). Another technique for defining the over-consolidation pressure, adopted in the hardening soil model, is the pre-overburden pressure (POP); the difference between the two is illustrated in Fig. 12.

For the raft, a linear elastic material model was adopted, which assumes a linear relationship between stress and strain. This material model needs two parameters: the modulus of elasticity (E) and the Poisson ratio (υ). For the raft, E = 30,000 MPa and υ = 0.167, while for the piles, E = 22,000 MPa and υ = 0.20. The piles were modeled as "embedded elements." In this case, the piles do not have a "real" volume or a "real" interface; instead, a virtual elastic zone is created by assigning an equivalent pile diameter within the material data set of the embedded pile. This virtual elastic zone disregards the plastic behavior of the soil within the zone and approximates the behavior of an actual volume pile. On the other hand, due to the "virtual" volume and interface, the effect of the strength reduction factor (R_inter) cannot be evaluated; R_inter is taken as rigid (1.0), with the assumption that the interface strength is not reduced with respect to the strength of the surrounding soil.

The steps of the numerical model are summarized as follows:
1. First, the in-situ stress state (also called the primary stress condition) is generated; in this step, only the own weight of the soil domain is activated.
2. Then, the soil is excavated to a depth of 8 m below ground level. This is modeled by deactivating the soil elements from the ground surface down to 8.0 m; note that the excavation sides are kept in equilibrium by supporting them horizontally in the x- and y-directions.
3. In the third step, the piles are installed as embedded elements by activating the beam elements and the contact conditions along them in the three rings (inner, middle and outer).
4. Subsequently, a further excavation down to 14 m below ground level is carried out; this step is necessary in order to construct the concrete piled raft. Then the own weight of the raft and the interface elements between the raft and the subsoil are activated, taking the uplift water pressure into consideration.
This means the own weight is reduced by almost 60 kN/m².
5. Finally, the vertical loading representing the own weight of the high-rise building is applied on the upper surface of the raft.

Numerical results

Figure 13 shows the verification of the hardening soil model results for four different OCR values: 1.25, 1.5, 1.75 and 2.0. The numerical results show that the curve with an OCR value of 1.50 is nearest to the measured one, with a total settlement of about 12.27 cm compared with the measured value of 12.0 cm under the same vertical loading; it is also apparent, however, that the effect of the OCR value on the behavior is minor. Table 3 summarizes the results of the hardening soil model for these OCR values. When the OCR slightly increases, the soil stiffness increases and the soil carries a larger share of the load; consequently, the pile load coefficient (α_L) is reduced from 57% to 54%. When the pre-overburden pressure (POP) technique is used with POP = 700.0 kPa, the computed settlement is 113 mm, close to the value obtained with OCR = 1.50.

Figure 14 shows the differential settlement between the center of the raft and its edge as a function of the applied vertical stress on the raft. It can be noticed that for the three presented results (the numerical modeling of the authors, the measured values and the results reported by El-Mossallamy [6]), the differential settlement increases with the applied vertical stress. Our numerical results show good agreement but are slightly smaller than the observed values; this means that the numerical model of the piled raft is somewhat stiffer than the real foundation. Figure 15 shows the settlement contours after application of the total vertical load. Under the own weight of the raft, the vertical settlement was almost constant, with a maximum value of 2.4 cm; applying the total vertical load on the raft increased the vertical settlement of the piled raft system up to 12.0 cm (OCR = 1.50). Figure 16 shows the skin friction distribution for a selected pile in the middle ring. The embedded pile element in Plaxis 3D directly outputs the skin friction as a force per unit length, which can then be converted manually to skin friction along the pile length. The results in Fig. 16 show good agreement with the previous results obtained by El-Mossallamy [6]; both show a comparable distribution down to a depth of 20.0 m. The maximum skin friction was almost 130 kN/m², which is comparable with the measured value of 140.0 kN/m². This confirms the applicability of the numerical model for predicting the behavior of piled rafts in over-consolidated clay.

Conclusions

This article presents the results of a numerical analysis of piled raft foundations on over-consolidated clay using 3D finite element analysis. First, the numerical model was verified by comparing our results with a previous numerical study reported by Katzenbach for a free-standing pile and for a pile connected to a circular raft. The comparison shows very good agreement between the results under small loading conditions, with a relatively small deviation observed at higher vertical loading. Then, a finite element model of the Messeturm piled raft foundation was developed.
The hardening soil material model is used to simulate the over-consolidated Frankfurt clay; moreover, the embedded beam element concept was adopted for the piles, as it is a quick tool for simulating piles under service loads, though not for precisely predicting the ultimate pile load capacity. The numerical results show good agreement with the measurements made for the Messeturm foundation regarding the raft settlement and the skin friction distribution along the piles. Finally, this study indicates that the piled raft foundation concept has significant advantages compared with conventional foundation systems and can be considered an optimized solution for high-rise buildings founded on over-consolidated clay.
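As a closing aside, the manual conversion mentioned above for Fig. 16, from the embedded pile output (skin force per unit length) to unit skin friction, is simply a division by the pile perimeter. A minimal sketch, with an illustrative input value rather than actual model output:

import math

def unit_skin_friction(t_skin_kn_per_m: float, diameter_m: float) -> float:
    """Convert embedded pile skin force per unit length [kN/m]
    to unit skin friction [kN/m^2] by dividing by the pile perimeter."""
    return t_skin_kn_per_m / (math.pi * diameter_m)

# Illustrative value only: a 1.3 m diameter pile mobilizing ~531 kN/m
# corresponds to roughly the 130 kN/m^2 peak reported for the middle ring.
print(unit_skin_friction(531.0, 1.3))   # ~130.0 kN/m^2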
Reducing length of stay for acute diabetic foot episodes: employing an extended scope of practice podiatric high-risk foot coordinator in an acute foundation trust hospital

Background: To enhance the acute management of people with diabetic foot disease requiring admission, an extended scope of practice podiatric high-risk foot coordinator position was established at the Great Western Hospital, Swindon in 2010. The focus of this new role was to facilitate more efficient and timely management of people with complex diabetic foot disease. The aim of this project was to investigate the impact of the podiatric high-risk foot coordinator role on length of stay, rate of re-admission and bed cost.

Method: This study evaluated the difference in length of stay and rate of re-admission between an 11-month pre-pilot period (November 2008 to October 2009) and a 10-month pilot period (August 2010 to June 2011). The estimated difference in bed cost between the pre-pilot and pilot audits was also calculated. Inclusion criteria were restricted to inpatients admitted with a diabetic foot ulcer, gangrene, cellulitis or infection as the primary cause for admission. Eligible records were retrieved using ICD-10 (V9) coding via the hospital clinical audit department for the pre-pilot period, and a unique database was used to source records for the pilot phase.

Results: Following the introduction of the podiatric high-risk foot coordinator, the average length of stay reduced from 33.7 days to 23.3 days (mean difference 10.4 days, 95% CI 0.0 to 20.8, p = 0.050). There was no statistically significant difference in re-admission rate between the two study periods: 17.2% (95% CI 12.2% to 23.9%) in the pre-pilot phase and 15.4% (95% CI 12.0% to 19.5%) in the pilot phase (p = 0.820). The extrapolated annual cost saving following the implementation of the new coordinator role was calculated to be £234,000 for the 2010/2011 year.

Conclusions: This audit found that the extended scope of practice coordinator role may have a positive impact on reducing length of stay for diabetic foot admissions. This paper advocates the role of a podiatric high-risk foot coordinator utilising an extended scope of practice model, although further research is needed.

Background

Diabetic foot disease is characterised by peripheral arterial disease, peripheral neuropathy, ulceration, infection, joint deformity, joint destruction and amputation [1,2]. Management of the diabetic foot may include regular outpatient consultation; frequent presentation to accident and emergency; extended antimicrobial therapy; prolonged hospitalisation; and emergency amputation, all of which have a significant personal and financial impact on the individual and society [3]. The inpatient management of the diabetic foot is of equal importance to outpatient management, as patients admitted for an acute diabetic foot condition are particularly vulnerable to poor outcomes, with emergency management often necessary. The length of stay for patients with diabetes can be prolonged [4-6], with various factors compounding the difficulty in resolving foot complaints. If investigations, interventions, consultations and care planning are not coordinated during an inpatient stay by appropriately skilled and experienced health professionals, it is our belief that length of stay is extended, re-admission is more likely, and poorer clinical outcomes can be expected.
Over the past 20 years, evidence has accumulated in support of the multidisciplinary team model for the prevention and management of diabetic foot complications in the outpatient setting, although little focus has been given to the inpatient setting [7-11]. Within the clinical management of diabetes, there is recognition that coordination of care for inpatients is fundamental; however, this is yet to be formally recognised within clinical guidelines. The National Health and Medical Research Council in Australia has produced guidelines which identify a general need for improved coordination and multi-disciplinary care planning; however, these documents fail to provide any specific detail on inpatient management [12,13]. Similar recognition has occurred in the United Kingdom (UK). The National Institute of Clinical Excellence (NICE) has sought to address this gap in diabetic foot management with the publication of its 2011 diabetic foot management guidelines [14]. The NICE guidelines recommend that inpatient management of the diabetic foot requires particular attention. Specifically, they strongly recommend that one health professional should be responsible for coordinating inpatient care between specialists, the patient and other health professionals. The guideline advocates that this pivotal professional oversee the multidisciplinary coordination of care, schedule relevant interventions and investigations, and ensure appropriate and timely discharge planning.

In the UK and Australia, a medical practitioner would commonly fill the role described by the NICE guidelines, as many of the responsibilities would be outside the scope of practice for a registered podiatrist working in the public health sector. Functions and procedures such as admission of patients requiring inpatient care, requesting and interpreting complex radiological imaging, and surgical debridement are currently beyond the scope of practice for podiatrists employed in the public health system in Australia. Internationally, there are many examples of professions seeking and acquiring pathways to extended scope of practice [15-18]. The adoption of the nurse practitioner model is one such example, where nurses have successfully acquired an extension to their scope of practice through the acquisition of higher academic qualifications and vocational training [19]. The example of podiatric surgery in Australia and the UK is another case where advanced degree qualification and vocational training advanced the scope of practice for podiatrists, further integrating podiatry into the existing medical model [20]. Within the international podiatry community, there is a long-held acknowledgement that high-risk foot care is a specialist branch of the profession. In the absence of a regulated post-registration career path for this discipline, there remains no official recognition of this role via specialist registration in Australia or the UK [21].

With this background in mind, the development of the diabetic foot team at Great Western Hospital, Swindon, UK has been evolving since 2004. Initially the team consisted of a vascular surgeon, an endocrinologist, a podiatric surgeon, and a part-time diabetes podiatrist. Patients were managed by the individual intra-disciplinary teams, with ad-hoc cross-referrals made to other specialists as deemed necessary. Over time the team recognised that there were many inadequacies with this clinical structure and sought ways to improve it.
A weekly multidisciplinary ward round was commenced in 2009. The team also identified that, in order to facilitate improved coordination and efficiency of inpatient management, a diabetic foot coordinator was required full time, five days per week. It was envisaged that this role would facilitate appropriate and timely investigations and management in an attempt to enhance patient care. The Great Western Hospital was keen to recruit into the coordinator role a podiatrist with formalised postgraduate training. A Masters level postgraduate qualification (i.e. a specialist podiatry-related Masters degree); training and experience in advanced imaging, haematological assessment and interpretation; and experience in managing the diabetic foot were identified as desirable selection criteria. The job description was essentially a hybrid of the UK National Health Service job description for a podiatric surgeon, although surgical training was not an essential criterion for the new position. In August 2010, the Great Western Hospital initiated the new position, which was held by a podiatric surgical registrar who had several years of hospital high-risk foot experience. This new position was on-call during office hours, five days a week. The aim of this pilot position was to evaluate the benefits of having a podiatric coordinator to oversee the management of patients admitted with acute diabetic foot complications. The extended scope of practice role description of the podiatric high-risk foot coordinator can be found in Table 1. The Great Western Hospital aimed to create a position that would bring together and enhance the performance of the existing specialist team. It was intended that the role would provide the best possible coordinated care to the patient, free of the professional scope of practice barriers which render many multidisciplinary teams less effective. The aim of this project was to evaluate a new model of care involving an extended scope of practice podiatric high-risk foot coordinator.

Methods

The study was an audit designed to evaluate the effectiveness of the podiatric high-risk foot coordinator position. Effectiveness was measured by the change in length of stay for patients admitted with acute diabetic foot complications before and after the implementation of the podiatric high-risk foot coordinator position. The authors acknowledge that length of stay is only one variable that may be associated with a modified model of care, so to complement the length of stay data, re-admission and cost data were also collected and analysed for comparison between the two data collection periods. Ethics approval was not required to undertake this study as it was classified as a clinical audit. A retrospective medical record audit was performed at Great Western Hospital Primary Care Trust between 1st November 2008 and 1st October 2009 to determine the average length of stay for eligible diabetic foot admissions in the pre-pilot period. Inclusion criteria for the audit were restricted to inpatients admitted with a diabetic foot ulcer, gangrene, cellulitis or infection as the primary cause for admission. Eligible records were retrieved using ICD-10 coding via the hospital's clinical audit department. The ICD-10 codes used to identify eligible records are displayed in Table 2. Nine months after the commencement of the podiatric high-risk foot coordinator position (2nd August 2010 to 30th June 2011), the pilot audit cycle was completed.
Table 1 Role description of the extended scope of practice podiatric high-risk foot coordinator

Patient admission:
• Selection of admitting discipline and processing.

Inpatient management:
• Requesting and interpreting haematological analyses.
• Requesting and performing deep tissue samples for microbiology and histopathology.
• Requesting and interpreting radiological imaging including plain x-ray, ultrasonography, CT, CT-PET and MRI.
• Requesting and interpreting specialised vascular imaging including MRI angiography and duplex ultrasonography.
• Coordinating inter-specialist, nursing and allied health referral.
• Liaison with microbiology for antibiotic management.
• Participating in multidisciplinary vascular and endocrinology team meetings.

Emergency and prophylactic surgery (not essential for the role, but advantageous):
• Coordinating and performing surgical procedures.
• Requesting and arranging peri-operative care in conjunction with junior medical staff, including sliding scales, blood and platelet transfusions, etc.

Discharge planning and outpatient management:
• Outpatient podiatric surgery clinics for assessment and planning of elective prophylactic diabetic foot surgery and ongoing management of acute and chronic Charcot foot complications.

Pilot period data was identified using the same inclusion and exclusion criteria as the pre-pilot period data. Pilot period data was retrieved from a unique database established by the podiatric coordinator for the purpose of this audit. Measuring the quality of inpatient diabetes care is difficult, and a variety of outcome measures have been previously used, including quantitative and qualitative measures. Length of stay is a quantitative outcome measure that is well established, routinely measured and considered to be of economic importance. Length of stay has been previously utilised to measure the impact of new service delivery models in diabetes care [4-6] and has achieved acceptance as a reasonable surrogate to assess quality of care. For this reason we chose to use length of stay in this study as our primary outcome measure of success. Length of stay was defined as the duration of a single episode of hospitalisation, with no re-admission within 48 hours of discharge. Length of stay was calculated by subtracting the day of admission from the day of discharge. Patients entering and leaving hospital on the same day were allocated a length of stay of one. Re-admission was defined as admission for another episode of diabetic foot infection after 48 hours but no more than six months post initial discharge. The re-admission needed to be directly associated with the location of the preceding foot infection. Statistical analyses must account for the grouping of the data due to multiple responses from patients both within and between sampling periods. Such grouping may violate the assumptions of classical statistical inference that underlie, for example, t-tests for continuous data and chi-square tests for dichotomous data. Consequently, we have applied the extensions of generalised linear model theory [22], modelling that permits the comprehensive incorporation of both continuous and dichotomous data, to mixed-effects models [23], which permit grouping of the data. Statistical testing in mixed-effects models is carried out by likelihood-ratio tests with a chi-square distribution; p-values are quoted throughout, and p-values of ≤0.05 were considered statistically significant.
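As an illustration of this approach (this is our sketch, not the authors' actual analysis code), the likelihood-ratio test for a period effect with patient-level grouping could be written in Python with statsmodels as follows; the data values and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical audit extract: one row per admission episode.
df = pd.DataFrame({
    "los":     [35, 28, 41, 22, 19, 25, 30, 18],                 # length of stay (days)
    "period":  ["pre", "pre", "pre", "pilot", "pilot",
                "pilot", "pilot", "pre"],                         # audit period
    "patient": ["A", "B", "C", "A", "D", "E", "F", "D"],          # repeat admissions group
})

# Mixed-effects models with a random intercept per patient. ML fits
# (reml=False) are required for a likelihood-ratio test of fixed effects.
full = smf.mixedlm("los ~ period", df, groups=df["patient"]).fit(reml=False)
null = smf.mixedlm("los ~ 1", df, groups=df["patient"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)   # likelihood-ratio statistic
p = chi2.sf(lr, df=1)            # one extra fixed-effect parameter
print(f"LR = {lr:.2f}, p = {p:.3f}")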
The estimated difference in cost between the pre-pilot audit and the pilot audit was calculated using the average bed cost of UK£250.00 per bed day provided by the Great Western Hospital Finance Department.

Results

There were 34 episodes identified in the initial audit that met the inclusion criteria. Seventy-five episodes met the inclusion criteria for the pilot period. The average length of stay for the pre-pilot phase was 33.7 days compared with 23.3 days in the pilot phase (mean difference 10.4 days, 95% CI 0.0 to 20.8, p = 0.050). The extrapolated annual cost saving following the implementation of the podiatric high-risk foot coordinator and the resulting reduction in length of stay was calculated to be UK£234,000 for the 2010/2011 year. Table 3 provides a breakdown of the figures used to calculate this cost saving.

Discussion

Length of stay is reported to be a reasonable surrogate measure of the quality of hospital diabetes care [4-6] and was used in the current study to assess the impact of the new podiatric high-risk foot coordinator role. Our audit identified a significant reduction in length of stay for diabetic foot admissions following the introduction of our new model of care. Our findings are consistent with previous studies which have used length of stay to measure the impact of new models of inpatient diabetes care. Flanagan et al. reported a significant reduction in the average length of stay of inpatients with diabetes over a five-year period following the introduction of a specialised inpatient diabetes team at their hospital [4]. In the Flanagan et al. study, length of stay was reported to have reduced most markedly for emergency admissions. This is a useful comparison with the results of the current study, which also involved acute diabetic foot admissions. Re-admission of patients with diabetic foot complications is a well-recognised phenomenon. While critics may suggest that reducing length of stay will increase the rate of re-admission, there is little evidence in the literature to support this claim [24]. Our study found that there was no increase in re-admission rate among the cases included in this study. The authors acknowledge that re-admission is a complex issue with many potential confounding variables, and that more controlled studies are needed before an association can be confidently made between length of stay and re-admission. Both the outpatient and inpatient costs attributed to diabetic foot complications are high; however, it is well acknowledged that inpatient care contributes the greatest cost due to the intensity and complexity of care at this level [25]. Stockl and colleagues estimated that 77% of the overall cost associated with the treatment of the diabetic foot ulcers included in their study was attributed to hospitalisation [26]. Daultrey and colleagues report that length of stay in the National Health Service is increased for patients with diabetes due to the complexity of the disease and the lack of specialised inpatient care [27]. The same study reports that increased length of stay contributes to 80,000 excess bed days per year across England. Consistent with previous findings [26,27], our study found that a cost saving of £234,000.00 per annum was associated with the reduction in length of stay.
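Table 3 itself is not reproduced here, but one reading of the extrapolation that is consistent with the figures quoted above (75 episodes over the 10-month pilot, a 10.4-day mean reduction, and £250 per bed day) is sketched below; the annualisation step is our assumption, not a quoted method.

from datetime import date

# Length of stay per the audit definition: discharge day minus admission day,
# with same-day episodes counted as one day.
def length_of_stay(admitted: date, discharged: date) -> int:
    return max((discharged - admitted).days, 1)

# Assumed reconstruction of the Table 3 cost extrapolation.
pilot_episodes  = 75                       # episodes in the 10-month pilot period
annual_episodes = pilot_episodes * 12 / 10 # ~90 episodes per year
los_reduction   = 33.7 - 23.3              # 10.4 days per episode
bed_cost        = 250.0                    # GBP per bed day

saving = annual_episodes * los_reduction * bed_cost
print(f"~GBP {saving:,.0f} per year")      # ~GBP 234,000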
We acknowledge that our results cannot be directly compared with previous studies, as we were unable to identify previously published research measuring an association between bed cost and new models of care such as the podiatric high-risk foot coordinator. We can, however, suggest that our new model of care has had an indirect impact on bed cost by achieving a reduction in length of stay. We appreciate the importance of appropriate management in the outpatient setting; however, we wish to emphasise the crucial and frequently neglected role of coordinated care at the inpatient level. Diabetic foot infection requiring admission is considered a medical emergency and may be accompanied by a threat to limb or life [28-31]. An acute diabetic foot admission requires rapid and well-coordinated care to ensure a successful clinical outcome for the patient [32]. Previous authors have identified the variability in the assessment and management of patients admitted for acute diabetic foot complications [33]. Lawrence et al. [34] found that even simple recognition of previous amputation and inspection of pulses are often not recorded in medical records. The need for expeditious and high-level clinical care for inpatients was one of the primary motivations for the Great Western Hospital to initiate the new role described in this article. The hospital recognised that it was necessary for the new role to carry a coordinating responsibility and be equipped with a broad scope of practice to facilitate the most timely and effective care of inpatients. The literature supports the role of a single professional coordinating inpatient care, and there is evidence that a podiatrist-led team is successful in reducing amputation rates [35,36]. The high-risk foot coordinator is, however, only as effective as the members of the group. All group members must be allowed to make contributions and perform the needed functions of the group free of inhibitions and professional boundaries. An effective coordinator must be a good leader, with the skills to collate opinions and prepare a mutually agreeable care plan without any one discipline attempting to dominate the team. We found that the success of the inpatient team relied on having the podiatric high-risk foot coordinator present at the multidisciplinary team meetings, as well as reviewing patients at the time of admission and coordinating the ward rounds. With this level of involvement the coordinator was able to facilitate key investigations and interventions promptly and without delay. Opinions from the various non-team specialists were also more easily obtained, owing to a change from traditional methods of communication which relied on referral forms that could take days to be answered. We found that the surgical skills the incumbent podiatrist brought to this new role were beneficial but not essential to the success of the position. We would like to emphasise that the results achieved in this audit were not dependent on the surgical skill set of the podiatrist who filled the role, as the number of procedures performed by the high-risk foot coordinator was small; a consultant podiatric surgeon performed the majority of the surgical work during the audit period. We believe that any future development of the new role described in this paper needs to be underpinned by structured postgraduate training. In the past, innovative specialist areas in podiatry relied on charismatic individuals to form relationships and 'gentlemen's agreements' [21].
However, the authors of this paper, along with other prominent commentators on this topic, advocate that the acute diabetic foot demands an appropriately trained and recognised expert care provider. A Masters level qualification supported by internships in medicine, pharmacology, microbiology, surgery and radiology, combined with independent prescribing qualifications, would be a suitable benchmark to establish. The authors acknowledge that there are a number of significant limitations to this study. The pre-pilot period data was collected retrospectively while the pilot data was collected prospectively; the pre-pilot data is therefore not as robust as the pilot data, due to the errors associated with retrospective data analysis. The authors acknowledge this, and every effort was made to match the data between the two cohorts to ensure the most reasonable comparison could be made. Cross-referencing of patients identified in the retrospective medical record audit was performed by manually searching the patient record to confirm that the eligible ICD-10 code was recorded correctly. In addition, when calculating the average length of stay, a mathematical assumption was made that the two samples were independently and identically distributed. The authors also acknowledge that there are seasonal variations in acute diabetic foot presentations to hospitals. The seasons covered varied between the pre-pilot and pilot data collection periods in this study, although there were ten months in common; it is therefore unlikely that this issue had a significant impact on the results. The period of data collection differs between the pre-pilot and pilot periods by one month. While every effort was made by the authors to collect one calendar year of data for each group, this was not possible due to administrative requirements of the hospital; the data collection period for the pilot phase was constrained by the funding and reporting requirements of the hospital. The difference in sample size between the pre-pilot and pilot periods is noted by the authors, and two explanations are offered for this variation. The unreliable nature of retrospective data auditing is a likely reason for the lower numbers in the pre-pilot group, despite the best efforts of the authors to identify as many eligible records as possible. An additional explanation is the 'honeymoon phenomenon' that often occurs with the establishment of new services, whereby referrals initially increase with growing awareness among hospital staff of the benefit the service can bring. It is also reasonable to assume that patients were identified and coded more accurately following the commencement of the podiatric high-risk foot coordinator in the emergency department of the hospital. This study has revealed some interesting findings, which we believe would benefit from further investigation. Future parallel-group randomised trials would be of value to evaluate the true impact of high-risk foot coordinators on the outcomes we have measured in our study. Any future prospective research on this topic would benefit from an expansion of the outcome measures being evaluated. In addition, we believe that the complex nature of the diabetic foot condition lends itself to further exploration using mixed-method research. Qualitative data examining the patient perspective on different models of diabetic foot care would be an invaluable contribution to further research on this topic.
Qualitative enquiry would complement and expand on quantitative methods, which are well acknowledged to be limited in the depth and breadth of knowledge they deliver to the researcher.

Conclusion

This study supports the recommendation made by NICE that a coordinator for acute diabetic foot admissions is a valuable asset to the hospital diabetes team. The study found that the implementation of the new coordinator position at the Great Western Hospital was associated with a reduced length of stay for diabetic foot admissions during the study period. The coordinator role also ensured timely and appropriate discharge planning, which should, in turn, prevent future admissions and unnecessary major amputations. We believe that there is a demand for specialisation in the high-risk foot, which can be met with the establishment of a podiatric high-risk foot coordinator role. Although further research is needed, the authors anticipate that the results of this audit and the ideas discussed in this article may generate debate on this topic.
Period-luminosity and period-luminosity-colour relations for Mira variables at maximum light

In this paper we confirm the existence of period-luminosity (PL) and period-luminosity-colour (PLC) relations at maximum light for O and C Mira variables in the LMC. We demonstrate that in the J and H bands the maximum light PL relations have a significantly smaller dispersion than their counterparts at mean light, while the K band and bolometric PL relations have a dispersion comparable to that at mean light. In the J, H and K bands the fitted PL relations for the O Miras are found to have smaller dispersion than those for the C Miras, at both mean and maximum light, while the converse is true for the relations based on bolometric magnitudes. The inclusion of a non-zero log period term is found to be highly significant in all cases except that of the C Miras in the J band, for which the data are found to be consistent with having constant absolute magnitude. This suggests the possibility of employing C Miras as standard candles. We suggest both a theoretical justification for the existence of Mira PL relations at maximum light and a possible explanation of why these relations should have a smaller dispersion than at mean light. The existence of such maximum light relations offers the possibility of extending the range and improving the accuracy of the Mira distance scale to Galactic globular clusters and to other galaxies.

INTRODUCTION

Miras are long period variable stars lying on the asymptotic giant branch (AGB) of the HR diagram, with periods in the range 100 to 700 days. Their masses lie between 0.5M⊙ and 3M⊙, and their K band mean absolute magnitudes lie in the range −5 < MK < −7 (c.f. Wood 1995). Whitelock (1995) reviews a number of reasons for the astrophysical importance of Mira variables, highlighting in particular their suitability as distance indicators - a fact which makes them useful tracers of galactic structure. The use of Miras as distance estimators relies upon the existence of period-luminosity (PL) and period-luminosity-colour (PLC) relations at mean light, which may be calibrated with nearby stars whose distance is otherwise known and then applied to more remote objects to estimate their distance. In e.g. Feast et al. (1989, hereafter F89) PL and PLC relations were derived for a calibrating sample of about 50 oxygen-rich (O) and carbon-rich (C) Miras in the Large Magellanic Cloud (LMC), using time-averaged mean J, H, K and bolometric magnitudes. F89 found that the O Miras displayed a well-defined relation in the K band, and also relations based on J, H and bolometric magnitudes, but with a larger dispersion in these latter three cases. For the C Miras F89 confirmed the existence of a PL relation in the K band. These derived relations were then applied to determine distance moduli to a number of galactic globular clusters. Further motivation for the present work comes from Kanbur & Hendry (1996, hereafter KH), who derived V band PL and PLC relations at maximum light for a sample of Cepheids in the LMC, previously published in Martin, Warren & Feast (1979). KH outlined specific physical reasons why the use of Cepheid maximum light relations might be preferable to those at mean light, developing the earlier theoretical work of Simon, Kanbur & Mihalas (1993). In a similar manner, in this paper we derive maximum light PL and PLC relations for Mira variables and discuss a possible physical justification for their existence.
In particular we consider a physical explanation for the smaller observed dispersion of maximum light relations when compared with the corresponding relations at mean light. For completeness we also consider PL and PLC relations at minimum light and compare them with their counterparts at mean and maximum light.

A number of authors (c.f. Sandage 1958, Madore & Freedman 1991, KH) have discussed the theoretical justification for the existence of PL and PLC relations for Cepheid variables, deriving the so-called pulsation equation

$\log P + \frac{1}{2} \log M - \frac{3}{4} \log L + 3 \log T_e = \log Q$   (1)

where P, M, L and Te are the period, total mass, equilibrium luminosity and effective temperature respectively of the star, and Q is a slowly varying function of stellar parameters. Cepheids occupy an instability strip of finite width in the HR diagram. If a similar situation holds for Miras (Feast 1989, Wood 1990, Shibahashi 1993), then this equation can be used to explain the existence of PL and PLC relations for both Miras and Cepheids, since it assumes only the period-mean density theorem and the Stefan-Boltzmann law. Mira and Cepheid PL relations arise from the collapse of equation (1) over the variables log M and log Te. In the case of Miras, however, the equilibrium luminosity is a strong function of the core mass (Shibahashi 1993). Assuming that the equilibrium luminosity is close to the mean luminosity over a pulsational cycle, both the range of core masses and total masses therefore contribute to the scatter in a PL relation at mean light for a Mira of a given period. We discuss the effect of metallicity on the scatter of the PL relation in section 4 below. If it is the case that

$R \simeq R_{\rm max}$   (2)

that is, the equilibrium photospheric radius of the star is approximately equal to the photospheric radius at maximum light, then, following essentially the same reasoning as in KH, we can use the period-mean density theorem and the Stefan-Boltzmann law to write

$\log P + \frac{1}{2} \log M - \frac{3}{4} \log L_{\rm max} + 3 \log T_{\rm max} = \log Q$   (3)

where Lmax and Tmax denote the luminosity and temperature at maximum light. In the case of Cepheids, Cox (1974) provides good evidence that equation (2) is a reasonable assumption. Support for the validity of equation (2) in the case of Mira variables is given in Wood (1995) and references therein. Assuming equation (2) to be valid for Miras, equation (3) can then be used to justify theoretically the existence of PL and PLC relations at maximum light for these stars, as a result of collapsing equation (3) over the variables log M and log Tmax.
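For completeness, a minimal sketch of the algebra behind equations (1) and (3), combining the period-mean density theorem with the Stefan-Boltzmann law (this working is ours, not quoted from F89 or KH):

$$P \propto \bar{\rho}^{-1/2}, \qquad \bar{\rho} \propto \frac{M}{R^3} \;\Rightarrow\; P \propto M^{-1/2} R^{3/2},$$
$$L = 4\pi R^2 \sigma T_e^4 \;\Rightarrow\; R \propto L^{1/2} T_e^{-2},$$
so that
$$P \propto M^{-1/2} L^{3/4} T_e^{-3} \;\Longleftrightarrow\; \log P + \tfrac{1}{2}\log M - \tfrac{3}{4}\log L + 3\log T_e = \mathrm{const} \equiv \log Q.$$
Replacing $(L, T_e)$ by $(L_{\rm max}, T_{\rm max})$ under the assumption $R \simeq R_{\rm max}$ of equation (2) then yields equation (3) directly.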
In the Cepheid case, Simon, Kanbur & Mihalas (1993) showed that at maximum light the range of photospheric temperatures is significantly smaller than the range of effective temperatures at mean light. Motivated by this work, KH suggested that Cepheid PL and PLC relations at maximum light could have significantly smaller dispersion than at mean light - a result which was investigated in detail in KH. A similar effect may not be present for Mira variables, but another advantage of the use of maximum light is nonetheless apparent for Miras. Their pulsations, like those of Cepheids, are envelope phenomena, with energy modulation and amplitude limitation occurring in the outer envelope. The maximum luminosity depends on the envelope mass as well as the core mass. In equation (1), on the other hand, the equilibrium luminosity is strongly dependent on the core mass (Shibahashi 1993). Thus the quantities P and M in equation (1) have dependencies on both the envelope and core mass, whereas L is strongly dependent on the core mass. All the quantities in equation (3) have dependencies on the core and envelope mass. We conjecture that, even if the range of Tmax were no different from the range of Teff, this situation could lead Mira PL and PLC relations at maximum light to have smaller dispersion than their counterparts at mean light. Further work is needed to examine this proposition.

This paper is organised as follows. Section 2 describes the LMC calibrating data, and the methods used to derive PL and PLC relations and evaluate their statistical significance. In sections 3 and 4 we present our results for PL and PLC relations respectively, which are then discussed further and compared with those of F89 in section 5, highlighting some important consequences for the use of Miras as probes of galactic structure. Finally, in section 6 we present our conclusions and possibilities for further study.

DATA

The data used in this study were taken from Glass et al. (1990), which was also the primary reference for the analysis of F89. These data consisted of multi-epoch observations at a number of wavelengths of a large sample of O and C Miras in the LMC. O Miras are oxygen-rich objects whilst C Miras are carbon-rich; the classification of each star as a C or O Mira can be made from spectral type (if known) or from colour measurements, and we adopt the same classifications as those published in F89. In addition we adopt the Mira periods as given in Glass et al. (1990); since the average number of epochs of observation for each star in the Glass et al. study was more than eleven, with good phase coverage, it is unlikely that the published periods are subject to any significant uncertainty. The data for the O and C Miras are summarised in Tables 1 and 2 respectively. These Tables list the star name, its period, taken from Glass et al. (1990), and the mean, maximum and minimum magnitudes in the J, H, K and bolometric wavebands. All magnitudes in Tables 1 and 2 have been corrected for extinction following F89, assuming AJ = 0.06, AH = 0.03 and AK = 0.02. We adopted as the maximum and minimum magnitude simply the maximum and minimum observed value (or interpolated value in the case of bolometric magnitude) reported in Glass et al. (1990). Mean magnitudes were calculated as the average of the maximum and minimum observed (interpolated) magnitudes, which was also the definition of mean magnitude adopted for the relations derived in F89. Multi-epoch observations were available in Glass et al. (1990) for 48 of the 49 Miras studied in F89; in the case of the star 'GR13' only the mean magnitudes published in F89 were available. Using these data we carried out linear regression fits to PL relations of the form

$\bar{m} = a + b \log P$   (4)
$m_{\rm max} = a + b \log P$   (5)
$m_{\rm min} = a + b \log P$   (6)

where m denotes apparent J, H, K and bolometric magnitude, corrected for extinction, as appropriate, and a and b are constants. As will be clear from Figures 1 to 6 in Section 4, in most cases the existence of a tightly correlated PL relation at mean and maximum light was immediately evident from a scatterplot of apparent magnitude against log period. Notwithstanding this, we considered it instructive - particularly for the more marginal PL relations - to determine quantitatively the statistical significance of including a log period term in each of our regression fits. In order to do this we applied the same statistical test which was introduced in KH, and which is described in detail in the appendix of that paper, involving the partial multiple correlation coefficient, ρ, of the regression (c.f. Graybill 1976).
If ρ equals zero then the log period term makes no contribution to a reduction in the dispersion of the fit and is effectively redundant. For each regression we computed the sample value of ρ, denoted by $\hat{\rho}$. Under the null hypothesis that the true value of ρ is equal to zero, $\hat{\rho}^2$ has an F distribution (c.f. KH). We also carried out fits to PLC relations of the form

$\bar{m} = a + b \log P + c\,\overline{(J-K)}$   (7)
$m_{\rm max} = a + b \log P + c\,(J-K)_{\rm max}$   (8)
$m_{\rm min} = a + b \log P + c\,(J-K)_{\rm min}$   (9)

where (J − K) denotes dereddened colour, and also to the corresponding equations for (J − H) colour. We defined the maximum and minimum colour as the colour at the phase at which, respectively, the maximum and minimum magnitude was observed. We applied the same statistical test as for the PL relations to determine the significance of adding the colour term in each PLC relation. We compared the fitted relations obtained using mean, maximum and minimum magnitudes as defined above with those derived using magnitudes calculated from a first order fourier fit to the light curve of each Mira. In all cases we found no significant difference in the slopes, zero points and dispersions of the fitted relations at either maximum or mean light. The same conclusion regarding the robustness of mean magnitudes was reached in F89, where mean values obtained from averaging the maximum and minimum magnitudes were compared with the average of maximum and minimum intensities, and also with the results of fourier fits to both magnitudes and intensities. The robustness of the mean and maximum PL relations to the choice of definition for mean and maximum light is in complete accordance with the results of Hendry, Kanbur & Clarke (1997, in prep.), in which we investigate the statistical properties of various different estimators of mean and maximum light - including those adopted here and those derived from fitting low order fourier series - as a function of number of sampled phase points, phase coverage, light curve shape and limiting magnitude.
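As a concrete illustration of the procedure just described, the following Python sketch (ours, not the authors' code; the data values are placeholders, not Tables 1-2) fits the maximum light PL relation of equation (5) by ordinary least squares and tests the significance of the log period term with the standard F statistic, which for a single regressor is equivalent to testing the correlation against zero:

import numpy as np
from scipy import stats

# Placeholder data: log10 period and extinction-corrected K_max magnitudes.
logP  = np.array([2.20, 2.30, 2.35, 2.42, 2.50, 2.55, 2.62, 2.70])
k_max = np.array([11.9, 11.5, 11.4, 11.2, 10.9, 10.8, 10.5, 10.3])

# Ordinary least squares fit of equation (5): m_max = a + b log P.
res = stats.linregress(logP, k_max)
print(f"a = {res.intercept:.3f} +/- {res.intercept_stderr:.3f}")
print(f"b = {res.slope:.3f} +/- {res.stderr:.3f}")

# Significance of the log-period term: F = rho^2 (n-2) / (1 - rho^2),
# compared against an F(1, n-2) distribution.
n = len(logP)
rho2 = res.rvalue ** 2
F = rho2 * (n - 2) / (1.0 - rho2)
p = stats.f.sf(F, 1, n - 2)
print(f"rho_hat^2 = {rho2:.3f}, F = {F:.2f}, p = {p:.2e}")

# Dispersion of the fit and the implied rms distance error (Delta ~ 46.1 sigma %).
resid = k_max - (res.intercept + res.slope * logP)
sigma = np.sqrt(np.sum(resid**2) / (n - 2))
print(f"sigma = {sigma:.3f} mag -> Delta ~ {46.1 * sigma:.1f} % in distance")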
PL RELATION RESULTS

The results of our regression fits to equations (4), (5) and (6) are presented in Tables 3 to 6 and illustrated in Figures 1 to 6. Column 1 in each Table lists the type of regression fit, i.e. to mean, maximum or minimum apparent magnitude. Column 2 indicates the type of Mira sample used: C Miras only, O Miras only or both Mira types (denoted 'O+C'), and column 3 gives the number, n, of Miras in each sample. Note that for the mean relations we used the full sample of 29 O Miras, identical to that used in F89, while for the minimum and maximum light relations we used the sample of 28 O Miras for which phase information was available. Columns 4 to 7 give the fitted values of the zero point, a, and slope, b, of the relations with their associated standard errors, σa and σb. Column 8 indicates the dispersion, σ (in magnitudes), of the regression fit, and column 9 gives the percentage root mean square error, ∆, of the corresponding distance indicator which one would derive from the PL relation, i.e. ∆ ≃ 46.1σ%. Finally, column 10 gives the value of $\hat{\rho}$, the partial multiple correlation coefficient computed for the sample data, and column 11 indicates the probability (denoted by 'Prob') that $\hat{\rho}^2$ be equal to (or greater than) its computed value under the null hypothesis that the true value of ρ is equal to zero.

Figures 1 and 2 show scatterplots of mean and maximum magnitude respectively against log period for the oxygen-rich Miras. The fitted regression lines in each waveband are also drawn on the plots. Figures 3 and 4 show the corresponding scatterplots for the carbon Miras in our sample, and Figures 5 and 6 show the scatterplots for the composite sample of C and O Miras. It is evident from Tables 3 to 6 and from Figures 2, 4 and 6 that statistically significant PL relations clearly exist at maximum light in almost all of the cases considered - a fact which is supported quantitatively by the values of the partial multiple correlation coefficient, which are generally different from zero at a very high level of significance. The only clear exception to this trend is the case of the J band maximum light PL relation for the C Miras, which we discuss further below. For the O Miras the K band PL relation has the smallest dispersion at maximum light - about 20% smaller than for the relations in the other wavebands. The maximum light relation for the composite sample of both types of Miras also has the smallest dispersion in the K band. Similar behaviour was found in F89 for the mean light relations, which we also confirm here. It is also clear from Tables 3 to 6 that the dispersion of the minimum light PL relations is considerably larger than that of both the mean light and maximum light relations in all cases. We comment on this in section 5 below. Comparing all of our results for mean light relations with those of F89, we see that our fitted coefficients, standard errors and dispersions are in excellent agreement in all cases.

We can see from Tables 3 to 6 that the dispersions of the maximum light PL relations are smaller than those of the corresponding mean light relations in every case considered, with the sole exception of the 'O+C' relation for bolometric magnitudes. Note also that the standard errors of the fitted regression coefficients are consistently smaller for the maximum light relations. To assess the statistical significance of this result requires some care, however. We cannot simply apply a standard ratio-of-variance F test (c.f. Graybill 1976) to the data, since such a test assumes that the variances are statistically independent. This condition is clearly not satisfied here, as the residuals of our maximum and mean light PL relations are likely to be highly correlated; a failure to account for this correlation would result in underestimating the significance of the reduction in dispersion. We tackle this problem numerically, first computing the correlation coefficient of the mean and maximum light residuals and then, with this correlation coefficient, generating a large number of Monte Carlo simulations to estimate the probability density function of the sample ratio of mean to maximum light dispersion, under the null hypothesis that these dispersions are equal. We then determine the statistical significance of the observed reduction in dispersion by considering the extent of the tail of our estimated probability density function in the standard manner. The results of applying this significance test are given in Table 7. The first and second columns indicate the waveband and type of Mira sample under consideration, and the third column indicates the ratio, R, of the variance at mean light to the variance at maximum light. The fourth column gives r, the sample correlation coefficient of the residuals at mean and maximum light, and the final column indicates the probability (denoted by 'Prob') of obtaining as large (or larger) a value of R under the null hypothesis that the true value of the ratio is equal to unity - i.e. the dispersion at mean and maximum light is identical - and the true correlation coefficient of the mean and maximum light residuals is equal to r.
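A minimal sketch of this Monte Carlo test (ours, with illustrative inputs rather than the Table 7 values) draws correlated Gaussian residual pairs under the null hypothesis and tabulates the resulting distribution of the variance ratio:

import numpy as np

rng = np.random.default_rng(42)

def p_value_variance_ratio(R_obs, r, n, n_sim=100_000):
    """P(R >= R_obs | true ratio = 1, residual correlation = r).

    Under the null, mean- and maximum-light residuals share the same
    dispersion; each simulation draws n correlated pairs and records
    the sample ratio of variances."""
    cov = np.array([[1.0, r], [r, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=(n_sim, n))
    var_mean = x[:, :, 0].var(axis=1, ddof=1)
    var_max  = x[:, :, 1].var(axis=1, ddof=1)
    return np.mean(var_mean / var_max >= R_obs)

# Illustrative inputs: observed ratio R, residual correlation r, sample size n.
print(p_value_variance_ratio(R_obs=1.8, r=0.7, n=28))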
We can see from Table 7 that the ratio, R, is greater than unity in all cases except that of the 'O+C' relation for bolometric magnitudes. The reduction in scatter is least significant for the K band Miras - i.e. the tight PL relation already displayed at mean light is not improved as much by the use of maximum light as in the other wavebands - but is still non-negligible. A significant reduction is seen for both the H and J band relations. Although the dispersion at maximum light is slightly larger than at mean light for the 'O+C' bolometric relation, the increase in dispersion is not statistically significant. Table 7 illustrates the importance of accounting for the correlation between the mean and maximum light residuals: the reduction in dispersion for the C Miras in the J band is marginally smaller than that in the H band, but is marginally more significant because the J band residuals are more highly correlated.

As mentioned above, the J band PL relation for C Miras at maximum light is seen from Figure 4 to be essentially flat. This is confirmed in Table 3, where we see that the fitted coefficient of log period is only −0.02, consistent with zero, and the partial multiple correlation coefficient is not significantly different from zero. A similarly flat relation is seen in Figure 3 for the J band relation at mean light - as previously reported by F89. The existence of an H, K and bolometric magnitude PL relation, at both mean and maximum light, for the C Miras is somewhat more convincing in Figures 3 and 4: all have significantly non-zero partial multiple correlation coefficients and regression coefficients of log period. If we consider only those C Miras with periods greater than 250 days, however, then the H, K and bolometric relations are considerably flatter - i.e. the longer period Miras are more consistent with having constant absolute magnitude. We comment further on this in the next section. Finally, F89 found that the slope of the bolometric magnitude PL relation at mean light was shallower for C Miras than for O Miras. Our results confirm this conclusion and indicate that it is also true - and indeed is considerably more pronounced - at maximum light.

PLC RESULTS

F89 also presented evidence for the existence of PLC relations at mean light for both the carbon and oxygen Miras, based on (J − K) colours. By considering the correlation of the PL relation residuals with (J − K), F89 found evidence of a significant mean colour term for the O Miras at all wavelengths, but for the C Miras the colour term was highly significant only for the J band PLC relation, and was found to be marginal for the K band and bolometric relations. F89 also showed that, where significant, the colour term was intrinsic and could not be attributed to differential reddening. In this paper we have derived PLC relations at mean, maximum and minimum light using both (J − K) and (J − H) colours. We present the results of our regression fits to equations (7) to (9) in Tables 8 to 10 below, with the corresponding results for (J − H) colours in Tables 11 to 13. The columns of these Tables are as in Tables 3 to 6, with two additional columns giving the fitted value and standard error of the colour coefficient, c.
Note that in the case of the PLC relations with (J − K) colours we do not present the K band results, since it is straightforward to show that these are trivially related to those at J band: i.e. aK = aJ, bK = bJ and cK = cJ − 1. Moreover, one may also show that the dispersions of the J and K band PLC relations, and the standard errors of the coefficients, are identical. The J and H band PLC relations based on (J − H) colour are similarly related in a trivial way.

The PLC results presented here correspond to the 'Method a' case presented in F89, i.e. ordinary least squares with the errors in the magnitudes. These solutions do not account for the effect of correlated errors on apparent magnitude and colour excess - an effect treated in detail by e.g. Caldwell & Coulson (1985). Following F89 we conclude that this effect is negligible for these data, since the extinction in the J, H and K bands is very small.

Our results at mean light using (J − K) colours are, as expected, in complete agreement with those of F89. Moreover our conclusions concerning the significance of the colour term are concordant with F89, and indeed are reinforced by considering the value of the sample multiple correlation coefficient, ρ, and associated P value. Clearly the magnitude of the colour term itself, compared with its standard error, also gives an indication of its statistical significance. On this basis, we can see that there is good evidence for a (J − K) PLC relation at mean, maximum and minimum light in almost all cases considered. The only exceptions are the bolometric PLC relation for the C Miras (for which the P values listed in Table 10 are at least several orders of magnitude larger than in other cases) and the K band PLC relation for the C Miras, which - although not listed in the Tables - can be seen to have a marginal colour term at mean, maximum and minimum light by considering the cJ coefficients and using cK = cJ − 1 as noted above. The colour term is highly significant for the J band relations, slightly less significant for the H band relations and least significant (although still clearly present) for the bolometric relations.

For the results using (J − H) colours there is clear evidence of a significant colour term for the J band 'C' and 'O+C' relations and - in view of the values of cJ - also for the H band 'C' and 'O+C' relations. The colour term is also significant for the bolometric PLC relation with the 'O+C' sample. In all other cases, however, there is no evidence for a significant colour term - i.e. the addition of (J − H) colour does not significantly reduce the dispersion of the PL relation.

It is clear from Tables 8 to 13 that in all cases the dispersion of the PLC relations at minimum light is considerably larger than at mean and maximum light, for both (J − K) and (J − H) colours. Comparing the dispersion at mean and maximum light, however, we see that our results are somewhat more ambiguous than was the case for the PL relations considered in the previous section. For (J − K) colours the dispersion at maximum light is in fact slightly larger than that at mean light for the O Miras with J band, H band and bolometric magnitudes, and also for the 'O+C' sample bolometric relation. In the remaining five cases the dispersion at maximum light is smaller than at mean light.
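The trivial relation quoted above between the J and K band coefficients can be written out in one line. Assuming the PLC takes the generic form Mλ = aλ log P + bλ + cλ (J − K)0 (our shorthand for equations 7 to 9, which are not reproduced in this excerpt), and using K = J − (J − K):

```latex
M_K = M_J - (J-K)_0
    = a_J \log P + b_J + (c_J - 1)\,(J-K)_0 ,
```

so that aK = aJ, bK = bJ and cK = cJ − 1, with the J and K band residuals identical star by star - hence the identical dispersions and standard errors. The same substitution with (J − H) in place of (J − K) links the J and H band fits.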
What is noteworthy, however, is the particular success of the J and H band maximum light relations for the C Miras: these are the two cases for which the dispersion at mean light is largest, and the reduction in dispersion at maximum light is found to be about 20-30%. For the PLC relations with (J − H) colour, where the colour term was not significant the ratio of the maximum to mean dispersion was very similar to that for the corresponding PL relations - i.e. the dispersion of the maximum light relation was generally found to be comparable to, or slightly smaller than, that at mean light. This is not surprising, since in these cases the fitted PLC relation shows no statistically significant difference from the PL relation. For the cases where a statistically significant (J − H) colour term was found, on the other hand, the maximum light relations scored two notable successes: the dispersion of the J band relations for the 'C' and 'O+C' samples was reduced by almost 50% and 25% respectively. Note that, as for the (J − K) relations, the largest reduction in dispersion at maximum light occurred for the PLC relations with the largest dispersion at mean light. Note also that for the C Miras the J band PL dispersion at maximum light is already smaller than the corresponding PLC relation at mean light with (J − H) colour, while the maximum light PLC relation reduces the dispersion by almost another factor of two.

Tables 14 and 15 list the results of applying the test introduced in the previous section to the fitted PLC relations, to determine the statistical significance of the reduction (or increase!) in scatter between mean and maximum light. The columns are as in Table 7 above. Table 14 gives the results using (J − K) colours while Table 15 is for (J − H) colours. The results of Tables 14 and 15 confirm that in the cases where the dispersion at maximum light is greater than at mean light (R < 1) the increase in dispersion is never significant even at the 15% level, while the reduction in dispersion at maximum light is in several other cases significant at the 1% level.

DISCUSSION

The principal result of this paper concerns the existence of Mira PL relations at maximum light and the fact that in all cases these relations display less scatter than the corresponding mean light relations, a reduction in dispersion which is statistically significant in the J and H bands. This result is apparent not only in the values of the dispersion derived for the mean and maximum light relations, but is also suggested by the behaviour of some of the outliers in the scatterplots of magnitude against log period. In Figures 3 and 4, for example, there are a small number of outliers, with log periods of around 2.5, in the J and H band mean light relations which are in much better agreement with the fitted regression line at maximum light. We also find that in several cases for the C Miras the dispersion of the maximum light PLC relation is significantly smaller, by up to 50%, than at mean light.

It is obviously important now to ask what is the most likely source of the reduction in dispersion which we have observed at maximum light. Figures 7 and 8 show plots of dereddened (J − K) colour against log period at mean (a) and maximum (b) light, for the C and O Miras respectively.
(Note that 'maximum (J − K)0', as indicated on the axes of the plots in Figures 7 and 8, in fact means the dereddened colour at the phase of maximum light in the J band, which need not be the same as the maximum observed value of (J − K) colour, although the difference is likely to be quite small). Similarly, Figures 9 and 10 show plots of dereddened (J − H) colour against log period for the C and O Miras respectively. Whilst the properties of Mira PLC relations at maximum light will be the focus of future work, we note from these plots that the scatter in (J − K)0 at a given period is very similar at mean and maximum light. This suggests that the range of effective temperatures at given period will not be greatly different at mean and maximum light, as was also claimed by Feast (1995). In the light of our discussion in Section 1, we therefore conjecture that the smaller dispersion of maximum light PL relations compared with mean light PL relations is primarily due to the fact that both maximum luminosity and period depend on the total mass, whereas equilibrium luminosity - and hence mean luminosity - depends strongly on core mass. If the above explanation is correct then - insofar as a considerably larger reduction in dispersion is found in the J and H bands for the C Miras compared with the O Miras - one might suppose this to be due to there being a larger difference between the range of core masses and total masses for C Miras compared with O Miras.

In comparing the values of the regression coefficients obtained in the PL fits at mean and maximum light, the general trend which one observes is as follows. For the C Miras the zero point, b, and slope, a, are found to be respectively smaller and shallower (i.e. less negative) for the maximum light relations in all wavebands. Based on the standard errors of the regression coefficients this systematic difference appears to be quite significant, although we have not carried out a specific statistical test of this hypothesis. For the O Miras, on the other hand, precisely the converse is the case: the zero point and slope of the PL relations at maximum light are found to be respectively larger and steeper (i.e. more negative) than at mean light in all wavebands. Aside from investigating whether the use of the maximum light relations derived in this study leads to distance estimates significantly different from those determined using Mira PL relations at mean light (cf. F89, Whitelock 1995, Feast 1995), it would be interesting to investigate if any systematic difference in the slope and zero point of maximum light relations can be explained in terms of our existing knowledge of the physics of Mira variables. We will address this problem further in a future paper.

In the case of Cepheids it was shown in Simon, Kanbur & Mihalas (1993) that both maximum and minimum light occur as the star is passing through its equilibrium radius. If this were also true for Miras, then equation (3) might also be valid at minimum light, but with Lmax and Tmax replaced by Lmin and Tmin. As we commented above, the Mira PL and PLC relations at minimum light were in all cases found to have a larger dispersion than at mean light.
In view of our discussion in Section 1, perhaps one reason for this result is that minimum luminosity is dependent on the total mass of the star in a different way to maximum luminosity; in other words it may be the case that equation (3) - in its equivalent form - does indeed hold at minimum light, but that when we collapse the equation over the variables log M and log Tmin the resultant PL relation has a larger dispersion than at either maximum or mean light. We intend to investigate further the properties of PL relations at minimum light in future work.

Whilst the mean and maximum light PL relations which we have derived here clearly have practical use in terms of distance estimation, the relations involving bolometric magnitudes are also very interesting for the purpose of better understanding stellar pulsation and evolution. Indeed, the fact that we have established bolometric PL relations for both C and O Miras directly supports the validity of equation (3).

Finally, it is important to comment explicitly on the practical application of the results of this paper - the use of maximum light PL relations for Miras as distance indicators. Aside from the advantage of the small reduction in dispersion which our analysis in this paper has identified, the use of maximum light relations in distance estimation can also be justified on the grounds that one can extend their application to greater distances before the effects of luminosity selection bias become important. In Hendry, Kanbur & Clarke (1997, in prep.) we examine in detail the robustness of PL relations derived for Miras detected close to an apparent magnitude limit, and find that - as one approaches the magnitude limit - the measurement of mean light becomes biased, and is subject to an increasingly large root mean squared error, substantially more quickly than does maximum light. Moreover, for a range of different light curve shapes we find that the identification of maximum light simply with the brightest observed phase point (as was the definition adopted in this paper) remains a robust and reliable estimate of maximum light as one approaches the magnitude limit - provided one has of the order of ten or more sampled phase points - and is certainly considerably more robust than the identification of mean light with the average of the observed magnitudes (a toy illustration of this censoring effect is sketched below). This work suggests that maximum light PL relations can easily be constructed without recourse to exhaustive observing programmes and can therefore prove useful in extending the range and reliability of Mira-based distance indicators.

It would seem to us, therefore, that a priority for future work is to establish the existence of Mira PL relations at maximum light in different stellar environments, such as the SMC and Galactic globular clusters, and to test the uniformity of such relations. Wood (1990) has used equations for the position of the AGB in the HR diagram, together with the period-mean density theorem, to obtain a pulsation equation similar to equation (1) but also incorporating a metallicity dependence. This work is discussed in Feast (1995), which suggests that the available evidence indicates little variation in the mean light Mira PL relations at K or bolometric magnitudes in environments with a range of different metallicities. There is no reason to believe that any metallicity dependence of equation (1) would act differentially between mean and maximum light, although of course any possible effect should certainly be checked observationally. In any case, such a metallicity gradient with environment - if present - would have no bearing on our discussion of the relative dispersion of mean and maximum light PL relations in this paper.
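Returning to the selection-bias argument above: the following toy simulation (ours, for illustration - it is not the Hendry, Kanbur & Clarke calculation, and the amplitude, photometric noise, sampling and magnitude limit are all assumed values) shows the censoring effect in miniature. As the true mean magnitude of a sinusoidal light curve approaches the limit, the average of the surviving points is dragged brightward, while the brightest-observed-point estimate of maximum light barely moves.

```python
# Toy model: sample a sinusoidal light curve at random phases, censor points
# fainter than the survey magnitude limit, and compare the bias of the
# mean-light estimator (average of surviving points) with that of the
# maximum-light estimator (brightest surviving point).
import numpy as np

rng = np.random.default_rng(0)
amp, m_lim, n_phase, n_trials = 0.8, 20.0, 12, 2000   # assumed values

for m0 in (18.0, 18.8, 19.4):                 # true mean magnitude of the curve
    bias_mean, bias_max = [], []
    for _ in range(n_trials):
        phases = rng.uniform(0.0, 1.0, n_phase)
        mags = m0 + amp * np.sin(2 * np.pi * phases) + rng.normal(0.0, 0.05, n_phase)
        seen = mags[mags < m_lim]             # only points brighter than the limit survive
        if seen.size == 0:
            continue
        bias_mean.append(seen.mean() - m0)           # mean light: true value is m0
        bias_max.append(seen.min() - (m0 - amp))     # maximum light: true value is m0 - amp
    print(f"m0 = {m0:4.1f}: mean-light bias {np.mean(bias_mean):+.3f} mag, "
          f"max-light bias {np.mean(bias_max):+.3f} mag")
```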
Some examples of the recent application of distance indicators based on Mira PL relations at mean light include the following. In F89, Mira distances were determined to Galactic globular clusters, thus providing an absolute calibration of RR Lyrae and horizontal branch stars. On the other hand, Whitelock (1995) and references therein used Mira PL relations at mean light to study the dimensions and kinematics of the disk, halo and bulge of the Galaxy.

CONCLUSIONS

In this paper we have demonstrated the existence of PL relations for Miras at maximum light in the J, H and K bands and for bolometric magnitudes. Our results were based on analysis of a sample of oxygen-rich and carbon-rich Miras in the LMC, as previously studied in F89. In the J, H and K bands the PL relations at maximum light have a smaller dispersion for the oxygen-rich Miras than for the carbon-rich Miras, while the converse was found to be true for the PL relation based on bolometric magnitudes. We have shown that for the J and H bands the Mira PL relations at maximum light have a significantly smaller dispersion than their counterparts at mean light. Our results also suggest that C Miras with periods in excess of 250 days have constant mean and maximum absolute magnitude.

Based on similar reasoning to that outlined in KH, we present a theoretical justification for the existence of such maximum light relations. The crucial assumption made in this justification is that the photospheric radius at mean light is roughly equal to the photospheric radius at maximum light, for which there exists some evidence. Amongst other factors - including metallicity and temperature - the dispersion at given period in a mean light PL relation is influenced by both the range of core masses and the range of total masses found in Miras. At maximum light, however, we suggest that - amongst these same other factors - the dispersion at given period is influenced only by the range of total masses, and that it is this fact which is responsible for the smaller dispersion of the maximum light PL relations which we have observed.

In Section 5 we have outlined a number of topics for future work, but in summary it seems clear that the main direction of future work should be the study of larger samples of Miras, in order to investigate the prevalence, uniformity and reliability of maximum light relations in other environments. The relative robustness of maximum light relations when the corresponding mean light relations are pushed close to an apparent magnitude limit makes their application in external galaxies an important and exciting possibility, particularly with the installation of J and H band filters in the NICMOS camera on the newly refurbished Hubble Space Telescope. We are confident that Mira PL relations at maximum light can become a powerful tool for galactic and extragalactic astronomy.

ACKNOWLEDGEMENTS

The authors thank Patricia Whitelock for supplying the Mira observations of F89 in a convenient electronic form, and Shaun Hughes, Tom Lloyd Evans, Norman Simon and Dimitri Mihalas for useful discussions. The authors also thank the anonymous referee for useful comments. MAH acknowledges the PPARC for the award of a Personal Research Fellowship. MAH and SMK acknowledge the use of computer facilities supported by the STARLINK project.
DC acknowledges the assistance of Mrs. Margaret Morris in analysing the data.
Surgical and Systemic Treatment of Hereditary Breast Cancer: A Mini-Review With a Focus on BRCA1 and BRCA2 Mutations

Hereditary breast cancer accounts for 5%-10% of breast cancer cases. The majority of familial cases have been linked to germline mutations in the BRCA1 and BRCA2 genes, though other high penetrance susceptibility genes have also been identified through advances in genomic testing. Optimal surgical treatment for these patients, who are of a younger age, poses several challenges, as it usually involves aggressive therapeutic and risk reducing interventions. At the same time, the therapeutic armamentarium for BRCA1/2 mutation carriers, apart from platinum salts, has been enriched with the addition of poly-ADP ribose polymerase (PARP) inhibitors, with promising outcomes. In this review we provide a succinct and comprehensive overview of the surgical and systemic treatment options for patients with BRCA1/2 mutation related breast cancer and an update on the most recent systemic treatment advances.

INTRODUCTION

Breast cancer (BC) is the most common female malignancy, with more than 2 million cases diagnosed worldwide annually (1). Hereditary syndromes account for approximately 5-10% of cases and are associated with the presence of germline mutations. The majority of hereditary breast cancer cases result from mutations in the BRCA1 and BRCA2 genes, whereas the rest have been linked to less frequent germline mutations in other high penetrance genes such as TP53, STK11, PTEN, CDH1, and PALB2, as well as moderate penetrance genes like ATM and CHEK2 (2). Both BRCA1 and BRCA2 are tumour suppressor genes encoding proteins involved in homologous recombination repair (3). Pathogenic variants in these genes affect 1 in 400 persons in the general population and 1 in 40 in the Ashkenazi Jewish population. They are inherited in an autosomal dominant pattern and carry a lifetime cumulative breast cancer risk of 72% for BRCA1 and 69% for BRCA2 (4). This review will focus on the surgical and systemic treatment of hereditary breast cancer with a particular focus on BRCA1 and BRCA2 mutations.

Surgery on Locoregional Disease

The optimal surgical treatment for operable BC in BRCA1/2 mutation carriers depends on several factors and remains a topic of debate. Although breast conserving surgery (BCS) is the preferred surgical treatment for early stage disease in sporadic breast cancer, its oncological safety in BRCA1/2 mutation carriers has not been extensively studied. A meta-analysis of 10 studies demonstrated a significantly higher risk of ipsilateral breast recurrence (IBR) in BRCA1/2 mutation carriers compared to non-carriers following BCS at a median follow-up greater than 7 years, but no difference for shorter follow-up periods (5). The risk of contralateral breast cancer was also found to be increased in BRCA1/2 mutation carriers (5). Although BCS is associated with a higher IBR risk compared to mastectomy in BRCA1/2 mutation carriers, no difference was found between the two treatment options for overall survival, breast cancer death, or distant recurrence (Table 1) (5-8). Data from a meta-analysis indicate that the risk of IBR in BRCA1/2 mutation carriers who have undergone BCS is reduced with adjuvant chemotherapy (RR 0.51, 95% CI 0.31-0.84) and oophorectomy (RR 0.42, 95% CI 0.22-0.81) (5). BCS could be considered a safe and reasonable option for BRCA1/2 mutation carriers, but this should be discussed on an individual basis and further factors need to be taken into account.
These include the patient's understanding of the increased risk of an ipsilateral new primary breast cancer, with all potential emotional implications, as well as their ability to undergo appropriate breast surveillance. International guidelines recommend that early breast cancer patients carrying mutations in moderate penetrance breast cancer susceptibility genes should be offered BCS if appropriate. However, patients carrying TP53 germline mutations should avoid BCS followed by radiation, as they are at high risk of developing radiation-induced malignancies such as angiosarcoma (13).

Risk Reducing Mastectomy

The term "risk reducing" has been deemed more appropriate than "prophylactic" in recent times, as no mastectomy can remove all breast tissue. Several studies demonstrated a reduction in the risk of breast cancer by approximately 95% in BRCA1/2 mutation carriers who underwent bilateral risk reducing mastectomy (BRRM) in combination with oophorectomy and by approximately 90% in those with intact ovaries (14-17). A recent systematic review confirms the benefit of BRRM in reducing both incidence and mortality from breast cancer in high risk patients, such as BRCA1/2 carriers, but calls for rigorous prospective studies due to methodological flaws of the existing literature (18). Data for contralateral risk reducing mastectomy (CRRM) in patients who have had breast cancer in one breast are less conclusive, as existing studies show a reduction in the incidence of contralateral breast cancer but no definitive survival benefit (Table 1) (9-12, 18).

For high risk patients such as BRCA1/2 mutation carriers, international guidelines recommend RRM with appropriate counselling on risks and benefits. When assessing the risk of developing contralateral breast cancer (CBC), the following factors need to be taken into account: age at diagnosis of primary breast cancer, family history, ability to undergo indicated surveillance imaging, prognosis from this or other malignancies, comorbidities and life expectancy (13, 19). RRM cannot completely eliminate the risk of breast cancer and can have a negative impact on body image and quality of life due to potential complications such as multiple surgeries, chronic pain, sexual dysfunction and poor cosmetic outcomes (20). Women considering this procedure should be well informed and weigh the risks and benefits against alternatives such as risk reducing bilateral salpingo-oophorectomy, chemoprevention and intensive screening. For women who wish to avoid or delay RRM, MRI-based breast screening is a reasonable option (19, 21). For patients who undergo RRM, skin sparing mastectomy with or without preservation of the nipple-areolar complex has been found to be a safe option for BRCA carriers while achieving better cosmesis (22, 23).

There is a lack of data in the existing literature on the risk of CBC in breast cancer patients carrying mutations in cancer susceptibility genes other than BRCA1/2. Limited data exist for the CHEK2 1100delC frameshift mutation, which is associated with a 3-fold increase in the risk of CBC (24). Decisions on CRRM for patients with moderate risk mutations should not be extrapolated from existing data on BRCA1/2, but should be balanced on several factors (age at diagnosis of primary breast cancer, family history, ability to undergo surveillance imaging) and involve appropriate patient counselling (13).
Risk Reducing Bilateral Salpingo-Oophorectomy

Risk reducing bilateral salpingo-oophorectomy (rrBSO) is recommended for female BRCA1/2 carriers who have completed childbearing and should be completed by age 35 to 40 for BRCA1 carriers, 40 to 45 for BRCA2 carriers, or earlier according to the patient's relevant family history (25). It has been demonstrated that rrBSO reduces the risk of ovarian cancer by 80% and all-cause mortality by 68% in female BRCA1/2 carriers (26, 27). The beneficial effect of rrBSO on breast cancer risk reduction has also been assessed, but current data are less conclusive. Some prospective studies confirmed that rrBSO reduces BC risk for both BRCA1 and BRCA2 carriers (25, 28). However, a large case-control study showed a benefit of rrBSO only for BRCA1 carriers when performed before the age of 40, while a more recent study identified a benefit only for BRCA2 carriers when performed prior to 50 years of age (29). Oophorectomy has been associated with a significant decrease in the risk of IBR and CBC (5).

SYSTEMIC TREATMENT

Germline mutations of the BRCA1 and BRCA2 genes decrease the capacity of the cell to repair double strand breaks (DSBs), as both genes are key elements of homologous recombination (HR), one of the two main mechanisms of DSB repair (30, 31). This formed the basis for the development of new therapeutic strategies and novel treatments for this specific breast cancer patient subgroup (Table 2).

Platinum Salts

Since the introduction of cisplatin in the 1970s, platinum compounds have been a cornerstone in the treatment of various tumour types. Platinum agents form intra-strand adducts by binding with the purines, leading to DSBs. This triggers various repair mechanisms, including homologous recombination (41). Consequently, cells with HR deficiency can be particularly sensitive to platinum compounds (42, 43). In a small phase II open label study, 20 BRCA1 mutation carriers with metastatic breast cancer (mBC) received cisplatin 75 mg/m2 on a 3-weekly basis, with 35% achieving partial response and 45% complete response with an acceptable toxicity profile (44). In the phase II TBCRC 009 trial, 86 previously treated triple negative mBC patients received either cisplatin or carboplatin. Response rates in the BRCA1/2 mutation carrier subgroup were significantly higher than in the total study population (54% versus 26%) (45). The triple negative breast cancer trial (TNT) was the largest trial examining the role of platinum compounds in the treatment of triple negative and BRCA1/2 mutated mBC patients. In this phase III study, 376 mBC patients were randomised to receive first line chemotherapy with carboplatin or docetaxel. In the BRCA1/2 mutation subgroup the overall response rates were higher for the carboplatin group (68% vs 33%). Similarly, PFS was also improved in the BRCA1/2 mutation carriers who received carboplatin (6.4 vs 4.4 months) (32).

The use of platinum compounds has also been assessed in the neoadjuvant setting. In 2010, Byrski et al. reported a pathological complete response (pCR) rate of 83% for women with BRCA1 positive BC treated with neoadjuvant cisplatin (33). This was further echoed in the findings of a single arm study including 107 BC patients carrying a BRCA1 mutation who were treated with 4 cycles of neoadjuvant chemotherapy, with 61% achieving pCR (46).
In GeparSixto, a phase II randomised trial, triple negative stage II-III breast cancer patients were given anthracycline and taxane based neoadjuvant chemotherapy with or without carboplatin (47). In a secondary analysis, BRCA1/2 mutation carriers did not gain any additional benefit in terms of pCR from the addition of carboplatin (65.4% vs 66.7%), with a similar impact on DFS. On the contrary, carboplatin conferred a significant improvement in response rates in non-carriers (34). In the phase II CALGB 40603 trial, although the addition of carboplatin to neoadjuvant chemotherapy achieved superior pCR rates in patients with stage II-III triple negative BC, an improvement in long term survival outcomes was not demonstrated (48). Results from the recent randomised phase II INFORM trial demonstrated that, in BRCA1/2 carriers with HER2 negative stage I-III BC, neoadjuvant single agent cisplatin did not achieve better pCR compared to doxorubicin and cyclophosphamide (AC) (35). All things considered, the use of platinum compounds as part of neoadjuvant chemotherapy does not clearly improve the rates of pCR in breast cancer patients carrying BRCA1/2 mutations.

PARP Inhibitors

The concept that some genes can be "synthetically lethal" has been well known since early preclinical studies. Two genes are synthetically lethal when simultaneous inactivation of both leads to cell death, whereas inactivation of either alone does not. As a result, pharmacological targeting of one gene in tumours already carrying a mutation in its synthetically lethal partner is a tempting avenue for the development of new anticancer drugs (49). Under this scope, the inhibition of single strand DNA break repair with inhibitors of the enzyme poly (ADP) ribose polymerase (PARP), in cells with known homologous recombination (HR) deficiency, can result in cell death (50). Over the past 6 years multiple PARP inhibitors have been approved for the treatment of ovarian cancer (51).

Olaparib is the PARP inhibitor that has been studied most extensively in breast cancer patients with BRCA1/2 mutations. In an early phase clinical trial, olaparib showed efficacy in advanced solid tumours; 22 patients had breast cancer, 9 of whom were BRCA1/2 mutation carriers (52). In a proof of concept trial, 54 pretreated metastatic breast cancer patients with BRCA1/2 mutations were treated with olaparib 400 mg twice daily (BD) or 100 mg BD. The overall response rate was 41% in the 400 mg BD arm and 22% in the 100 mg BD cohort, with an acceptable toxicity profile (53). In another phase II basket trial, 62 women with advanced breast cancer received olaparib. An objective response was achieved in 13% of patients, and stable disease for more than 8 weeks was observed in 47% (54). The ORR was lower in patients with previous exposure to platinum compounds, suggesting that there is cross-resistance with PARP inhibitors. In the randomised open label phase III OlympiAD trial, olaparib 300 mg BD monotherapy was compared with standard chemotherapy (eribulin, capecitabine, gemcitabine) in 302 patients with metastatic, HER2 negative, BRCA1/2 related breast cancer. All patients had received anthracycline and taxane based chemotherapy. Median progression free survival was significantly improved in the olaparib arm (7 months vs 4.2 months). The response rates were 59.9% for the olaparib group and 28.8% for the chemotherapy group (36). Of note, olaparib was not compared to cisplatin or carboplatin.

Talazoparib is a potent PARP inhibitor which has been studied for the treatment of BRCA1/2 mutation related breast cancer.
In an early clinical trial, talazoparib showed promising activity in BRCA1/2 mutation related solid tumours, including patients with breast cancer (55). EMBRACA was a phase III open label clinical trial which randomised 431 metastatic breast cancer patients with germline BRCA1/2 mutations to talazoparib or physician's choice chemotherapy. Median PFS was significantly improved in the talazoparib arm (8.6 months vs 5.6 months) (37). ABRAZO was a phase II trial assessing the efficacy of talazoparib in germline BRCA1/2 mutant breast cancer patients with a previous response to platinum-based chemotherapy or with 3 or more previous lines of cytotoxic treatment, and it demonstrated promising anti-tumour activity (56). Talazoparib has also been tested in the early breast cancer setting. After the promising results of a feasibility study, in which 2 months of neoadjuvant talazoparib before the initiation of standard neoadjuvant chemotherapy produced a median decrease in tumour size of 88% (57), a separate pilot study was organised. Twenty patients with germline BRCA1/2 mutant HER2 negative breast cancer received 6 months of neoadjuvant treatment with talazoparib before proceeding to surgery. Pathological complete response was achieved in 53% of the patients, with acceptable toxicity (58).

Another PARP inhibitor, rucaparib, has been evaluated for the treatment of patients with metastatic breast cancer. In a phase II, open-label, multicentre trial of rucaparib in BRCA1/2 mutation carriers with advanced breast or ovarian cancer, a range of dosing schedules, safety and tolerability were assessed. The treatment schedule included intravenous and subsequently oral rucaparib. On the intravenous only schedule the response rate was only 2%, compared with 15% on the continuous oral schedule. The authors concluded that a continuous dosing schedule is required to achieve an optimal response (38).

Veliparib has also been tested in germline BRCA1/2 mutation carrier breast cancer patients. In a phase II trial, veliparib was given as monotherapy at 400 mg BD and, at the time of progression, carboplatin at a dose of AUC5 was added. The partial response rate was 17% for BRCA1 and 23% for BRCA2 mutation carriers who had at least 4 cycles of follow-up (59). Recently the results of the phase III BROCADE3 trial were presented. In this trial, 509 germline BRCA1/2 mutation carriers with metastatic breast cancer were randomised 2:1 to receive paclitaxel/carboplatin plus intermittent veliparib or paclitaxel/carboplatin plus placebo. Median PFS was improved by 1.9 months (14.5 vs 12.6 months) (39).

The results of a phase II open label trial of niraparib in combination with pembrolizumab were recently announced (40). In this study, 55 women with triple negative metastatic breast cancer were treated with niraparib at a dose of 200 mg once daily combined with pembrolizumab 200 mg every 3 weeks. Fifteen patients had a somatic or germline BRCA1/2 mutation, with 7 achieving partial response (47%). There are no data to support the use of systemic treatments in patients with moderate-risk breast cancer susceptibility mutations. This is currently being investigated in a phase II clinical trial exploring the effectiveness of olaparib in mBC patients with somatic or germline mutations in DNA repair genes. Preliminary data showed efficacy in patients with somatic BRCA1/2 and germline PALB2 mutations, but not in those with ATM or CHEK2 mutations (60).

CONCLUSION

Treating hereditary breast cancer entails more challenges than sporadic cases.
High risk patients such as BRCA1/2 germline mutation carriers present at a young age, and their optimal surgical management remains an individualised and debated area. BRCA1/2 mutation carriers face more aggressive surgical interventions for therapeutic and risk reducing purposes due to their high risk of developing primary or contralateral breast cancer. Breast conserving surgery, as well as skin sparing mastectomies with or without preservation of the nipple-areolar complex, have been proven to be safe and achieve better cosmesis. Selecting the best surgical approach for this patient population requires taking into account several factors, including the patient's genetic risk, family history and previous BC biology, as well as the patient's own preferences.

Due to defects in homologous recombination, BRCA1/2 related BC is highly susceptible to treatment with platinum compounds. Several clinical trials demonstrated higher response rates with platinum in BRCA1/2 mutation carriers with metastatic BC. However, this finding was not replicated in the neoadjuvant setting, where an additive benefit of platinum compounds in achieving pCR has not been demonstrated for BRCA1/2 mutation carriers. The therapeutic landscape of BRCA1/2 related breast cancer has been enriched with the addition of PARP inhibitors, which have led to improvements in survival outcomes. Olaparib and talazoparib have already gained regulatory approval, while others, such as niraparib, rucaparib and veliparib, are undergoing clinical trial assessment. Combinatorial strategies involving PARP inhibitors with chemotherapy or immunotherapy are also under investigation and hold promise for the future management of BRCA1/2 related breast cancer.

AUTHOR CONTRIBUTIONS

All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
Sinking CO2 in Supercritical Reservoirs

Abstract

Geologic carbon storage is required for achieving negative CO2 emissions to deal with the climate crisis. The classical concept of CO2 storage consists in injecting CO2 in geological formations at depths greater than 800 m, where CO2 becomes a dense fluid, minimizing storage volume. Yet CO2 has a density lower than that of the resident brine and tends to float, challenging the widespread deployment of geologic carbon storage. Here, we propose for the first time to store CO2 in supercritical reservoirs to reduce the buoyancy-driven leakage risk. Supercritical reservoirs are found at drilling-reachable depth in volcanic areas, where high pressure (p > 21.8 MPa) and temperature (T > 374°C) imply CO2 is denser than water. We estimate that a CO2 storage capacity in the range of 50-500 Mt yr−1 could be achieved for every 100 injection wells. Carbon storage in supercritical reservoirs is an appealing alternative to the traditional approach.

Introduction

Carbon Capture and Storage (CCS) is envisioned as a key technology to accomplish net negative carbon dioxide (CO2) emissions during the second half of the century and meet the COP21 Paris Agreement targets on climate change (Bui et al., 2018; Intergovernmental Panel on Climate Change [IPCC], 2018). However, CCS should overcome two main hurdles, namely, the risks of induced seismicity (Vilarrasa & Carrera, 2015; Zoback & Gorelick, 2012) and CO2 leakage (Lewicki et al., 2007; Nordbotten et al., 2008; Romanak et al., 2012), before its widespread deployment takes place. Proper site characterization, monitoring, and pressure management should allow minimizing the risk of perceivable induced seismicity in Gt-scale CO2 injection (Celia, 2017; Rutqvist et al., 2016; Vilarrasa et al., 2019). The storage formations considered to date include deep saline aquifers, depleted oil and gas fields, and unmineable coal seams, in which CO2 stays in supercritical conditions with a relatively high density that is nevertheless lower than that of the resident brines (Hitchon et al., 1999). Thus, the risk of CO2 leakage, although low (Alcalde et al., 2018), may be present for up to millions of years, until all CO2 becomes dissolved into the resident brine or mineralized (Benson & Cole, 2008).

A few concepts have been proposed to date to reduce the risk of CO2 leakage. These concepts consist in either promoting fast mineralization or storing CO2 already dissolved in the resident brine. Regarding rapid CO2 mineralization, injecting CO2 in shallow basaltic rock allows quick mineralization thanks to the favorable chemical composition of the host rock, although leakage through buoyancy remains a major concern in the absence of low-permeability caprocks or whenever the caprock integrity is compromised (Gislason & Oelkers, 2014). Another storage rock for mineralization could be peridotite, in which carbonation occurs naturally when exposed to atmospheric CO2 (Kelemen & Matter, 2008). Peridotite is rare at shallow depths, and its total capacity for CO2 storage is in the order of Gt, provided that the rock is massively hydraulically fractured to reach all the available mineral. Regarding dissolved CO2 storage, the leakage risk is mitigated because brine is heavier when it is CO2 saturated (Burton & Bryant, 2009; Sigfusson et al., 2015). CO2 dissolution can be performed either on the surface (Burton & Bryant, 2009) or at reservoir depth (Pool et al., 2013).
To balance the injection and pumping energetic cost, geothermal heat can be recovered, and electricity could even be produced if the temperature is high enough (Pool et al., 2013). However, this storage concept has the drawback that the CO2 injection capacity is limited by CO2 solubility in the brine, which is around 4% at 60°C. Such solubility leads to a storage of roughly 0.1 Mt of CO2 per year and per doublet for a circulating brine flow rate of 80 L s−1, that is, 2.5 Mt yr−1 of water being pumped and reinjected. Thus, very large volumes of brine would need to be circulated - a scenario that makes injection of dissolved CO2 feasible only for small-scale decentralized CO2 storage. Overall, the alternatives that have been proposed to reduce the risk of CO2 leakage entail a limited storage capacity per well with respect to conventional CO2 injection in free phase, which diminishes their attractiveness.

To overcome this limitation, we propose an innovative CO2 storage concept that reduces the CO2 leakage risk, does not require the presence and integrity of a caprock, and maintains a high storage capacity per well. This concept consists in storing CO2 in free phase in supercritical reservoirs, that is, reservoirs where water is in a supercritical state. Supercritical reservoirs are found in the deeper part of volcanic areas (depth > 3 km), where the pressure, p, and temperature, T, of the pore water are likely to exceed its critical point (p > 21.8 MPa and T > 374°C for pure water). At water's supercritical conditions an interesting situation occurs: CO2 density is higher than that of water, and CO2 thus sinks. Consequently, a low-permeability caprock is not needed in deep volcanic areas. Injecting CO2 into deeper and hotter reservoirs is a new concept that we propose and deem possible in the light of recent achievements in deep drilling in volcanic areas demonstrated at the IDDP-2 project, in which a 4.5-km-deep well was drilled in the Reykjanes volcanic area, Iceland, reaching supercritical water conditions (Friðleifsson et al., 2017).

We examine the potential of storing CO2 in deep volcanic areas where the resident water is in a supercritical state. First, we analyze the plausible injection conditions at the wellhead that permit injecting CO2 at a reasonable compression cost. Next, we explore the CO2 sinking potential and quantify the CO2 plume shape and injectivity. Finally, we estimate the injection rates that could be achieved and discuss the worldwide CO2 storage potential in deep volcanic areas.

Water and CO2 Equation of State

The equation of state (EOS) of water and CO2 is computed via the C++ library CoolProp (Bell et al., 2014), available at http://www.coolprop.org/. CoolProp employs the Span and Wagner (1996) EOS of CO2, which is valid up to 800 MPa of pressure and 1100 K of temperature, and the Scalabrin et al. (2006) viscosity model. The EOS of water is valid up to 1 GPa of pressure and 2000 K of temperature and is taken after Wagner and Pruß (2002), which is based on the IAPWS Formulation 1995. The viscosity of water is taken after Huber et al. (2009).

Temperature, Pressure, and Density Profiles Along the Wellbore

We have implemented an explicit scheme to compute the variation of the fluid properties with depth along the wellbore.
During CO2 injection, the cold fluid quenches the well in a relatively short time (days to months), so that at equilibrium a colder annulus forms around the well, hindering heat transfer from the surrounding rock, and the injection process becomes adiabatic (Pruess, 2006). The enthalpy is fixed at the wellhead conditions of pressure and temperature, h(z0) = f(p(z0), T(z0)), and the CO2 density is evaluated with CoolProp functions along the discretized (n = 1,000 intervals) wellbore depth as a function of temperature and pressure, ρ(zi) = f(p(zi), T(zi)). At each depth increment i + 1, the pressure increase is given by p(zi+1) = p(zi) + g ρ(zi)(zi+1 − zi), where g is the gravitational acceleration, and T(zi+1) is calculated assuming constant enthalpy, h(zi+1) = h(z0). To compute the initial in situ conditions of the resident water in the reservoir, the weight of the water column down to the corresponding depth is calculated assuming thermal equilibrium with the geothermal gradient; hence the only difference with the described procedure is that T(zi) is known a priori.

CO2 Plume Calculations

We use both analytical and numerical solutions to compute the CO2 injectivity (the ratio between flow rate and wellhead pressure) and the plume geometry (see the supporting information [SI] for more details). For the analytical solution, we use the Dentz and Tartakovsky (2009) solution with the correction of Vilarrasa et al. (2010) to incorporate CO2 compressibility effects. The CO2 plume evolution is computed for a specific injection scenario of temperature and pressure that is deemed representative of the application. We assume an initial pore fluid pressure of 34 MPa, a temperature of 500°C and a pressure buildup at the wellhead of 10 MPa in isothermal conditions. The analytical solution is valid for a confined aquifer scenario, which we have assumed to be 500 or 1,000 m thick. The hypothesis of a confined aquifer represents a lower bound case in terms of injection rate: the structural geology at depth in volcanic areas is quite uncertain, and low-permeability structures could be represented by faults, chemically altered layers or magmatic intrusions, but could equally be absent.

Injection Conditions in the Wellbore

CO2 downhole pressure and temperature conditions are constrained by limiting reservoir cooling and by ensuring an adequate flow rate through sufficient pressure buildup. Assuming wellbore quenching during continuous injection, the injection temperature and pressure at depth depend on the CO2 wellhead temperature and pressure (Figures 1 and S1). According to the EOS of CO2, its density is a function of both temperature and pressure, and the adiabatic compression generates an increase in CO2 temperature with depth (inset in Figure 1). The density profile, in turn, determines the weight of the fluid column, which translates into a pressure increase with depth (Figure S1). At 5 MPa of wellhead pressure, the downhole conditions depend only mildly on the wellhead temperature. CO2 is strongly heated by compression along the wellbore because of its high compressibility as it transitions from gas to supercritical fluid (the critical point of CO2 is T = 31.04°C and p = 7.39 MPa) and reaches the reservoir at approximately 100°C and 15-17 MPa, a pressure lower than that of the reservoir, which prevents CO2 from flowing into the rock.
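The explicit march just described is compact enough to sketch in full. The snippet below is our illustration using CoolProp's Python bindings (PropsSI) rather than the C++ library, and the wellhead values are arbitrary examples, not a site design:

```python
# Adiabatic (constant-enthalpy) march of CO2 down the wellbore: the enthalpy is
# frozen at its wellhead value and the pressure grows with the weight of the
# CO2 column, following the explicit scheme described in the text.
from CoolProp.CoolProp import PropsSI

g, depth, n = 9.81, 4500.0, 1000            # gravity [m s-2], well depth [m], intervals
p = 10.0e6                                   # wellhead pressure [Pa] (assumed)
T = 273.15 + 20.0                            # wellhead temperature [K] (assumed)
h0 = PropsSI('H', 'P', p, 'T', T, 'CO2')     # enthalpy fixed at its wellhead value [J/kg]

dz = depth / n
for _ in range(n):
    rho = PropsSI('D', 'P', p, 'T', T, 'CO2')   # CO2 density at the current level
    p += g * rho * dz                            # hydrostatic pressure increment
    T = PropsSI('T', 'P', p, 'H', h0, 'CO2')     # isenthalpic temperature update

print(f"downhole conditions: p = {p/1e6:.1f} MPa, T = {T - 273.15:.1f} degC")
```

For a resident-water column the same loop applies, except that T(zi) is prescribed by the geothermal gradient instead of being recovered from the enthalpy.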
At a wellhead pressure slightly above the critical pressure (see 7.5 MPa in Figure 1), the downhole conditions depend strongly upon the wellhead temperature because of phase transition phenomena. While CO2 is in its supercritical phase when injected warmer than its critical temperature, CO2 is in the liquid phase for cooler injection temperatures and reaches the reservoir with higher pressure and lower temperature because of the higher density of the liquid compared with the gas or supercritical phases. A similar situation occurs when the wellhead pressure equals 10 MPa. At 20 MPa of wellhead pressure, the downhole conditions exhibit small changes between wellhead and downhole temperature because CO2 density changes are small at such high pressure.

Downhole overpressure is necessary to ensure that CO2 enters into and flows within the reservoir; if we assume a reservoir pore fluid pressure of 34 MPa, as at IDDP-2 (Friðleifsson et al., 2017), the downhole pressure should not fall below approximately 40 MPa. For example, to achieve such downhole pressure, the wellhead temperature should not exceed 30°C for a wellhead pressure of 7.5 MPa. We can limit reservoir cooling only by injecting at high wellhead pressure and temperature, which implies a high energetic cost.

Figure 1. Each curve shows the pressure, pdown, and temperature, Tdown, conditions at the depth of injection (4.5 km) for several wellhead pressures and as a function of wellhead temperature, Tup. Injecting CO2 at a higher wellhead temperature implies that it reaches the reservoir depth with a lower pressure: in order to ensure injectivity into the rock formation, a minimum downhole pressure threshold should be guaranteed, which can be achieved by increasing the wellhead pressure. The sharp transition in the curves corresponding to a wellhead pressure of 7.5 MPa is connected to the phase transition from liquid to supercritical close to the critical point, around which abrupt changes in density take place. The inset displays the evolution of CO2 pressure and temperature along the wellbore depth for two different cases, indicated by points in the main figure (colors corresponding to two different wellhead conditions). Because of the adiabatic hypothesis, the heating of CO2 is a consequence of the pressure increase along the wellbore.

CO2 Sinking Potential

Above the critical point of water, both fluids are in the supercritical phase and CO2 becomes denser than water, at increasingly higher pressure as temperature increases (Figure 2). The black solid lines in Figure 2 indicate the pressure and temperature conditions reached by a hydrostatic water column at several depths, taking into account a range of geothermal gradients typical of volcanic areas, indicated with dotted lines. Figure 2 also shows the CO2 injection conditions for a wellhead pressure of 10 MPa and several wellhead temperatures, along with the estimated in situ conditions of IDDP-2 of 34 MPa and 500°C (Friðleifsson et al., 2017). For a wellhead pressure of 10 MPa, the maximum wellhead temperature that enables CO2 injection is approximately 40°C. At higher wellhead temperature, the CO2 density along the wellbore is too small to yield a downhole pressure higher than that of the reservoir. Thermal exchange heats up CO2 as it flows through the reservoir, and the CO2 temperature and pressure equilibrate to those of the reservoir at a given distance from the injection point.
The starting and end points of the path (yellow line in Figure 2) in the phase diagram depend upon the reservoir initial conditions and the wellhead injection pressure and temperature. Following our assumptions, the optimum in terms of CO2 sinking potential corresponds to geothermal gradients between 90 and 120 K km−1 and depths > 5 km.

CO2 Plume and Injectivity

The analytical solution of Dentz and Tartakovsky (2009), with the correction of Vilarrasa et al. (2010) applied to account for CO2 compressibility effects and accurately compute the CO2 density within the plume, estimates a downward CO2 plume (Figure 3a). We consider a 10-year injection of CO2 over 500- and 1,000-m-thick reservoirs, assuming a pressure buildup of 10 MPa in a water-saturated reservoir initially at p = 34 MPa and T = 500°C. The extension and shape of the plume are a function of the reservoir permeability and thickness, with its maximum located in the lower part of the reservoir. The maximum extension of the downward plume spans almost 2 orders of magnitude for a permeability range of 3 orders of magnitude, from approximately 2.5 × 10^2 m for the least permeable case to approximately 1.0 × 10^4 m for the most permeable one. The achievable mass flow rate is also proportional to the reservoir permeability and thickness and ranges from 0.0057 to 4.4 Mt yr−1 for a 500-m-thick reservoir and from 0.012 to 8.7 Mt yr−1 for a 1,000-m-thick reservoir.

Figure 2. Density difference map between water and CO2. The figure shows the density difference between water and CO2 as a function of pressure (up to 60 MPa) and temperature (up to 800°C). Positive (blue) values indicate that CO2 has a lower density than water, which leads to CO2 buoyancy, and negative (red) values indicate that CO2 has a higher density than water, leading to sinking potential in the reservoir. The downhole conditions of IDDP-2 are a temperature of 500°C and a pressure of 34 MPa, which would lead to CO2 sinking potential. The dotted black lines indicate the p-T conditions of a hydrostatic water column for a variety of geothermal gradients, and the solid black lines are isodepth lines for the same case. The trajectories on the left-hand side indicate CO2 injection conditions at the reservoir for several wellhead temperatures and for a wellhead pressure of 10 MPa. The yellow line connects the downhole conditions (buoyant) of a hypothetical injection at IDDP-2 with the CO2 conditions (sinking) within the reservoir far from the injection well.

The gravity number N (Equation S5), the ratio of gravity to viscous forces, is computed for the near field (T = 50°C and p = 44 MPa), that is, close to the injection point, and for the far field (T = 500°C and p = 34 MPa), that is, the initial reservoir conditions. At the near field, water is liquid with ρw = 1,006.3 kg m−3 and CO2 is supercritical with ρc = 940.2 kg m−3, which yields |Δρ| = 66.2 kg m−3 and favors CO2 buoyancy. At the far field, both fluids are supercritical, with ρw = 138.1 kg m−3 and ρc = 219.2 kg m−3, which yields |Δρ| = 81.0 kg m−3 and favors CO2 sinking. The gravity number was evaluated for both the 500- and 1,000-m-thick reservoirs at the near-field and far-field conditions; according to its values, in the near wellbore region viscous forces dominate or are of the same order as gravity forces, whereas far enough from the injection point buoyant forces become predominant.
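These contrasts are easy to cross-check against the same EOS library used in this work; the snippet below (ours, not the paper's code) reproduces the quoted near-field and far-field densities:

```python
# Water and CO2 densities at the near-field and far-field conditions quoted in
# the text, evaluated with CoolProp's high-level interface.
from CoolProp.CoolProp import PropsSI

conditions = {
    "near field": (44.0e6, 273.15 + 50.0),   # p [Pa], T [K]
    "far field":  (34.0e6, 273.15 + 500.0),
}
for name, (p, T) in conditions.items():
    rho_w = PropsSI('D', 'P', p, 'T', T, 'Water')
    rho_c = PropsSI('D', 'P', p, 'T', T, 'CO2')
    verdict = "CO2 sinks" if rho_c > rho_w else "CO2 floats"
    print(f"{name}: water {rho_w:6.1f} kg/m3, CO2 {rho_c:6.1f} kg/m3 -> {verdict}")
```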
Although the near-field conditions would favor CO2 buoyancy, viscous forces are in the same range as buoyant ones, and thus CO2 buoyancy does not take place, or is limited, in very thick reservoirs. Far from the injection well, buoyant forces dominate over viscous forces, and since CO2 has a higher density than water, CO2 tends to sink (Figure 4). Finite element analyses of CO2 injection further confirm that an uprising CO2 plume does not develop near the injection well and that CO2 sinks once it reaches thermal equilibrium with the rock (Figures 3b and 4). The cooled region concentrates around the injection well (Figure 3b) and, even though CO2 is lighter than water within this cold region, no upward flow occurs due to buoyancy. Thus, CO2 sinks, leading to safe storage despite cooling around the injection well.

Challenges

The coupling between the wellbore and the reservoir is important in storage formations with high temperature, like deep volcanic areas. The conflicting objectives of limiting cooling to minimize the risk of inducing seismicity in the long term (Parisio, Vinciguerra, et al., 2019) and of minimizing compression costs by lowering the wellhead pressure can only be resolved with accurate optimization procedures. Since CO2 density decreases with temperature, the lower the injection temperature, the higher the downhole injection pressure (Figure 2). Thus, a trade-off arises between the injection pressure and temperature at the wellhead. The optimum injection conditions are site specific and should be computed according to the characteristics of each site. The pressure and temperature injection conditions at the wellhead are coupled to the injectivity of the reservoir and thus to the pressure buildup required downhole to inject a given mass flow rate. Given the highly nonlinear nature of flow along a wellbore (Lu & Connell, 2014), the wellhead injection conditions will be determined by the injection mass flow rate and the reservoir transmissivity.

Injecting relatively cold CO2 (T = 20°C) reduces the compression costs because of its higher density (Figure 2). The most energetically efficient option is to inject CO2 in the liquid state, that is, at T < 31.04°C, a solution that bears the consequence of cooling down the rock in the vicinity of the injection well. Cooling-induced thermal stress is inversely proportional to the injection temperature and is likely to enhance injectivity (Yoshioka et al., 2019), but also microseismicity by approaching failure conditions: operators may therefore prefer to inject CO2 at a relatively high temperature (40-60°C). Heating CO2 entails large energetic costs (Goodarzi et al., 2015), which in volcanic areas could be minimized by extracting heat from existing geothermal wells. Injecting hot CO2 also increases the compression cost because the higher the injection temperature, the higher the required wellhead injection pressure.

Figure 3. (a) Analytical solution (Dentz & Tartakovsky, 2009; Vilarrasa et al., 2010) of the CO2 plume position for a 10-year injection into 500-m-thick (solid lines) and 1,000-m-thick (dotted lines) reservoirs. We assume a fixed overpressure of 10 MPa at injection, isothermal injection, an initial reservoir temperature and pressure of 500°C and 34 MPa, respectively, and a range of reservoir permeability, k, that spans 3 orders of magnitude. The mass flow rate, Qm, is a function of the reservoir permeability and thickness. The analytical solution predicts a sinking profile due to the density difference between water and CO2. (b) Simulation results after 10 years of injecting 1.0 Mt yr−1 of CO2 at 50°C through 500 m of open well centered in a 2,000-m-thick reservoir. The extent of the cooled region has a limited size compared to the CO2 plume and does not affect its sinking tendency.
The energy spent to compress the CO2 should come from a renewable source to comply with the objective of reducing CO2 emissions. Unlike solar or wind resources, which provide time-fluctuating power output, geothermal energy best fits the purpose of providing the time-constant heat supply required for continuous CO2 injection. Combining geothermal energy production with geologic carbon storage is of particular interest to utilize the injected CO2 and generate a synergy that maximizes the cut of CO2 emissions in volcanic areas. Exploiting a volcanic area for both geothermal and CO2 storage purposes would foster subsurface characterization, reducing uncertainty and identifying the most suitable areas for both geothermal production and geologic carbon storage. CO2 could eventually be used as the working fluid once the CO2 plume has grown enough (Randolph & Saar, 2011).

Figure 3. (a) Analytical solution (Dentz & Tartakovsky, 2009; Vilarrasa et al., 2010) of the CO2 plume position for a 10-year injection into 500-m-thick (solid lines) and 1,000-m-thick (dotted lines) reservoirs. We assume a fixed overpressure of 10 MPa at injection, isothermal injection, an initial reservoir temperature and pressure of 500°C and 34 MPa, respectively, and a range of reservoir permeability, k, that spans 3 orders of magnitude. The mass flow rate, Qm, is a function of the reservoir permeability and thickness. The analytical solution predicts a sinking profile due to the density difference between water and CO2. (b) Simulation results after 10 years of injecting 1.0 Mt yr⁻¹ of CO2 at 50°C through 500 m of open well centered in a 2,000-m-thick reservoir. The extent of the cooled region is limited compared to the CO2 plume and does not affect its sinking tendency.

Managing Risks The CO2 injection rates in deep volcanic areas can be of up to several Mt per year per well (Figure 3a). High injection rates induce pressure buildup and cooling that will in turn affect the geomechanical stability of faults and potentially induce seismic events. Pressure buildup is the main triggering mechanism in the short term, and cooling dominates in the long term. The latter may limit the lifetime of injection projects if induced earthquakes become too frequent or of excessively high magnitude (Parisio, Vinciguerra, et al., 2019). Thresholds to induced seismicity, both in terms of magnitude and frequency, are site specific: they depend on the local structural and tectonic features and on the consequences for the population and infrastructure, so the risk might be low in isolated areas but unbearably high in densely populated volcanic areas around the world. In any case, induced seismicity risks should be minimized through subsurface characterization, continuous monitoring, and adequate pressure and temperature management.

Figure 4. CO2 sinking mechanism. The numerically computed sinking profile of CO2, represented as the area with CO2 saturation Sc > 1%, is a consequence of the interplay between gravity and viscous forces as represented by the values of the gravity number N. Cold CO2 injection does not increase the CO2 buoyant potential because thermal equilibrium is reached within a small region around the wellbore where viscous forces dominate over gravity forces. At the far field, CO2 is in thermal equilibrium with the reservoir, becoming denser than water, and since gravity forces are greater than viscous ones, CO2 tends to sink.

The risks of CO2 injection in volcanic areas are site specific and should be carefully assessed and evaluated prior to each potential development project. These risks are connected with the intrinsic risks of active volcanism, namely, CO2 degassing, hydrothermal explosions, and magmatic eruptions, occurrences that could raise concerns about the feasibility of anthropogenic CO2 injection. CO2 degassing is naturally present in volcanic areas and usually has its origin at boiling aquifers with superheated steam, which is buoyant (Chiodini et al., 2001).
For the injected CO2 to leak and eventually reach the surface, it would have to reverse its sinking tendency and become buoyant. However, our proposal only considers injecting CO2 into supercritical reservoirs, which lie much deeper and at higher temperature and pressure than boiling aquifers. Yet, similarly to what happens in magma chambers, the denser fluid, that is, CO2, might migrate laterally outside of the storage formation and encounter different temperature and pressure conditions at which CO2 becomes buoyant (Gudmundsson, 2020). Hydrothermal explosions are caused by spinodal decomposition from metastable states leading to fast re-equilibration phenomena (Thiery & Mercury, 2009), and the associated risks can be increased by long-term fluid extraction in geothermal reservoirs, where the pressure drop could bring the system closer to metastable states. We argue that injecting CO2 will prevent excessive pressure drawdowns and will help maintain a safe distance in the fluid phase space from metastable and dangerous states, where explosive fluid demixing is possible. The risks of magmatic eruptions are strongly linked to the volcanic activity of a specific site. Consequently, volcanic centers with recent eruptive manifestations should be avoided as target areas of deep CO2 injection. Avoiding recently active volcanic centers is seldom restrictive in terms of geographical development, because supercritical resident brine can potentially be found at drillable depth in several parts of the world where volcanic manifestations are present (Elders et al., 2014). As an example, the Acoculco Caldera Complex has shown no sign of volcanic activity in the form of eruptions and lava flows for approximately the last 60,000 years (Sosa-Ceballos et al., 2018). Nonetheless, two wells drilled within the caldera recorded a very high geothermal gradient, with approximately 300°C at 2-km depth (Calcagno et al., 2018). The feasibility of this technology is strictly connected to the available drilling technology and to the possibility of reaching pressure and temperature above the critical point of water, such that CO2 would sink. For geothermal gradients of 30 K km⁻¹, the critical point of water would be encountered at around 13-km depth, which is currently beyond the available drilling technology. In volcanic areas, because of the higher geothermal gradients, the critical point of water is located at the accessible depth of 3-4 km (Friðleifsson et al., 2017). Isolating the lower part of the well through proper casing, a great technological challenge per se (Kruszewski & Wittig, 2018), is also necessary to ensure that CO2 is injected at the proper depth.
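The quoted depths follow from first-order arithmetic on the geothermal gradient, as sketched below (the ~15°C surface temperature is our assumption; pressure is the less restrictive condition, since a hydrostatic water column exceeds the 22.06 MPa critical pressure of water at roughly 2.2 km). The results agree to first order with the ~13 km and 3-4 km quoted above.

```python
# Depth at which the critical temperature of water (~374 C) is reached for
# a linear geothermal gradient, assuming a 15 C surface temperature.
T_CRIT, T_SURF = 374.0, 15.0  # deg C

for gradient in (30.0, 100.0, 120.0):  # geothermal gradient, K/km
    depth_km = (T_CRIT - T_SURF) / gradient
    print(f"{gradient:5.0f} K/km -> critical temperature at ~{depth_km:.1f} km")
```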
Perspectives of Technological Development CO2 injectivity is controlled by reservoir permeability, which is highly dependent on temperature. For example, fractured granite exhibits a permeability transition (termed elastoplastic) that depends on a threshold mean effective stress, itself a function of temperature (Watanabe, Numakura, et al., 2017). Above the threshold stress, permeability decreases drastically with increasing mean effective stress. In contrast, fractured basalt remains stable up to high temperatures (>500°C), and at 450°C the observed permeability depends on stress and ranges from 10⁻¹⁷ to 10⁻¹⁶ m² for mean effective confining stresses of up to 60 MPa (Watanabe, Numakura, et al., 2017). The mean effective stress in the crust strongly depends on the rheology (Meyer et al., 2019; Parisio, Vilarrasa, et al., 2019), and its determination at great depth and high temperature remains uncertain. Considering that permeability measurements on laboratory specimens tend to underestimate natural permeability at the geological scale (Neuzil, 1994) and that all circulation fluid was lost during the drilling of IDDP-2 (Friðleifsson et al., 2017), we believe that in situ permeabilities ranging from 10⁻¹⁵ to 10⁻¹⁴ m² are possible in the fractured basaltic crust (Hurwitz et al., 2007). Additionally, during injection the fluid pressure opens up preexisting fractures, while cooling contracts the surrounding rock, generating additional fracture aperture. Assuming a cubic relationship between transmissivity and fracture aperture (with fracture permeability expressed as k = w²/12, where w is the fracture aperture), an increase of the fracture aperture by 1 order of magnitude implies an increase of the fracture transmissivity by 3 orders of magnitude. Stimulation techniques also have the potential to achieve higher permeability at depth (Watanabe et al., 2019; Watanabe, Egawa, et al., 2017). We estimate that suitable injection sites will permit injection rates ranging from 0.5 to 8 Mt yr⁻¹ per well (Figure 3a). Thus, for every 100 wells drilled worldwide in deep volcanic areas for combined geologic carbon storage and geothermal purposes, approximately 50 to 800 Mt of CO2 would be stored each year without buoyancy-driven leakage risk. The number of injection wells that will become operative in the next decades is highly uncertain, but to put this in perspective, 100 wells would store more CO2 than is currently being stored worldwide, representing between 1% and 8% of the total worldwide storage target, a nonnegligible contribution to mitigating climate change effects (IPCC, 2018). Our proposal is currently a blue-sky idea, and several challenges need to be addressed in future works, including the exact deployment of the technology, more refined economic and cost/benefit analyses, predrilling geophysical exploration, site monitoring during operation, and improvements and adaptations of drilling technologies.
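Two of the numbers invoked above are easy to verify directly: the cubic-law claim that one order of magnitude in fracture aperture gives three in transmissivity (since the transmissivity per unit fracture width scales as k·w = w³/12), and the 50-800 Mt yr⁻¹ aggregate for 100 wells. A minimal sketch with illustrative apertures:

```python
# Cubic-law scaling: fracture permeability k = w^2/12, so the transmissivity
# per unit fracture width, k * w = w^3/12, grows with the cube of aperture.
def fracture_permeability(w):       # aperture w in m -> permeability in m^2
    return w ** 2 / 12.0

def fracture_transmissivity(w):     # per unit fracture width, in m^3
    return w ** 3 / 12.0

for w in (1e-4, 1e-3):              # illustrative apertures: 0.1 mm and 1 mm
    print(f"w = {w:.0e} m: k = {fracture_permeability(w):.1e} m^2, "
          f"k*w = {fracture_transmissivity(w):.1e} m^3")

# Aggregate storage quoted in the text: 100 wells at 0.5-8 Mt/yr each.
print(f"100 wells -> {100 * 0.5:.0f} to {100 * 8:.0f} Mt CO2 per year")
```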
Conclusions We show that storing CO2 in reservoirs in which the resident water is in the supercritical state reduces the risk of buoyancy-driven CO2 leakage. Even when CO2 is injected much colder than the reservoir temperature, making it locally buoyant, no buoyant flow arises around the wellbore, and a sinking CO2 plume develops away from the wellbore. The injectivity per wellbore is relatively high due to supercritical fluid mobility, while overpressure remains low. Continuous injection of CO2 over a decade is safe, because cooling only affects a radius on the order of tens of meters from the injection wellbore. Over a longer time span, the expansion of the cooled region might increase local seismicity as faults and fractures respond to thermally induced strains, limiting project lifetime. Our analyses show that injecting into reservoirs above the critical point of water would constitute a complementary solution to the problem of significantly reducing CO2 emissions and would extend the current applicability of geologic carbon storage through the CO2 sinking effect, which hinders buoyancy-driven leakage to the surface. Conflicts of Interest There are no conflicts to declare. Data Availability Statement The calculations are easily reproducible and described in detail in section 2. The FEM code used for the CO2 injection computations can be downloaded freely online (at https://deca.upc.edu/en/projects/code_bright). The input files for the numerical model can be accessed at the institutional repository Digital.CSIC, which follows FAIR principles (https://digital.csic.es/handle/10261/196740).
Efficacy of Steroid Pulse Therapy for Autoimmune Pancreatitis Type 1: A Retrospective Study Autoimmune pancreatitis (AIP) is treatable with steroids, but relapse is frequent. The efficacy of steroid pulse therapy has been shown for various autoimmune diseases, but it has not become established therapy for AIP. In this study, we reviewed the efficacy of steroid pulse therapy in 24 subjects who were diagnosed with AIP type 1 at our hospital. Patient characteristics, the time course of serum IgG4, and the cumulative relapse-free survival rate were compared between patients who received oral steroid therapy (oral group) and those who were treated with steroid pulse therapy (pulse group). Serum IgG4 was reduced significantly after therapy in both groups, and the 5-year cumulative relapse-free survival rates in the two groups did not differ significantly (oral group 46.9%, pulse group 77.8%). However, in the subset of cases with diffuse pancreatic swelling, this rate was significantly lower in the oral group (33.3% vs. 100.0%, p = 0.046). These results suggest that steroid pulse therapy is effective for prevention of relapse in AIP patients with diffuse pancreatic swelling. Introduction Autoimmune pancreatitis (AIP) was defined by Yoshida et al. as pancreatitis caused by irregular narrowing of the pancreatic duct, pancreatic swelling, or infiltration of lymphocytes and fibrosis, with such events related to autoimmune mechanisms [1]. Hamano et al. reported elevated levels of serum IgG4 in patients with AIP [2]. The 2010 International Consensus Diagnostic Criteria (ICDC) for Autoimmune Pancreatitis [3] define pancreatitis as "Type 1" when other organ involvement and elevated serum IgG4 are present and lymphoplasmacytic sclerosing pancreatitis (LPSP) is histologically the distinguishing characteristic; or "Type 2" when elevated serum IgG4 is not present, the symptoms accompany inflammatory bowel disease, and idiopathic duct-centric chronic pancreatitis (IDCP)/granulocytic epithelial lesion (GEL) is histologically the distinguishing characteristic. Due to growing recognition of AIP, the number of reported cases has increased. Steroid therapy is the standard treatment, but relapse is reported to occur in 10-53% of cases [4-7]. Steroid pulse therapy produces local immunosuppression after organ transplantation and is effective for systemic lupus erythematosus (SLE), interstitial pneumonia, and several other autoimmune diseases [8-10], but it has not become established therapy for AIP. Matsushita et al. suggested that steroid pulse therapy can be used for lower bile duct stricture and is useful for evaluating the therapeutic outcome because no tapering is needed [11]. In 2011, Tomiyama et al. reported significant improvements of γ-glutamyl transpeptidase (γ-GTP) at 2 weeks after steroid pulse therapy, of alanine aminotransferase (ALT) at 2 and 8 weeks, and of glycosylated hemoglobin at 7 months after therapy [12]. In addition, γ-GTP at 2 and 8 weeks after therapy was improved in a subset of patients with diffuse pancreatic swelling. Oral steroid therapy with prednisolone (PSL) and steroid pulse therapy with methylprednisolone (mPSL) are administered for AIP in our hospital. The goal of this study was to retrospectively review the efficacy of steroid pulse therapy for AIP. Patients This study was approved by the Ethics Committee of Fukushima Medical University Hospital.
Patients were not required to give informed consent to the study because the analysis uses anonymized clinical data obtained after each patient agreed to treatment by written consent. For full disclosure, the details of the study are published on the home page of Fukushima Medical University. Among 39 patients with AIP treated at our hospital from July 2003 to July 2013, 24 AIP type 1 patients with traceable treatment histories of at least 12 months and initially elevated serum IgG4 (≥135 mg/dl) who were initially treated with steroid therapy were selected for the study (Fig 1). Patients with normal serum IgG4 (n = 5), surgery as initial treatment (n = 2), no treatment (n = 1), a history of steroid treatment (n = 1), or a traceable history <12 months (n = 6) were excluded from the study. The normal range of serum IgG4 is 4.8-105 mg/dl in our hospital. All patients were classified as "unconfirmed diagnosis" or higher based on the 2010 International Consensus Diagnostic Criteria for Autoimmune Pancreatitis (confirmed diagnosis: 22 cases, quasi-confirmed diagnosis: 2 cases). All underwent endoscopic ultrasonography-guided fine needle aspiration (EUS-FNA) for cytology to rule out pancreatic cancer; therefore, there were no diagnoses of LPSP based on EUS-FNA specimens. Therapy Treatment regimens included (1) starting with peroral PSL 30 mg, reducing the dose in 5-mg increments every 4 weeks down to 10 mg, and then reducing the dose in 2.5-mg increments (8 patients, oral group); and (2) intravenous administration of mPSL 250 mg/day or 125 mg/day for 3 days, followed by starting peroral PSL 20 mg and reducing the dose as in method (1) (16 patients, pulse group: steroid pulse therapy with mPSL 250 mg/day in 11 patients and with mPSL 125 mg/day in 5 patients). There were no specific conditions for steroid pulse therapy, and treatment regimens were selected at the discretion of each attending physician. Definition of relapse Patients were diagnosed with relapse after recovery following steroid treatment based on observation of pancreatitis or cholangitis with re-elevated serum IgG4 (≥135 mg/dl). Patients were diagnosed with newly developed cholangitis after pancreatitis if focal or multiple stenoses of extrahepatic or intrahepatic bile ducts were detected by endoscopic retrograde cholangiopancreatography. Examination items The oral and pulse groups were compared based on age, gender, type of pancreatic swelling (diffuse or focal), serum IgG4 before treatment, other organ involvement, obstructive jaundice, PSL dosage (with the dosage of mPSL converted to PSL) until serum IgG4 reached a minimum level, and the number of patients with elevated glycosylated hemoglobin (HbA1c) after steroid therapy. To determine the impact of steroid treatment on the long-term prognosis, serum IgG4 levels were compared before and after treatment. Serum IgG4 after therapy was defined as the minimum value before relapse in patients with relapse. The cumulative relapse-free survival rate was compared between the two groups and was also examined in a subset of patients who showed diffuse pancreatic swelling. As mentioned above, Tomiyama et al. found that γ-GTP at 8 weeks after therapy was improved by steroid pulse therapy only in patients with diffuse pancreatic swelling [12]. Thus, we thought that steroid pulse therapy was more likely to be effective in AIP patients with diffuse pancreatic swelling. Therefore, the cumulative relapse-free survival rate was also compared in patients with diffuse pancreatic swelling (3 in the oral group and 6 in the pulse group).
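For concreteness, the oral tapering schedule of regimen (1) can be written out explicitly. The sketch below assumes that the 2.5-mg reduction steps also last 4 weeks each; the text states the interval only for the 5-mg steps, so that part is our assumption.

```python
# Regimen (1): start PSL at 30 mg/day, taper by 5 mg every 4 weeks down to
# 10 mg/day, then taper by 2.5 mg (assumed here to also be every 4 weeks).
def psl_taper(start=30.0, switch=10.0):
    dose = start
    while dose > switch:            # 5-mg steps: 30, 25, 20, 15 mg/day
        yield dose
        dose -= 5.0
    while dose > 0:                 # 2.5-mg steps: 10, 7.5, 5, 2.5 mg/day
        yield dose
        dose -= 2.5

for step, dose in enumerate(psl_taper()):
    print(f"weeks {4 * step + 1:2d}-{4 * (step + 1):2d}: PSL {dose:g} mg/day")
```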
Statistics Age, PSL dosage until serum IgG4 reached a minimum level, and serum IgG4 levels were compared by the Mann-Whitney U test. Serum IgG4 levels before and after treatment in each group were compared by the Wilcoxon signed-rank test. Gender, locality of pancreatic enlargement (diffuse or focal), other organ involvement, the presence of obstructive jaundice, and the number of patients with elevated HbA1c after steroid therapy were compared by Fisher's exact probability test. A log-rank test was used for comparison of cumulative relapse-free survival rates. P < 0.05 was considered significant in all tests. All analyses were performed using Statcel 3 (OMS Edition, Saitama, Japan). Results The serum IgG4 level was significantly reduced from 367.5±287.5 mg/dl before treatment to 124.3±129.3 mg/dl after treatment in the oral group (p = 0.012) (Fig 2A), and from 476.0±409.3 mg/dl before treatment to 110.3±147.6 mg/dl after treatment in the pulse group (p < 0.001) (Fig 2B). The 5-year cumulative relapse-free survival rate did not differ significantly between the oral and pulse groups (46.9% vs. 77.8%, p = 0.098) (Fig 3A). However, in the subset of cases with diffuse pancreatic swelling, this rate was significantly lower in the oral group (33.3% vs. 100.0%, p = 0.046) (Fig 3B). Discussion In this study, the long-term prognoses of patients who received oral steroid therapy or steroid pulse therapy were not significantly different, but a protective effect against relapse was found after steroid pulse therapy in patients with diffuse pancreatic swelling. Steroid therapy has become standard therapy for AIP; however, regarding the time until relapse, a report from the Mayo Clinic found a 53% relapse rate after 3 months when steroid treatment was stopped at 11 weeks [13], whereas Kubota et al. found a relapse rate of 25.9% when steroid treatment was continued for 12 months [14]. Predictive factors for relapse of AIP include patient background factors and serum IgG4 levels. Patients with AIP in whom serum IgG4 was not elevated were excluded from the current study because these cases tend not to relapse and can improve spontaneously [14]. In fact, the five cases in our hospital without elevated serum IgG4 did not relapse. Among patient background factors, IgG4-related sclerosing cholangitis, obstructive jaundice, immune complex values, diffuse pancreatic ductal changes, and sclerosing sialadenitis have been proposed as relapse factors [6,15-18]. Studies of the serum IgG4 level have proposed the following predictive factors for relapse: high serum IgG4 at initial diagnosis [11,19-21], a high level before and after treatment [4], and resurgence of serum IgG4 in the remission phase after treatment [22]. In reports of IgG4-related conditions not limited to AIP, the serum IgG4 level was lowered by steroid treatment in all patients, but normalization was achieved in only about half of the patients, and patients with persistently high values were reported to be at risk for relapse [23]. The role of IgG4 in IgG4-related diseases is unclear, but Tabata et al. suggested that the serum IgG4 level reflects the disease activity of AIP [24]. In the current study, serum IgG4 improved in both the oral and pulse groups, which suggests that the two treatments do not differ in their ability to reduce disease activity.
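For readers who want to reproduce this kind of survival comparison, the sketch below uses the lifelines package with hypothetical follow-up data (months; 1 = relapse). The original analysis used Statcel 3, and neither the package nor the numbers here are the authors'; the sketch only illustrates the Kaplan-Meier estimate and log-rank test named in the Statistics section.

```python
# Kaplan-Meier curves and a log-rank test for two treatment groups,
# using hypothetical follow-up times (months) and relapse indicators.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

oral_t, oral_e = [6, 14, 20, 36, 48, 60, 60, 60], [1, 1, 1, 1, 0, 0, 0, 0]
pulse_t, pulse_e = [18, 40, 60, 60, 60, 60, 60, 60], [1, 1, 0, 0, 0, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(oral_t, event_observed=oral_e, label="oral")
print(kmf.survival_function_)       # cumulative relapse-free survival

result = logrank_test(oral_t, pulse_t,
                      event_observed_A=oral_e, event_observed_B=pulse_e)
print(f"log-rank p = {result.p_value:.3f}")
```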
Steroid pulse therapy for AIP is useful for subsequent evaluation of the therapeutic effect because no tapering is needed [11], and it may also improve liver function (ALT, γ-GTP) and HbA1c [12]. Steroid pulse therapy was found to be effective in a patient with AIP in whom a lower bile duct stricture did not improve after oral steroid therapy, and it improved γ-GTP on a long-term basis in this patient [12]. Our study indicated no significant difference between steroid pulse therapy and oral steroid therapy in their effect on disease activity. However, steroid pulse therapy was more effective than oral steroid therapy in patients with diffuse pancreatic swelling, and these patients may be suitable candidates for steroid pulse therapy. The regimen of steroid pulse therapy in previous studies was mPSL 500 mg/day × 3 days. However, several adverse effects of this therapy have been reported: Zonana-Nacach et al. found a relationship between steroid pulse therapy (mPSL 1 g × 3 days) and dementia in a cohort study of 539 patients with SLE [25], and Haugeberg et al. showed that steroid pulse therapy (6.6-10 mg/kg) elevated the risk of osteoporosis [26]. There may also be an increased risk of circulatory failure following administration of mPSL 500 mg within 10 minutes [27] and of liver dysfunction with steroid pulse therapy of mPSL at 500 mg/day [28]. For these reasons, as low a steroid dose as possible is preferable, and a steroid pulse regimen of mPSL 250 or 125 mg/day may be an option. The limitations of the current study are the small number of patients and the steroid tapering schedule not being perfectly standardized, due to the retrospective nature of the study. However, the steroid dose tapering mostly followed the methods described above, and any deviation is likely to have had little influence on the results, because total steroid doses during the period of serum IgG4 reduction did not differ significantly between the oral and pulse groups. As described above, steroid pulse therapy (mPSL 500 mg/day) is effective for therapeutic diagnosis of AIP [11]. We did not examine the effective dose of steroid pulse therapy (mPSL 125 or 250 mg/day) because all patients in the pulse group underwent steroid tapering. A future study on the effect of steroid pulse therapy (mPSL 125 or 250 mg) without tapering is required to examine safer therapeutic diagnosis, and further accumulation of patients is needed to confirm the results. Within these limitations, we conclude that steroid pulse therapy for AIP does not show a stronger effect than oral steroid therapy on the prevention of relapse or on long-term prognosis overall, but pulse therapy may be an effective treatment for patients with diffuse swelling of the pancreas.
Overexpressing the Myrosinase Gene TGG1 Enhances Stomatal Defense Against Pseudomonas syringae and Delays Flowering in Arabidopsis Myrosinase enzymes and their substrate glucosinolates provide a specific defensive mechanism against biotic invaders in the Brassicaceae family. In these plants, myrosinase hydrolyzes glucosinolates into diverse products, which can have direct antibiotic activity or function as signaling molecules that initiate a variety of defense reactions. A myrosinase, β-thioglucoside glucohydrolase 1 (TGG1), was previously found to be strikingly abundant in guard cells and is required for the abscisic acid (ABA) response of stomata. However, it remains unknown which particular physiological processes actually involve stomatal activity as modulated by TGG1. In this experimental study, a homologous TGG1 gene from broccoli (Brassica oleracea var. italica), BoTGG1, was overexpressed in Arabidopsis. The transgenic plants showed enhanced resistance against the bacterial pathogen Pseudomonas syringae pv. tomato (Pst) DC3000 via improved stomatal defense. Upon Pst DC3000 infection, overexpression of BoTGG1 accelerated stomatal closure and inhibited the reopening of stomata. Compared with the wild type, 35S::BoTGG1 was more sensitive to ABA- and salicylic acid (SA)-induced stomatal closure but less sensitive to indole-3-acetic acid (IAA)-inhibited stomatal closure, indicating that these hormone signaling pathways are possibly involved in the stomatal defense regulated by TGG1. Furthermore, overexpression of BoTGG1 delayed flowering by promoting the expression of FLOWERING LOCUS C (FLC), which encodes a MADS-box transcription factor known as a floral repressor. Taken together, our results suggest that glucosinolate metabolism mediated by TGG1 plays a role in plant stomatal defense against P. syringae and also modulates flowering time by affecting the FLC pathway. INTRODUCTION Glucosinolates are major secondary metabolites found in the order Brassicales, including the model plant Arabidopsis and many vegetables (Agerbirk and Olsen, 2012). Myrosinase (β-thioglucoside glucohydrolase, TGG) hydrolyzes glucosinolates by cleaving the thioglucosidic bond to release hydrolysis products that are toxic to insects and pathogens (Tierens et al., 2001; Hopkins et al., 2009; Calmes et al., 2015). Glucosinolates and myrosinase are normally harbored in separate compartments within plants (Andreasson et al., 2001; Koroleva et al., 2010), but they come into contact with each other upon tissue disruption, such as chewing by insects or damage by pathogens, rapidly releasing large amounts of toxic hydrolysis products, typically isothiocyanates and nitriles and their derivatives (Halkier and Gershenzon, 2006). Myrosinase activity has been detected in all glucosinolate-containing plants (Husebye et al., 2002). So far, six TGG genes encoding classical myrosinases have been found in the model plant Arabidopsis (Xu et al., 2004). The TGG1 and TGG2 genes are expressed aboveground, where the myrosinases encoded by each are considered to mainly break down aliphatic glucosinolates (Xue et al., 1995; Barth and Jander, 2006); TGG3 and TGG6 are pseudogenes with multiple frame-shift mutations in their coding regions and are specifically expressed in the plant's anthers (Husebye et al., 2002; Zhang et al., 2002); finally, TGG4 and TGG5 are specifically expressed in the roots and are related to auxin synthesis and root-growth regulation (Fu et al., 2016).
In addition to these six classical TGG myrosinases, two other β-glucosidases, PEN2 and PYK10, have been identified as atypical myrosinases that primarily hydrolyze indole glucosinolates (Bednarek et al., 2009; Nakano et al., 2017). Numerous studies have demonstrated that the glucosinolate-myrosinase defense system supports broad-spectrum immunity in Arabidopsis. Besides generating direct antimicrobial activity through toxic hydrolysis products, glucosinolate degradation can also produce signaling molecules that initiate conserved defense responses. Clay et al. (2009) found that degradation of tryptophan-derived indole glucosinolates mediated by the atypical myrosinase PEN2 is required for callose deposition in pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI). Likewise, TGG1 and TGG2 are both presumed to be involved in conserved immune responses against pathogens. For example, the hydrolysis of methionine-derived aliphatic glucosinolates mediated by TGG1 and TGG2 is required for programmed cell death (PCD) upon inoculation with the bacterial pathogen Pseudomonas syringae pv. tomato (Pst) DC3000 and the downy mildew Hyaloperonospora arabidopsidis (Andersson et al., 2015). Interestingly, indole glucosinolate degradation catalyzed by TGG1 attenuates mycotoxin fumonisin B1 (FB1)-induced PCD (Zhao et al., 2015), suggesting that glucosinolate metabolism responds to different kinds of pathogens through various molecular mechanisms. Apart from influencing PCD, TGG1 and TGG2 also contribute to stomatal activity in Arabidopsis. As the most abundant stomatal protein, TGG1 comprises 40% to 50% of the total protein found in guard cells; accordingly, the tgg1 mutant has impaired wound-induced stomatal closure and is less sensitive to ABA-inhibited opening of its stomata (Zhao et al., 2008), while the tgg1/tgg2 double mutant is defective in both ABA- and methyl jasmonate (MeJA)-induced stomatal closure (Islam et al., 2009). Consistent with those findings, some glucosinolate-derived isothiocyanates have been found capable of inducing stomatal closure (Hossain et al., 2013; Sobahan et al., 2015). Taken together, these studies indicate that myrosinase activity and the corresponding hydrolysis products play a critical role in the regulation of stomatal closure. Yet it remains unclear in which particular physiological processes TGG-modulated stomatal activity is involved. Glucosinolate metabolism contributes not only to defense against biotic stress but also to plant development. Several reports have suggested that glucosinolates and their metabolites are involved in the modulation of flowering time in Arabidopsis (Naur et al., 2003; Kerwin et al., 2011; Jensen et al., 2015; Kong et al., 2015; Xu et al., 2018). AOP2 and AOP3 are genes in the aliphatic glucosinolate biosynthesis pathway, encoding two 2-oxoglutarate-dependent dioxygenases that modify glucosinolate side chains (Kliebenstein et al., 2001). These two paralogous genes affect glucosinolate accumulation and flowering time; however, they differ in their ability to modulate flowering time, so the two genes may affect flowering phenology via separate mechanisms (Jensen et al., 2015). In particular, AOP2 can alter the circadian clock pathway, but whether this contributes to the regulation of flowering time is questionable (Kerwin et al., 2011). Although cross talk clearly occurs between glucosinolate metabolism and flowering control, many unknown processes await further exploration.
In this study, transgenic Arabidopsis plants overexpressing a homologous TGG1 gene from broccoli (designated here as BoTGG1) were investigated, and several interesting phenotypes were observed. We also performed the same analyses in plants overexpressing the endogenous AtTGG1; similar phenotypes were observed, but none as pronounced as in 35S::BoTGG1. To more clearly expound the function of myrosinase TGG1, only data concerning 35S::BoTGG1 are presented here. We found that 35S::BoTGG1 plants were more resistant to the bacterial pathogen Pst DC3000. Given that TGG1 participates in stomatal activity, we hypothesized that this enhanced pathogen resistance of 35S::BoTGG1 arose from an altered stomatal defense. Stomata are natural openings on the surface of leaves that not only enable gas exchange but also facilitate the entry of bacteria. Hence, stomatal closure is considered among the conserved immune mechanisms plants employ against bacterial pathogens (Melotto et al., 2006; Zeng et al., 2010; Sawinski et al., 2013). When bacteria attack, the plant first recognizes PAMPs and responds by closing its stomata. To circumvent this immune response, bacteria may release a specific virulence factor to effectively cause stomata to reopen (Melotto et al., 2006). In our study, we discovered that overexpression of BoTGG1 accelerated stomatal closure and inhibited stomatal reopening upon infection with Pst DC3000. Furthermore, 35S::BoTGG1 was more sensitive to ABA- and salicylic acid (SA)-induced stomatal closure while less sensitive to indole-3-acetic acid (IAA)-inhibited stomatal closure, indicating that TGG1-affected stomatal defense likely operates via the signaling pathways of these hormones. In addition to enhanced pathogen resistance, 35S::BoTGG1 displayed another phenotype, delayed flowering, which led to significant increases in the biomass of both the aerial part and the root system. Considering that high biomass and pathogen resistance are important plant breeding goals, our study should prove useful for breeding economically valuable cruciferous vegetables with both traits by modifying their glucosinolate metabolism. Plant Material and Growth Conditions Seeds of broccoli (Brassica oleracea var. italica) cultivar 'Qingxiu' were purchased from the JiaHe Seeds Company (Beijing, China) and used for BoTGG1 cloning. Seeds of Arabidopsis ecotype Columbia (Col-0) were obtained from the Arabidopsis Biological Resource Center and used for genetic transformation. All plants were grown under a 16-h photoperiod with a photosynthetic photon flux density of 100 μmol·m⁻²·s⁻¹, at 23°C and 60% relative humidity. Molecular Cloning and Plant Transformation Total RNA was extracted from 3-day-old broccoli seedlings using the EASYPure Plant RNA Kit (TransGen, Beijing, China). The cDNA was synthesized from total RNA with the PrimeScript RT-PCR Kit (Takara, Shiga, Japan). The coding sequence (CDS) of the BoTGG1 gene was amplified using the primers BoTGG1-F and BoTGG1-R (primer sequences are listed in Table S1). To construct the expression vector 35S::BoTGG1, the obtained PCR product was cloned into the expression vector pCAMBIA330035Su according to a previously described method (Nour-Eldin et al., 2006). The constructed expression vector was introduced into Agrobacterium tumefaciens strain LBA4404 and transferred into Arabidopsis via inflorescence infection (Clough and Bent, 1998).
To select transformants, seeds were plated on 1/2 Murashige and Skoog (MS) agar medium containing 50 mg L⁻¹ kanamycin. Two independent homozygous transgenic lines were used in the subsequent analyses. Glucosinolate Extraction and Analysis The 35S::BoTGG1 and wild-type plants were grown simultaneously for 4 weeks. For each plant, 100-150 mg of rosette leaves was harvested for glucosinolate extraction according to a previously described method (Hansen et al., 2007). Glucosinolates were extracted with methanol, and desulfoglucosinolates were obtained by filtration through a DEAE Sephadex column followed by sulfatase treatment. High-performance liquid chromatography (HPLC) analysis was carried out as previously described (Grosser and van Dam, 2017). Glucosinolates were identified as desulfoglucosinolates, with sinigrin used as the external standard. Myrosinase Determination Rosette leaves of 4-week-old wild-type plants and the two independent transgenic lines of 35S::BoTGG1 were harvested for the myrosinase activity assay. Fresh leaves (150 mg) were frozen in liquid nitrogen and quickly ground into powder. The ground sample was solubilized in 1 ml of extraction buffer (pH 7.2) containing 10 mM K-phosphate, 1 mM EDTA, 3 mM dithiothreitol (DTT), and 5% glycerol, then vortexed and centrifuged at 12,000 × g for 15 min at 4°C. The supernatant was collected to measure myrosinase activity. Myrosinase activity was quantified from the rate of hydrolysis of sinigrin. The reaction buffer consisted of 33.3 mM K-phosphate (pH 6.5) and 0.2 mM sinigrin; the reaction was initiated by adding 100 μl of extracted enzyme to 2.9 ml of reaction buffer. The decline in absorbance at 227 nm and 37°C over a 5-min period was recorded. Myrosinase activity was expressed as the amount of sinigrin degraded by the enzyme from 1 g of fresh leaf per minute. Bacterial Growth Assay The virulent pathogen Pst DC3000 was used, with bacteria cultured in King's B medium at 28°C. For sprayed infection, 4-week-old plants were sprayed with a bacterial suspension of Pst DC3000 [10⁸ colony-forming units (CFU) ml⁻¹] in 10 mM MgCl2 containing 0.04% Silwet L-77. For infection by injection, leaves were syringe-infiltrated with a bacterial suspension of Pst DC3000 (10⁶ CFU ml⁻¹) in 10 mM MgCl2. Plants sprayed or injected with 10 mM MgCl2 alone served as the corresponding controls. The inoculated plants were kept at high humidity for 3 days. Bacterial growth in the infected leaves was determined as described (Katagiri et al., 2002). Measurement of Stomatal Aperture Four-week-old plants were used for the stomatal aperture bioassay. Peels of rosette leaves were first floated in an opening buffer containing 5 mM KCl, 50 mM CaCl2, and 10 mM MES-Tris (pH 6.15) under light for 3 h to induce maximum stomatal opening. For bacterial infection, leaf peels were transferred to a water suspension of Pst DC3000 (10⁸ CFU ml⁻¹), while those moved to water alone served as the control. Stomatal apertures were observed every 15 min over a 3-h period. For the ABA, SA, and MeJA treatments, leaf peels were transferred to the opening buffer with 10 μM ABA, 500 μM SA, or 10 μM MeJA, respectively, for 2 h. For the IAA treatment, two groups of leaf peels were transferred to fresh opening buffer, with or without IAA, and then placed in darkness for 2 h.
Leaf peels likewise transferred to the opening buffer without hormone additions served as the control. The width and length of each stoma were measured using ImageJ software, and stomatal apertures were expressed as the width-to-length ratio. Quantitative Real-Time PCR Analysis To detect the transcript levels of genes involved in the glucosinolate biosynthesis pathway, rosette leaves from 4-week-old plants were harvested. To detect the transcript levels of genes involved in stomatal defense, detached leaves from 4-week-old plants were incubated with a water suspension of Pst DC3000 (10⁸ CFU ml⁻¹); leaves incubated with water served as the corresponding control, and the incubated leaves were collected every 15 min over a 1-h period. To detect the transcript levels of genes involved in flowering, rosette leaves from 18-day-old plants were harvested. For all sets of leaves, total RNA was isolated using the TRIzol reagent (Invitrogen, Carlsbad, CA). First-strand cDNA was synthesized using the PrimeScript RT Reagent Kit (Takara, Shiga, Japan), and quantitative real-time PCR (qRT-PCR) analyses were performed using the TransStart Top Green qPCR SuperMix (TransGen, Beijing, China) on an ABI 7500 sequence detection system. The detected genes and the primers used in qPCR are listed in Table S1. The ACTIN2 gene of Arabidopsis served as the internal reference gene. Gene expression levels were calculated according to the 2^(-ΔΔCt) method. Biomass Determination For biomass determination, mature plants that had completely developed after producing their terminal flowers were used. The aerial tissues and seeds were harvested separately and dried at 70°C for 2 days. The dry weight of each plant was measured, and the relative weights of the wild type and 35S::BoTGG1 were calculated. Detection of Drought Resistance Wild-type and 35S::BoTGG1 plants were grown under a long-day condition (16-h photoperiod) for 4 weeks. Detached rosette leaves from 35S::BoTGG1 and the wild type were placed on a piece of weighing paper, and the fresh weight of the leaves was measured every 20 min over a 3-h period. Water loss was defined as the percentage of the initial weight lost at each time point. The same amount of soil (by weight) was placed into each pot, after which all pots were soaked on a tray to ensure equivalent soil and water conditions among them. Seeds of 35S::BoTGG1 and the wild type were planted and allowed to grow for 4 weeks, with water added to the tray twice a week. To emulate drought, water was withheld from the plants for 2 weeks, and then all plants were rewatered for 2 days. Plant growth under each water condition was photographed and recorded. RESULTS Cloning of BoTGG1 From Broccoli According to our previous transcriptome analysis in broccoli (Gao et al., 2014), the CDS of a myrosinase gene was amplified from the broccoli (B. oleracea var. italica) cultivar 'Qingxiu', using the unigene TGG as the reference sequence. The CDS of the obtained gene was 1,647 nucleotides long, encoding a protein of 548 amino acids; it showed 98.4% nucleotide and 99.2% amino acid identity with the predicted myrosinase gene in B. oleracea var. oleracea. Compared with its homologous TGG genes in Arabidopsis, the obtained gene was most similar to TGG1, with which it shared 71.5% nucleotide and 79.5% amino acid identity. We therefore designated the obtained gene BoTGG1 (GenBank accession no. MG252789).
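As a point of reference for the expression results below, the 2^(-ΔΔCt) calculation described in the qRT-PCR section reduces to a few lines; the Ct values in this sketch are hypothetical and only illustrate the arithmetic.

```python
# Relative expression by the 2^(-ddCt) method: normalize the target gene to
# the reference gene (ACTIN2) in both sample and calibrator, then compare.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    d_ct_sample = ct_target - ct_ref              # sample, e.g. 35S::BoTGG1
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # calibrator, e.g. wild type
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values for a target gene vs ACTIN2: fold change = 4.0
print(relative_expression(ct_target=24.0, ct_ref=18.0,
                          ct_target_cal=26.0, ct_ref_cal=18.0))
```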
Overexpression of BoTGG1 Increased Myrosinase Activity and Decreased Aliphatic Glucosinolate Content in Arabidopsis To confirm whether BoTGG1 possesses myrosinase activity like its homolog in Arabidopsis, BoTGG1 was overexpressed in Arabidopsis, and this overexpression was confirmed by RT-PCR (Figure S1). Rosette leaves from 4-week-old 35S::BoTGG1 and wild-type plants were harvested, and the myrosinase activity assay was performed using sinigrin (2-propenyl glucosinolate) as the substrate. Compared with wild-type plants, myrosinase activity in 35S::BoTGG1 was significantly increased (Figure 1), demonstrating that BoTGG1 is functional in vitro. To further detect the effects of overexpressing BoTGG1 on the glucosinolate profile in vivo, the glucosinolate content of rosette leaves from wild-type and 35S::BoTGG1 plants was measured. In 35S::BoTGG1, the content of indole glucosinolates showed no significant difference, but aliphatic glucosinolates were significantly reduced to approximately half the level in the wild type (Figures 2A, B). This decrease of aliphatic glucosinolates in 35S::BoTGG1 was possibly due to increased myrosinase activity, but it could also have arisen from decreased biosynthesis of these compounds. To determine which, the expression levels of key genes involved in glucosinolate biosynthesis were assessed. Compared with the wild type, in 35S::BoTGG1 plants the expression levels of the indole glucosinolate biosynthetic genes MYB51 and CYP83B1 were apparently unaltered, while CYP79B3 and SUR1 were slightly changed (Figure 2C). For the genes related to aliphatic glucosinolate biosynthesis, the expression of MYB28, MYB29, CYP83A1, SUR1, and FMO GS-OX1 was slightly increased, whereas MAM1 and CYP79F2 were unchanged and CYP79F1 was slightly decreased (Figure 2D). Nevertheless, since the altered expression of each gene was less than twofold, we infer that the expression of these indole and aliphatic glucosinolate biosynthesis genes in 35S::BoTGG1 remained essentially unchanged compared with the wild type. This suggested that the lowered content of aliphatic glucosinolates in 35S::BoTGG1 was due to enhanced myrosinase activity and the corresponding degradation processes.

FIGURE 1 | Myrosinase activity of wild type (WT) and 35S::BoTGG1. Rosette leaves of 4-week-old WT and two independent transgenic lines of 35S::BoTGG1 were harvested for the myrosinase activity assay. Means (± SE) from three independent biological replicates and three technical repeats are shown. **, significantly different (Student's t-test; P < 0.01) from the WT.

In sum, these results indicated that under normal growing conditions, the overexpressed BoTGG1 primarily broke down aliphatic glucosinolates in intact tissues of Arabidopsis. Overexpression of BoTGG1 Enhanced Resistance to Pst DC3000 To determine whether overexpression of BoTGG1 could affect pathogen resistance in Arabidopsis, virulent Pst DC3000 was used as a representative pathogen, and dip inoculation and syringe infiltration assays were performed. Leaves of 35S::BoTGG1 and wild-type plants were infected with Pst DC3000. Three days after infection, for both dip inoculation and syringe infiltration, wild-type plants displayed clear chlorotic symptoms, whereas 35S::BoTGG1 plants showed no significant signs of infection (Figures 3A, B). The growth of bacteria in leaves of the wild type was approximately 10-fold higher than that in 35S::BoTGG1 in the dip assay and likewise sixfold higher in the syringe assay (Figures 3C, D).
These results suggested that overexpression of BoTGG1 enhanced resistance to Pst DC3000 in Arabidopsis. The difference in bacterial growth between the wild type and 35S::BoTGG1 was larger when inoculated on the surface than when inoculated directly into the apoplast; hence, it may be more difficult for bacteria to enter through the epidermis in 35S::BoTGG1. Overexpression of BoTGG1 Accelerated Stomatal Closure and Inhibited Stomatal Reopening Upon Infection of Pst DC3000 Previous work discovered that Arabidopsis responds to the PAMPs of bacteria by closing its stomata ~1 h after incubation with Pst DC3000, but this stomatal closure is transient, as the bacteria subsequently produce a polyketide toxin, coronatine, which induces stomatal reopening ~3 h after incubation, letting them enter the host plant (Melotto et al., 2006). It is of great interest to know whether overexpression of BoTGG1 affects the closing and reopening of stomata during this arms race between plants and pathogenic bacteria. Therefore, we observed the dynamic state of the stomata during the first 3 h of Pst DC3000 incubation. Water incubation was performed as a control, and no stomatal movement was observed (Figure S2). In the wild type, 90% of the stomata had completely closed by ~60 min post inoculation, while in 35S::BoTGG1 this extent of closure occurred earlier, at ~45 min post inoculation. Within 60-120 min after inoculation, half of the stomata had reopened, though not to their maximum apertures, and they subsequently closed again at 120 min post inoculation in both wild-type and transgenic plants; no significant differences between the two genotypes were observed within this period. During the last 60 min, however, 90% of the stomata gradually reopened in the wild type, while most of the stomata remained closed in 35S::BoTGG1 (Figures 4A, B). These results suggested that overexpression of BoTGG1 accelerated Pst DC3000-induced closing of the stomata and inhibited their later reopening. As shown in Figures 5A, C, ABA and SA effectively induced stomatal closure in both the wild type and 35S::BoTGG1. Compared with wild-type plants, the stomatal aperture was significantly smaller in 35S::BoTGG1, indicating that the latter was more sensitive to ABA- and SA-induced closing of the stomata. Much like ABA and SA, MeJA was able to induce stomatal closure in both the wild type and 35S::BoTGG1, but no significant difference was observed between the two genotypes, suggesting that overexpression of BoTGG1 did not affect stomatal closure induced by this hormone. IAA is another hormone that can positively regulate stomatal opening and is strongly induced by coronatine upon infection by pathogenic bacteria (Lohse and Hedrich, 1992; Kunkel and Harper, 2018). As Figures 5B, D show, in wild-type plants stomata largely closed in darkness in the absence of IAA but stayed open in its presence. In contrast, in 35S::BoTGG1, darkness led to stomatal closure irrespective of whether IAA was present. This result suggested that 35S::BoTGG1 was less sensitive to IAA-inhibited stomatal closure. ABA-mediated stomatal closure is involved in bacterium-triggered stomatal defense (Melotto et al., 2017). To determine whether ABA-mediated stomatal closure is more sensitive in 35S::BoTGG1 in response to P. syringae, we analyzed the expression levels of several pathogen defense-related genes involved in ABA-mediated stomatal closure.
Since bacterium-triggered stomatal defense is a fast response (<1 h) (Melotto et al., 2017), gene expression levels were measured at five time points within an hour of Pst DC3000 infection. ABI1, ABI2, and PP2CA are negative regulators of ABA signaling and play essential roles in pathogen resistance (Park et al., 2009; Rodrigues et al., 2013). The expression levels of these three genes were significantly higher in 35S::BoTGG1 than in the wild type before Pst DC3000 infection (Figure 6A). During the first hour of Pst DC3000 infection, the expression levels of ABI1, ABI2, and PP2CA in 35S::BoTGG1 decreased, while their expression levels in the wild type did not change significantly. The decreased expression of these negative regulator genes indicated activation of the ABA signaling pathway, which might contribute to the faster stomatal closure in 35S::BoTGG1. The SA signaling pathway is also required for stomatal defense (Khokon et al., 2011). NPR1 is a major activator of SA-mediated responses and is essential for stomatal defense (Schellenberg et al., 2010). LOX1, a gene expressed in guard cells, encodes a lipoxygenase whose catalytic product triggers SA-mediated stomatal closure (Montillet et al., 2013). As shown in Figure 6B, the expression of both NPR1 and LOX1 showed a rapid and transient increase during the first 30 min of Pst DC3000 infection, and the increase was much greater in 35S::BoTGG1 than in wild-type plants, suggesting that the SA-triggered stomatal response might be more sensitive in 35S::BoTGG1. SLAC1 encodes a guard cell-expressed anion channel that is a major contributor to stomatal closure. OST1 and GHR1 are two kinases that activate SLAC1 in parallel during ABA-induced stomatal closure, and GHR1 is also involved in SA-mediated stomatal closure (Hua et al., 2012; Acharya et al., 2013). Similar to what was observed for LOX1 and NPR1, a rapid and transient increase in the expression of OST1 and GHR1 was observed in 35S::BoTGG1 during the early response to Pst DC3000, while the increases in the wild type were not significant (Figure 6C). These results further suggested that, in response to Pst DC3000, stomatal closure mediated by ABA and SA might be more sensitive in 35S::BoTGG1. ANAC019, ANAC055, and ANAC072 are three homologous NAC family transcription factors required for coronatine-induced stomatal reopening through the JA signaling pathway (Zheng et al., 2012). As shown in Figure 6D, the expression of these three genes showed a significant and transient increase, but no significant differences were observed between 35S::BoTGG1 and the wild type. Thus, ANAC019, ANAC055, and ANAC072 might not contribute (at least not at the transcription level) to the insensitivity of stomatal reopening in 35S::BoTGG1. Stomatal movement is the most critical factor regulating water transpiration in plants and is closely related to drought resistance. Compared with the wild type, 35S::BoTGG1 exhibited no difference in water loss from detached rosette leaves and showed comparable growth under drought stress (i.e., withholding water for 2 weeks; Figures S3A, B). This result suggested that overexpressing BoTGG1 enhanced stomatal closure in response to pathogen infection but not to drought stress, and thus did not change the drought resistance of the plants.
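The stomatal aperture comparisons behind the results above reduce to comparing per-stoma width-to-length ratios between genotypes. A minimal sketch with hypothetical ImageJ measurements is shown below (the paper applied Student's t-test; the numbers here are illustrative only).

```python
# Compare hypothetical stomatal aperture ratios (width / length, from
# ImageJ measurements) between genotypes with an independent-samples t-test.
from scipy import stats

wt_ratio = [0.42, 0.39, 0.45, 0.40, 0.44, 0.41]  # wild type
tg_ratio = [0.28, 0.31, 0.25, 0.30, 0.27, 0.29]  # 35S::BoTGG1

t_stat, p_value = stats.ttest_ind(wt_ratio, tg_ratio)
print(f"mean WT = {sum(wt_ratio) / len(wt_ratio):.2f}, "
      f"mean 35S::BoTGG1 = {sum(tg_ratio) / len(tg_ratio):.2f}, "
      f"p = {p_value:.2g}")
```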
Overexpression of BoTGG1 Delayed Flowering Time The mechanism by which glucosinolates modulate flowering time, and whether myrosinase is involved in this modulation, remains unclear. We found that overexpressing BoTGG1 in Arabidopsis significantly delayed flowering under a long-day condition (Figures 7A-C). Compared with the wild type, 35S::BoTGG1 plants flowered about 8-9 days later. A recent study (2017) identified a single major quantitative trait locus controlling total glucosinolate content; the authors suggested that FLC is a potential major regulator of glucosinolate content and that a plant's defense and its vegetative-to-reproductive transition can affect each other. To determine whether the delayed flowering in 35S::BoTGG1 was indeed related to an FLC-mediated flowering pathway, the transcription levels of the FLC, FT, and SOC1 genes in rosette leaves of wild-type and 35S::BoTGG1 plants were measured. As Figure 7D shows, in 35S::BoTGG1 the expression of FLC significantly exceeded that in the wild type, whereas the expression of FT and SOC1 was significantly lower than in the wild type. This indicated that an FLC-dependent flowering pathway might have contributed to the delayed flowering that characterized 35S::BoTGG1. Due to this delayed flowering, vegetative growth in 35S::BoTGG1 was prolonged by 9 days and its total growth phase by ca. 28 days. Consequently, the transgenic plants were significantly larger in both their aerial and underground parts in the later stages of the life cycle (Figures 8A, B). After terminal flower production, the final dry weights of the aerial tissue and seeds of 35S::BoTGG1 reached 2.4- and 1.9-fold those of the wild-type counterparts, respectively (Figures 8C, D). However, no increase in biomass could be detected in 35S::BoTGG1 before flowering, indicating that the improved final biomass was due to the prolonged growth phase caused by delayed flowering. DISCUSSION Being the most common natural openings in plants, stomata are the first barriers bacterial pathogens must overcome to successfully colonize their host. Not surprisingly, plants have evolved ways to thwart such attacks by closing their stomata rapidly upon detecting an infection in progress. To circumvent this plant immune response, bacteria have evolved a reciprocal strategy to effectively cause stomata to reopen so they can penetrate the host (Melotto et al., 2006). In this study, we found that enhanced myrosinase activity promoted plant resistance against a pathogenic bacterium (Pst DC3000) by accelerating the closure and inhibiting the reopening of stomata upon infection. Both ABA and SA play positive roles in stomatal closure and are thought to function in PAMP-induced stomatal closure (Melotto et al., 2017). In our study, BoTGG1-overexpressing plants were more sensitive to ABA- and SA-induced stomatal closure, indicating that overexpression of BoTGG1 promoted bacteria-induced stomatal closure possibly via the ABA and SA signaling pathways. The quantitative real-time PCR analysis of the key genes in the ABA- and SA-mediated stomatal closure pathways supported this speculation. In Arabidopsis, TGG1 and TGG2 are required for ABA- and MeJA-induced stomatal closure (Islam et al., 2009). We found that MeJA induced stomata to close in both wild-type and 35S::BoTGG1 plants, but no difference was detected between the two genotypes; this suggests that altered TGG1 activity did not affect the MeJA-mediated stomatal closure pathway under our experimental conditions.
Although MeJA-induced stomatal closure has been detected in some studies (Suhita et al., 2004; Munemasa et al., 2007; Arnaud et al., 2012; Hua et al., 2012; Yan et al., 2015), it could not always be verified by other research groups (Montillet et al., 2013; Savchenko et al., 2014). In fact, the role of JA-Ile as an inhibitor of stomatal closure is more strongly supported (Staswick and Tiryaki, 2004; Sellam et al., 2007; Panchal et al., 2016). In response to plant stomatal defenses, bacterium-produced coronatine mimics JA-Ile to induce stomatal reopening. This inconsistency may be explained by MeJA-induced stomatal closure depending on an endogenous ABA threshold being reached. Generally, IAA plays a positive role in regulating the opening of stomata (Lohse and Hedrich, 1992). Pst was found to promote IAA production, enabling the pathogen to successfully colonize host plants (Chen et al., 2007). In our study, darkness-induced stomatal closure was inhibited by IAA in the wild type but not in 35S::BoTGG1, indicating that 35S::BoTGG1 was insensitive to IAA-inhibited stomatal closure. Taken together, we speculate that IAA production triggered by Pst DC3000 promotes stomatal reopening and that insensitivity to IAA may contribute to the enhanced stomatal defense of 35S::BoTGG1. In addition to improving stomatal defense, overexpression of BoTGG1 likely activated other immune pathway(s), since 35S::BoTGG1 showed significantly higher resistance to Pst DC3000 than the wild type even in the syringe assay, which bypasses the stomatal barrier. In 35S::BoTGG1, aliphatic glucosinolates were significantly reduced due to increased TGG activity, indicating that aliphatic glucosinolate degradation possibly contributes to the enhanced pathogen resistance. Isothiocyanates derived from aliphatic glucosinolates have been reported to limit pathogen growth through direct antimicrobial activity against a wide range of bacterial pathogens (Sellam et al., 2007). In addition, TGG-mediated degradation of aliphatic glucosinolates is required for programmed cell death during hypersensitive responses upon bacterial inoculation (Andersson et al., 2015). Thus, we speculate that, in addition to stomatal defense, the improved immune response in 35S::BoTGG1 might be due to increased antimicrobial activity and/or an activated hypersensitive response. Overexpressing BoTGG1 promoted stomatal resistance against bacteria by enhancing the ability of stomata to close. Nevertheless, stomatal behavior in response to drought stress seemed unaffected, since transpirational water loss from detached rosette leaves and whole-plant survival under drought stress in 35S::BoTGG1 were similar to those of wild-type plants. This finding indicates that the enhanced stomatal closure may function specifically in the plant defense response against bacterial pathogens, in a way that is distinguishable from other stomatal-related physiological activities responsive to abiotic stress. Under adverse environmental conditions, plants not only initiate defense reactions but also need to coordinate their growth and defense to maximize fitness. As a potent defensive compound, glucosinolate and its metabolism play a vital role in biotic stress resistance while also profoundly impacting plant development. Previously, it was found that overexpressing CYP83B1, a glucosinolate biosynthesis enzyme that catalyzes the conversion of indole-3-acetaldoxime to indole-3-acetonitrile oxide, causes early onset of flowering (Naur et al., 2003; Xu et al., 2018).
When Jensen et al. (2015) introduced AOP2 (encoding a 2-oxoglutarate-dependent dioxygenase that modifies glucosinolate side chains) into a naturally null Col-0 background, this led to delayed flowering under long-day conditions. The double mutant myb28/myb29, which lacks both MYB28 and MYB29, the main regulators of aliphatic glucosinolate biosynthesis, presents delayed flowering under both short days and long days. Interestingly, under constant light conditions, the plants carrying introduced AOP2 were observed to flower earlier, yet myb28/myb29 showed no alteration in flowering time (Kerwin et al., 2011). Furthermore, Jensen et al. (2015) showed that the ability of AOP2 to affect flowering time varies among accessions due to their different genetic backgrounds. The study by Kerwin et al. (2011) demonstrated that the glucosinolate pathway modulates the plant circadian clock, subsequently leading to complex physiological shifts; however, they also suggested that the glucosinolate pathway's influence on the circadian clock does not extend to flowering time. In our study, the expression of FLC was significantly higher in late-flowering TGG1-overexpressing plants, indicating that glucosinolate metabolism may regulate flowering time through the FLC pathway. In short, the mechanism by which glucosinolates regulate flowering time is quite complex, and further study is required to reveal more about the crosstalk between this secondary metabolite and flowering in plants.

In summary, as depicted in Figure 9, we have shown that overexpressing the myrosinase gene TGG1 promoted stomatal defense against a bacterial pathogen in two complementary ways: (1) by accelerating stomatal closure and (2) by inhibiting the reopening of stomata, with the former response possibly mediated by ABA and SA. Additionally, the transformation of TGG1 delayed flowering, possibly by promoting the FLC pathway. Due to the delayed flowering, the vegetative and total growth phases were prolonged by 9 and 28 days, respectively, which translated into a significant increase in plant biomass. In an applied breeding context, controlling flowering time would be helpful for producing high yields throughout the year. For many Brassicaceae vegetable crops, late flowering is an important breeding goal; for example, premature bolting is a severe problem in Brassica rapa crops (including Chinese cabbage, pak choi, and turnip), as it largely reduces both quality and yield, so extremely late bolting is a major breeding goal in these crops (Kitamoto et al., 2017). Considering that the glucosinolate-myrosinase system is highly conserved between cruciferous crops and the model plant Arabidopsis, transformation of TGG1 might offer a viable and valuable method for breeding late-flowering varieties with increased biomass as well as enhanced resistance to bacterial pathogens.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the manuscript/Supplementary Files.

AUTHOR CONTRIBUTIONS

JL designed the experiment and KZ conducted the experiment. HS, JZ, WL, and DL participated in various parts of the experiment. JL and KZ wrote the manuscript. All authors have read and approved the final manuscript.

FUNDING

This work was supported by the National Natural Science Foundation of China (NSFC) (31570298) and the Heilongjiang Natural Science Foundation (C2017031).
SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2019.01230/full#supplementary-material

FIGURE S1 | Expression level of BoTGG1 in wild type (WT) and 35S::BoTGG1. Rosette leaves of 4-week-old WT and two independent transgenic lines were harvested for the RT-PCR analysis. ACTIN2 served as the internal control.

Four-week-old WT and 35S::BoTGG1 were treated by withholding water for 2 weeks followed by re-watering.
Features of mercury migration in natural objects of technogenically polluted areas

The article discusses the authors' own studies of anthropogenic mercury migration in the "soil - plant - surface water - bottom sediments of water bodies" system, using the example of the technogenically mercury-polluted territory of the northern industrial zone of Pavlodar city in Kazakhstan. The mercury distribution processes in soil at depths of up to two meters, in water and bottom sediments of the Irtysh River, and in the "Pamyati Azieva" soft spring wheat variety grown on soils with the maximum pollution level were studied. It was discovered that the degree of mercury migration to groundwater and plant objects depends on the soil type; it is minimal in chernozem soil. It was revealed that the mercury amount in the soil is positively correlated with its mineralization degree and filtration level, as well as with the presence of chloride ions. The article also discusses research results on the complex and diverse influence of the predominant ions in soil, and their combinations, on the content and distribution of mercury.

Introduction

Mercury is known to be among the most dangerous toxicants due to its ability to migrate in a stable state from atmospheric air to soil cover, vegetation, water bodies, and animal organisms [1, 2]. If mercury is covered with a thick silt layer in bottom sediments, its toxicity decreases, but this process is time-consuming. Several works have confirmed low mercury mobility in soils at contaminated sites [3].

The industrial zone of Pavlodar city was chosen as the study territory; it has long been polluted because of the operation of the Pavlodar chemical plant, which performed electrolytic production of chlorine and caustic soda using mercury. According to expert estimates, the loss of metallic mercury during the operation of the enterprise amounted to 1,089 tons. The processes of mercury migration in soils and plants, as well as with groundwater to water bodies, were studied as the most important in terms of environmental safety and human health [2]. The relevance of the study is confirmed by many works devoted to mercury accumulation by various agricultural plants (spinach [2], rice [4-6], ginger [7]) or forest leaf litter [8] and to ways of preventing this process. Nevertheless, the literature lacks comprehensive studies of mercury migration and distribution in soils, bottom sediments, and plants, and of the soil characteristics affecting mercury migration processes.

Materials and methods

The mineralization degree, cation-anion composition, and mercury content were determined in soil samples at different depths of occurrence. The granulometric and microaggregate composition was determined by sieving (GOST 12536-79), nitrate content was determined by a photometric method (GOST 26488-85), the content of carbonate and bicarbonate ions was evaluated by titration with a sulfuric acid solution (GOST 26424-85), chloride ions were determined by argentometric titration with a silver nitrate solution (GOST 26425-85), and sulfate ions were determined gravimetrically after precipitation with a barium chloride solution (GOST 26426-85). Additionally, soil filtration coefficients were determined (GOST 25584-90). Mercury concentration was determined by the atomic absorption method with pyrolytic sample decomposition. The analysis was carried out on 22 wells at depths of 0.5-2 meters; a total of 87 samples were investigated.
Statistical data analysis included correlation and regression analysis. Water samples and bottom sediments of the Irtysh River were taken simultaneously; 40 samples of water and 88 samples of bottom sediments were investigated. Research was carried out up to Tatarka village, located 17 km from the border with the Republic of Kazakhstan. Water sampling was carried out in the most turbulent sections of the river, where more active mixing should take place. Bottom sediments were taken at the shoreline to a depth of 15 cm into the bottom by scooping. The "Pamyati Azieva" soft spring wheat variety was used as the plant object; its mercury content was determined by the atomic absorption method.

Results and Discussion

The area under study is a dry steppe with mostly chestnut soils. The valley of the Irtysh River lies on the west side, where chernozem floodplain soils dominate. The concentration of mercury at the study sites did not exceed the normative values (0.1-4 mg/kg). The lowest concentrations of the toxic substance were determined near the Irtysh River (0.2-0.5 mg/kg) (Figures 1 and 2). The maximum mercury concentration was observed on the territory of the plant, where a focus of strongly salted and brackish formations of chloride and alkaline composition was found. From there, the pollution aureole spreads to the west, crosses the highway, and deviates to the north, covering the territory of the Pavlodar state farm.

Throughout the observations from 1997 to 2020, despite a general decrease in mercury concentration over time, periods of rising pollution levels were recorded, which subsequently subsided. Evidently, this process is associated with periodic rises of the groundwater level, flooding, and waterlogging. Despite the relatively low mercury content in the westerly direction, movement of surface mercury pollution toward the right bank of the Irtysh has been traced. Individual spots within the floodplain aureole contour are characterized by a high level of contamination that remains stable with depth (Figure 3).

Fig. 3. Mercury distribution on a site near the Irtysh River

Analysis of the nature of mercury migration depending on soil type showed that chernozem soil can bind and retain mercury compounds. It is this humus-rich soil type that was found in the area of the Irtysh floodplain, where the minimum mercury concentrations were recorded. A higher mercury concentration was found in areas with steppe chestnut soils containing significantly lower amounts of humus. Consequently, these low-humus soils do not have the ability to securely bind mercury, which therefore penetrates to depth together with filtering waters. Thus, in soils with low humus content, heavy metals are likely to enter groundwater, with consequent mercury migration to the Irtysh floodplain. Soil filtration rates near the river amounted to 18.2-35.2 cm/day. With distance from the river, the filtration coefficient decreased to 16.8-32.9 cm/day, and amounted to 11.5-14.3 cm/day in the territory of maximum contamination. This indicates a high rate of mercury migration with groundwater near large water sources if mercury enters such a site from adjacent areas.

Cation-anion soil analysis revealed a predominance of Cl⁻, HCO₃⁻, and SO₄²⁻ ions in this territory. The degree of mineralization ranged from 0.73 to 17.1 g/dm³. Mercury concentration in soils was directly dependent on the degree of their mineralization (r = 0.59, p ≤ 0.05).
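The correlation just cited can be reproduced procedurally. The sketch below uses synthetic stand-in values (not the study's well data) simply to show the computation behind a Pearson r and p-value of this kind:

```python
import numpy as np
from scipy import stats

# Synthetic (mineralization g/dm^3, soil Hg mg/kg) pairs standing in for the 87 well samples
rng = np.random.default_rng(0)
mineralization = rng.uniform(0.73, 17.1, 87)          # observed mineralization range
mercury = 0.05 * mineralization + rng.normal(0.0, 0.4, 87) + 0.5  # hypothetical relationship

r, p = stats.pearsonr(mineralization, mercury)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # the study reports r = 0.59, p <= 0.05
```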
It can be concluded that mercury forms complex compounds by interacting with the mineral component. Such compounds are durable and resistant and contribute to the desorption of mobile mercury forms from the surface of soil particles. We conducted a regression analysis and derived the formula describing this dependence: y = 0.3994x + 0.161, with a calculated approximation value of 0.16 (Figure 4).

Fig. 4. Ratio of mercury concentration and mineralization level

HCO₃⁻, SO₄²⁻, and Cl⁻ ions were found at the sites in the following combinations: HCO₃⁻; SO₄²⁻ and Cl⁻; SO₄²⁻ and Cl⁻; HCO₃⁻, SO₄²⁻, and Cl⁻; Cl⁻. A statistically reliable linear relationship between the level of mercury accumulation and the content of various ions in soils, r = 0.81 (p ≤ 0.05), was revealed (Figure 5). As is known, mercury is a strong complex former. HCO₃⁻ and Cl⁻ ions have the property of forming complex compounds with mercury, so they bind it into soluble compounds. The calculated linear regression equation is as follows: y = 0.943x + 0.1227, with an approximation value R² = 0.9696.

The analysis of toxic substance accumulation dynamics from 2010 to 2018 showed that accumulation is higher in years with an increased concentration of chloride ions in the soil cover and lower in years with a predominant content of HCO₃⁻ and SO₄²⁻ ions (Figure 7). Chloride ions contribute to mercury release and form more stable complexes than other ions, contributing to the retention of mercury in soils.

Fig. 7. Mercury accumulation in soils in the presence of various ions

Analysis of the amount of mercury in surface waters and bottom sediments of the Irtysh River found no exceedances of the maximum permissible values. Surface water analysis was carried out along the course of the Irtysh River, starting from Maly Atmas village, through the city of Omsk, and ending below Solyanoe lake. The mercury content over all the ranges was approximately the same, at 0.0001-0.0002 mg/l. Despite the absence of deviations from the norms, the mercury content in bottom sediments differed significantly between zones (r = 0.52, p ≤ 0.05). In the immediate vicinity of the pollution source, the gross mercury content averaged 0.037 mg/kg; near Beregovoe village, near Bolshoy Atmas village, and below Solyanoe lake, 0.020 mg/kg; below Cherlak village and near Verkh-Ilinka village, 0.015 mg/kg. The calculated variation coefficient showed no significant variation in this indicator across years.

Mercury content in the tissues of wheat growing at the study sites correlated with the mercury content in soils: agricultural plants can accumulate mercury in significant quantities. The accumulation factor, determined as the ratio of mercury content in wheat tissues to that in soil, amounted to 1.8-2.4. Mercury concentration in plants ranged from 0.0025 to 0.038 depending on the location of growth. A relatively low mercury concentration was found in the tissues of "Pamyati Azieva" wheat on the territory adjacent to the chernozem-rich Irtysh floodplain. A comparative analysis of mercury accumulation in the stems, leaves, and roots of wheat found predominant accumulation in the roots, about 60% of the total mercury accumulation across all tissues of the plant (r = 0.52, p ≤ 0.05). Similar results were found in studies by Chinese scientists investigating mercury content in spinach [2].

Conclusion

There is a clear trend of mercury content decreasing in soils with depth.
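A minimal sketch of the least-squares fitting and the accumulation-factor calculation described above; all numeric inputs are synthetic placeholders, and `fit_line` is an illustrative helper, not code from the study:

```python
import numpy as np

def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b, plus R^2, as in the regression analysis above."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    r2 = 1 - resid.var() / y.var()
    return a, b, r2

# Synthetic stand-ins for (total ion content, soil Hg); the study reports
# y = 0.943x + 0.1227 with R^2 = 0.9696 for the ion-content relationship.
rng = np.random.default_rng(1)
ions = rng.uniform(0.1, 2.0, 40)
hg = 0.943 * ions + 0.1227 + rng.normal(0, 0.05, 40)
print(fit_line(ions, hg))

# Accumulation factor: Hg in wheat tissue relative to Hg in soil (reported range 1.8-2.4).
# The two concentrations below are hypothetical example values.
soil_hg, wheat_hg = 0.40, 0.80
print("accumulation factor:", wheat_hg / soil_hg)  # -> 2.0
```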
Mercury migrates unevenly to the underlying layers, but it is possible to forecast the level and rate of migration from the cation-anion composition and the degree of soil mineralization. High soil mineralization contributes to an increase in the amount and rate of mercury penetration into deep soil layers. The average mercury content in soil is higher when chloride ions are present than when sulfate and carbonate ions are present, irrespective of the expected level of contamination from the source. A linear dependence of mercury concentration growth on the presence of sulfate, bicarbonate, and chloride ions in soils was also revealed. Mercury's ability to migrate with groundwater to the nearest water source depends on the soil type: humus-enriched soils hold mercury more firmly and prevent its migration to plant organisms and groundwater. The example of the studied plant object, the wheat variety "Pamyati Azieva", shows that mercury readily penetrates agricultural plants and accumulates in them.
Psychosocial factors affecting sleep misperception in middle-aged community-dwelling adults

Sleep misperception has long been a major issue in the field of insomnia research. Most studies of sleep misperception examine sleep underestimation by comparing the results of polysomnography conducted in a laboratory environment with patients' sleep diary entries. We aimed to investigate psychosocial characteristics of adults who underestimated or overestimated sleep time in a nonclinical, middle-aged community-dwelling population. We collected one week of sleep data with wrist-worn accelerometers. We used egocentric social network analysis to analyze the effects of psychosocial factors. Among 4,060 study participants, 922 completed the accelerometer substudy. Underestimation was defined as an accelerometer-measured sleep time ≥ 6 h and a subjective sleep time < 6 h. Overestimation was defined as an objective sleep time < 6 h and a subjective sleep time ≥ 6 h. Psychosocial characteristics of the sleep misperception groups were evaluated using multivariate regression analysis. A total of 47 participants underestimated sleep time, and 420 overestimated sleep time. Regression analysis revealed that female sex, living with a spouse, economic satisfaction, and bridging potential had protective effects against sleep underestimation. Blame from a spouse was associated with a 3.8-times higher risk of underestimation relative to the control group (p = 0.002). In men, discussing concerns with a spouse had a protective effect against underestimation (p < 0.001). Economic satisfaction, social network-based intimacy, and support from a spouse were associated with overestimation in women. In men, social network-based intimacy was also associated with overestimation (p < 0.001). We found that social relationship quality was related to both sleep overestimation and underestimation, and this association was more marked in women. Good social relationships may have positive effects on sleep misperception via attenuation of negative emotional reactions and effects on emotional regulation.

Introduction

Insomnia disorder is a common mental health problem with a worldwide prevalence of 20% to 40%. This condition can co-occur with psychiatric disorders (e.g., depression, cognitive dysfunction, and impairment of attention and executive function) and medical disorders (e.g., cardiovascular disease) [1-5]. Insomnia has been defined as subjective discomfort associated with sleep initiation or maintenance experienced by an individual. Objective measures of sleep, like polysomnography (PSG) or actigraphy, are not always used in the evaluation of insomnia, because insomnia disorder diagnosis and treatment outcomes are determined using subjective measures. Use of objective measures of a disease is always important; however, whether objective or subjective measures are more important in the study of insomnia remains controversial [6]. Results using subjective sleep quality or quantity measures do not always coincide with those of objectively measured sleep [7-9].

Sleep misperception is the discrepancy between estimates of subjectively reported sleep time and objectively measured sleep time. In patients with insomnia, it usually presents as an underestimation of total sleep time or an overestimation of wake time after sleep onset or of sleep onset latency [10]. Sleep misperception is a very common phenomenon that can occur in >25% of primary insomnia patients [11,12].
Sleep misperception is important because it has a high prevalence and may play a critical role in the progression of insomnia [13]. Harvey et al. suggested that the tendency to misperceive sleep time causes an individual to mistakenly believe they did not have sufficient sleep the previous night [13]. This increases worry and anxiety about sleep quality, and the resulting anxious state is associated with excessive attention to factors that reduce sleep time. The heightened anxiety and worry, in turn, worsen insomnia by facilitating the detection of meaningless sensations or cues [14]. Given that this cycle of misperceived sleep time and increased anxiety can perpetuate insomnia, it is essential to understand and respond to the characteristics of sleep misperception to improve insomnia treatment.

There are no clearly established theories about the mechanisms of sleep misperception. However, previous studies have found several factors related to sleep misperception in patients with insomnia. Most are internal factors (e.g., psychiatric/medical comorbidities or subclinical psychological problems) that can cause mismatches between perceived and measured sleep times [15,16]. Insomnia is also affected by social factors, such as marital status, social support, and social connectedness [17-19]. Unfavorable social factors can act as psychological stressors and cause insomnia, regardless of whether an individual has depression or anxiety [20]. Studies of sleep misperception have not included these social factors [20,21].

Most sleep misperception studies have only included patients with insomnia who underestimate their sleep time [7,12,22,23]. In addition, the objective sleep measurements in all of these studies were made with PSG, which has limitations in terms of accessibility and familiarity, partly because PSG is applied as an objective criterion in an experimental setting. Given that sleep misperception is a very common problem, it is likely to be prevalent in the general population as well as in patients with insomnia [24]. Sleep misperception can also present as sleep overestimation [25,26]. Investigation of sleep overestimation may be helpful to alleviate negative consequences associated with errors in subjective sleep estimates.

Social network analysis can be used for objective and systematic identification of social factors. This structural analysis method focuses on relationship research and is used to derive characteristics of social networks by measuring information about individual relationships. Egocentric network analysis is a questionnaire-based method used to study these individual network characteristics. Social network size and bridging potential are the main variables examined using this method. Social network size is the absolute number of people with whom an individual most often discusses important issues. Bridging potential refers to ties that connect otherwise unconnected parties, which increases independence in everyday social life [27,28]. No studies have looked at the effects of social networks on sleep, but some studies have examined the effects of social networks on severe mental illness [29,30].

We investigated the social and psychological characteristics of people who underestimated or overestimated sleep in the Cardiovascular and Metabolic Diseases Etiology Research Center (CMERC) cohort, which consisted of nonclinical, middle-aged community-dwelling individuals [31]. We collected one week of objectively measured sleep data using accelerometers instead of PSG.
To investigate psychosocial factors, we used egocentric social network analysis to systematically assess individuals' qualitative social characteristics (e.g., perceived social support and relationship problems) [32].

Study population and design

The CMERC cohort was developed to identify novel risk factors and to develop evidence-based prevention strategies for cardiovascular and metabolic diseases in Korea. Data were collected from the cohort from July 2013 to June 2018 [31]. The cohort consisted of community-dwelling middle-aged adults, excluding those with cardiovascular disease; a history of myocardial infarction, heart failure, stroke, or transient ischemic infarction; a cancer diagnosis in the previous 2 years; or current treatment for any of the above conditions. Adults aged 30 to 64 years who had resided at their current residence for at least 8 months, had no plans to move out of the study area in the next 2 years, and who could articulate their intention regarding study participation were eligible. The study protocols were approved by the Institutional Review Boards of Severance Hospital, Yonsei University Health System, Seoul, Korea (4-2013-0661). Written informed consent was obtained from all participants before baseline measurements were taken. The baseline measures consisted of socio-demographic factors, medical history, health-related behaviors, psychological factors including depression (Beck Depression Inventory-II [BDI] score ≥ 14) [33] and quality of the marital relationship, social network and support, anthropometry, and body composition. Results of cardiovascular examinations, blood analysis, and urinalysis were also included.

Among the 4,060 participants, 1,626 participated in a sub-study assessing physical activity using wrist-worn accelerometers; 922 participants completed the study for 7 consecutive days. We excluded individuals who self-reported histories of psychotropic medication prescription (antidepressants or sedative-hypnotics that could affect participant sleep, N = 22). We also excluded individuals with extremely low accelerometer-recorded sleeping times (<2 h total during the 7 days) because of possible errors in machine operation (N = 23) [11]. Among the excluded participants, 10 were in both groups. Data from 887 participants were included in the final analysis.

Sleep characteristics

Self-reported sleep characteristics. For the cohort survey, all participants completed self-reported questionnaires about sleep characteristics during the past year. Based on the survey results, the average bed time, wake time, and total sleep time were collected for each participant. We defined the reported sleep time as the subjective total sleep time. Respondents also answered questions about how difficult it was to fall asleep and to maintain sleep during the last week. The respondents chose one of four answers: never, sometimes (1-2 days per week), often (3-4 days per week), and almost every day (≥5 days per week). For the analysis, we dichotomized the categories into 'yes' (often or almost every day) and 'no' (never or sometimes). To screen for obstructive sleep apnea, questions about participants' snoring patterns, daytime fatigue, and sleepiness while driving, based on the Berlin apnea questionnaire, were also included [34].

Accelerometer-measured sleep characteristics. Each CMERC participant wore a GENEActiv accelerometer (Activinsights Ltd., Kimbolton, UK) on their nondominant wrist day and night for 7 consecutive days.
Use of an accelerometer is a valid, cost-effective, and convenient method, and accelerometer-measured sleep parameters correspond closely to those obtained with PSG. In previous studies, sleep parameters such as sleep onset time and sleep duration measured using the GENEActiv were not significantly different from those measured using PSG [35,36]. The GENEActiv triaxial accelerometer recorded movement along three mutually perpendicular axes, environmental temperature, and light exposure, and was programmed to collect data at a frequency of 100 Hz. The raw accelerometer data were downloaded and scored in 1-min epoch files via post-processing software developed by the accelerometer manufacturer [37]. Time-to-bed, wake-up time, and total sleep time were determined based on metabolic equivalent task units, which represent the energy costs of an individual's physical activities. Signal vector magnitude measurement was used to indicate behavioral changes [38]. In our study, we focused on total sleep time and total bed time, which have most often been used in previous studies. We calculated sleep efficiency by dividing total sleep time by total bed time. To obtain measured total sleep time and sleep efficiency, we averaged the measured values over the 7 days and defined the results as objective total sleep time and objective sleep efficiency.

Definitions of sleep underestimation and overestimation. Previous sleep research has sought to determine the standard value of sufficient sleep, with most studies defining 6-7 hours as the minimum time for sufficient sleep. Following these studies, we defined the threshold for sleep misperception as 6 hours of objective and subjective sleep [39-41]. Among those with an objectively measured total sleep time ≥6 h, we divided participants into one of two groups: subjective total sleep time <6 h or ≥6 h. We defined the former group as the sleep underestimation group and the latter as a control group. We also divided the participants with an objectively measured total sleep time <6 h into one of two groups: subjective total sleep time ≥6 h or <6 h. The former group was defined as the sleep overestimation group and the latter as a control group.
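The group assignment just defined reduces to a simple rule on the pair of objective and subjective sleep times. A minimal sketch follows; the function names and return labels are illustrative, not the study's code:

```python
def classify(objective_h: float, subjective_h: float, threshold: float = 6.0) -> str:
    """Assign a participant to a misperception group using the 6-h cutoffs defined above."""
    if objective_h >= threshold:
        return "underestimation" if subjective_h < threshold else "control_long"
    return "overestimation" if subjective_h >= threshold else "control_short"

def sleep_efficiency(total_sleep_h: float, total_bed_h: float) -> float:
    """Objective sleep efficiency = total sleep time / total bed time (7-day averages)."""
    return total_sleep_h / total_bed_h

print(classify(6.8, 5.5))          # -> underestimation
print(classify(5.2, 7.0))          # -> overestimation
print(sleep_efficiency(6.3, 7.5))  # -> 0.84
```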
Psychosocial factors

Due to the nature of the cohort, which consisted mostly of middle-aged people, the social factors included marriage-related questions (e.g., marital status, relationship with spouse). All participants answered questions about social relationships, covering the egocentric social network and marriage variables. Egocentric social network analysis is a methodological approach used to understand the social structure, function, and composition of network ties around an individual [42]. Participants were asked to identify up to five non-spousal network members with whom they had most often discussed their important issues. The social network size was calculated as the sum of the spouse (0 or 1) and the number of non-spousal members (0 to 5). Participants were also asked how close they felt to each of the members (up to six) in their network, choosing among four categories: 'not close', 'somewhat close', 'close', and 'very close'. We calculated the average intimacy score and used it in the analysis as a proxy for relationship satisfaction. We also assessed bridging potential, which refers to the extent to which an individual is associated with people who are not directly connected to each other. We defined that a participant could act as a bridge in a network when he or she was connected to at least two individuals who otherwise were not connected to each other [43].

Marriage is a permanent relationship between two adults made by a social contract, and it has a variety of psychological effects on an individual's mental health [44,45]. Some studies have found that, for both sexes, married adults with more relationship problems tend to have more trouble with sleep [46]. Others have found that a stable relationship with a spouse independently correlates with better sleep quality and continuity in women [47]. We wanted to identify which marriage-related factors had different effects on sleep perception in men and women. In the cohort survey, the questionnaire about marital relationships covered leisure time (how often do you spend leisure time with your spouse?), worry (how often do you discuss your concerns with your spouse?), support (how often do you depend on your spouse in difficult situations?), and blame (how often does your spouse blame you?). All these questions had four choices ('often', 'sometimes', 'not very often', and 'not at all'). We dichotomized the answers into 'yes' (often, sometimes) and 'no' (not very often, not at all).

Statistical analyses

Because the effects of psychosocial factors on sleep differ by sex, we first compared the baseline characteristics of participants by sex. As the continuous variables did not follow a normal distribution, the Mann-Whitney test and χ²-tests were used to compare continuous and categorical variables, respectively, between the two sexes [48]. The analyses to identify factors associated with sleep overestimation and underestimation were also performed separately by sex. After a univariate regression analysis of each psychosocial variable, we performed a multivariate regression analysis. In addition to the psychosocial factors of primary interest (i.e., social network and marriage-related variables), age, educational status (high school or less), current smoking or drinking (yes or no), obesity (body mass index ≥ 25 vs. < 25) [49], depression (Beck Depression Inventory-II score ≥ 14) [33], and self-reported sleep disturbances, including a high risk of obstructive sleep apnea measured using the Berlin questionnaire and difficulty in sleep induction and maintenance, were also included in the analysis as covariates. These variables were previously reported to affect sleep misperception or were considered by the study investigators to be potentially significant independent variables [23]. All independent variables were entered into the model, and significantly associated factors were identified using backward elimination in the multivariate analyses. The maximum variance inflation factor was <2.5, so multicollinearity was not problematic. The data analyses were performed using SPSS version 25.0 software (SPSS Inc., Chicago, IL), and the threshold for statistical significance was set at p < 0.05.
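The egocentric measures described in this section (network size, mean intimacy, and the bridging-potential flag) can be computed from the questionnaire responses roughly as follows. The data structures and names below are hypothetical, chosen only to mirror the stated definitions:

```python
from itertools import combinations

def ego_network_measures(has_spouse: bool, alter_ties: dict, closeness: list) -> dict:
    """Egocentric measures as defined above (illustrative, not the study's code).

    alter_ties: adjacency between the ego's (up to five) non-spousal alters,
                e.g. {("A", "B"): True, ("A", "C"): False, ...}
    closeness:  per-member intimacy scores, 1 ('not close') to 4 ('very close')
    """
    alters = sorted({name for pair in alter_ties for name in pair})
    network_size = int(has_spouse) + len(alters)          # spouse (0/1) + non-spousal members
    mean_intimacy = sum(closeness) / len(closeness) if closeness else 0.0
    # Bridging potential: ego is tied to at least two alters not tied to each other
    bridging = any(not alter_ties.get((a, b), alter_ties.get((b, a), False))
                   for a, b in combinations(alters, 2))
    return {"size": network_size, "intimacy": mean_intimacy, "bridging": bridging}

print(ego_network_measures(True,
                           {("A", "B"): False, ("A", "C"): True, ("B", "C"): True},
                           closeness=[3, 4, 2, 3]))
# -> {'size': 4, 'intimacy': 3.0, 'bridging': True}  (A and B are unconnected)
```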
Results

The characteristics of the participants included in the final analysis are presented in Table 1. Sleep-related characteristics and psychosocial factors, including social network size and marital relationship, differed significantly between the two sexes. Of the 887 participants included in the analyses, 47 underestimated their sleep time. Thirty-four participants in the underestimation group were women, corresponding to 5.8% of all women. Thirteen male participants (4.3%) underestimated sleep time; this rate was not significantly different from the rate of underestimation among women (p = 0.367). A total of 420 participants overestimated sleep time; 247 of the women (42%) and 173 of the men (58%) overestimated. This sex-based difference was statistically significant (p < 0.001). The characteristics of those who underestimated and overestimated sleep time are presented in S1 and S2 Tables.

The factors significantly related to sleep underestimation are presented in Table 2; the complete results of the univariate and multivariate analyses are presented in S3 and S4 Tables. Psychosocial factors had significant effects on sleep underestimation, especially in women. Women who lived with a spouse were 78% less likely to underestimate sleep time. Satisfaction with household economic status was negatively related to sleep underestimation in women. Women who reported blame by their spouses were 3.84 times more likely to underestimate sleep time than women who did not. Women with bridging potential had a 65% lower chance of sleep underestimation than women without bridging potential. Men were less likely to underestimate sleep time when they discussed their concerns with their spouses.

The factors significantly associated with sleep overestimation are presented in Table 3. The effects of psychosocial factors on sleep overestimation were also more marked in women than in men. Intimacy with social network members had a positive association with sleep overestimation in both sexes. Unlike men, women who received support from their spouses in difficult situations were more likely to overestimate sleep time. Satisfaction with household economic status was also positively associated with sleep overestimation in women, but not in men. The complete results of the univariate and multivariate analyses of sleep overestimation are presented in S5 and S6 Tables.

Discussion

In this study, we evaluated relationships between sleep misperception and psychosocial factors in a middle-aged, nonclinical, community-dwelling cohort. Good social relationships were associated with both sleep underestimation and overestimation, and the associations were more marked in women than in men. Among the 364 participants who had an accelerometer-measured sleep time ≥6 h, 47 underestimated sleep time. The subjectively reported sleep time underestimated the accelerometer-measured sleep time by an average of 206 min in men and 96 min in women (S1 Table). Female sex, living with a spouse, satisfaction with economic status, and bridging potential between unconnected people had protective effects against sleep underestimation. However, even when living with a spouse, blame from that spouse was associated with a nearly four-fold increase in the risk of sleep underestimation. This indicates that relationship quality has a greater effect on sleep underestimation than simply being in a relationship. In men, sharing worries with a spouse also had a protective effect against sleep underestimation.

Previous studies have revealed mechanisms by which good social relationships may affect mental health [50]. First, social relationships affect network members by providing normative guidance on health-relevant behaviors, such as regular exercise; these social influences consequently have positive effects on mental health by exerting beneficial effects on individuals.
Second, integration in a social network is associated with a positive psychological state, such as a sense of belonging and recognition of self-worth. Third, social network participation modulates neuroendocrine responses to stressors, which in turn increases the capacity to protect oneself against distress. Considering that many life stressors are related to breaks in social relationships (e.g., divorce or the death of a loved one), emotional regulation is closely affected by social relationships as well as by an individual's own abilities. Close relationships with others can attenuate emotional distress through coregulation; during this process, close relationships help to maintain an optimal emotional state by regulating emotional arousal [51]. The findings of Harvey et al. suggest that emotional arousal level can affect sleep misperception regardless of an insomnia diagnosis [52]. Therefore, good social relationships could protect against sleep underestimation by downgrading excessive arousal, a key mechanism of chronic insomnia, through attenuating negative emotional reactions and aiding emotional regulation.

Social network bridging potential is defined by the ability to act as a link between two people not connected to each other. Individuals with bridging potential are usually believed to be independent, autonomous, and socially active [53]. Although bridging potential does not necessarily result in bridging behaviors, it is associated with good health (e.g., cognitive function and functional health), especially in older women [43]. Studies of older adults (>57 years of age) found that women have more bridging potential than men, including more non-kin bridging potential [53]. Cornwell suggested that, in older women, bridging potential boosts independence and control in daily social life. Our study of this middle-aged cohort did not find sex-based differences in bridging potential. Nevertheless, women had more non-family friends than men. Because this study did not perform a complete social network analysis and did not count the total number of bridges for each participant, it is possible that a sex-based difference in bridging potential was not revealed.

No studies have revealed the mechanisms by which bridging potential can affect sleep misperception. Because greater bridging potential indicates more social activity, thereby reflecting more social activity during the daytime, it may attenuate the effects of negative factors on sleep underestimation by diverting excessive attention away from negative emotions. Alternatively, through its association with psychological strengths (e.g., autonomy, independence, and self-regulation), bridging potential may reduce emotional arousal. The effects of social position on emotional regulation, arousal, and sleep perception should be examined in future studies.

Of the 523 participants who had an accelerometer-measured sleep time < 6 h, 420 overestimated their sleep time. The subjectively reported sleep time overestimated actual sleep time by an average of 122 min in women and 146 min in men (S2 Table). Sleep overestimation was also affected by good social relationships. In men, social network-based intimacy increased the tendency to overestimate sleep by about 1.9 times. In women, social network-based intimacy and support from a spouse were associated with an increased tendency to overestimate sleep (1.5 and 2.1 times, respectively).
These results suggest that, similar to the feeling of intimacy, support from a spouse in difficult situations is likely to serve as perceived social support. A sleep diary-based study of older adults found that perceived social support is associated with shorter sleep latency, irrespective of an insomnia diagnosis [54]. Similar to the results for the underestimation of sleep time discussed above, good social relationships had a positive effect on sleep time perception, and this effect was more marked in women.

Study results suggest that social relationships have greater effects on mental health in women than in men [50]. Compared with men, women are more likely to be affected by stressful, negative social network components; conversely, women are also more influenced by supportive components [54]. Women also tend to respond more sensitively than men to both positive and negative factors in a marriage [46,55]. Women with a highly critical spouse have a greater tendency to exhibit maladaptive coping behaviors and report poorer psychological adjustment [56]. Our findings likewise suggest that social relationships played a more important role in women than in men for both sleep underestimation and overestimation. Negative components of social relationships (e.g., criticism from a spouse) increased the risk of sleep underestimation, while positive components (e.g., feeling intimacy with network members and support from a spouse) affected sleep time perception in a positive direction.

The multivariate regression model of psychosocial factors showed that satisfaction with economic status was associated with reduced sleep underestimation and increased sleep overestimation in women. Economic status can also have gender-dependent effects on the relationship between social relationships and mental health [57]. Women in poorer economic conditions are more likely to provide their resources to others in the network than to receive help from the network, a behavior that is more likely to harm than to help them.

This study had several limitations. First, because the CMERC cohort study was not designed primarily for sleep research, the evaluation of sleep was limited. For indicators of objective sleep, we only obtained information about total sleep time, time in bed, and sleep efficiency; we could not obtain information on sleep onset latency or wake after sleep onset, which are also associated with sleep misperception [11]. For subjective sleep assessment, we only had information on participants' average sleep time and the reported difficulties with sleep initiation and maintenance. Most importantly, our data could not confirm whether a participant had been diagnosed with insomnia. Because of this lack of information, we were unable to determine the proportion of people diagnosed with insomnia among those with and without sleep misperception. If many insomnia patients were in the target population, sleep overestimation may have occurred when time spent lying down while awake was mistaken for sleeping time [58]. Second, as the accelerometer study was conducted after the CMERC cohort survey, the assessment of subjective sleep patterns and the objective accelerometer-based sleep assessments were not performed at the same time; further studies should employ a broader range of measurements to evaluate subjective and objective sleep characteristics.
Third, no information on sleep schedules during weekdays and weekends or on shift-work schedules was available. Fourth, because this study used cross-sectional data, the results cannot be used to infer causal relationships between variables. Fifth, other than marriage, no information about family-based social relationships was available.

Nevertheless, to our knowledge, this is the first study to investigate relationships between psychosocial factors and sleep misperception, including both sleep underestimation and overestimation, in a nonclinical, middle-aged, community-dwelling population. Previous studies characterized sleep misperception from the perspective of PSG findings [59-62]; those studies examined physiological aspects of sleep misperception (changes in the frequencies of electroencephalogram patterns or in cyclic alternating patterns). Insomnia disorder is significantly affected by psychosocial factors; therefore, as a type of insomnia disorder, sleep misperception can also be affected by psychosocial factors [63]. The positive effects of using a psychological approach to treat insomnia have been demonstrated in previous studies [64-66]. These results are important because they suggest that psychological intervention can be used to modify sleep misperception, and that modifying psychosocial factors may be a way to improve sleep underestimation. Sleep misperception is not limited to those with insomnia; it is a common phenomenon that can occur in the general population [24]. Considering that misperceived sleep escalates worry about having insufficient sleep and consequently worsens sleep disturbance, interventions for sleep misperception should be initiated at the prodromal phase [67].

Conclusion

In summary, we found that psychosocial factors were related to sleep misperception in both sexes in a nonclinical, middle-aged cohort study population. Good social relationships were associated with both sleep underestimation and overestimation, and the associations were more marked in women than in men. Studies that include various kinds of social interactions as potential covariates are needed to identify other psychosocial effects on sleep misperception.

Supporting information

S1
Goldstone's Theorem and Hamiltonian of Multi-galileon Modified Gravity

The galileon model was recently proposed to locally describe a class of modified gravity theories, including the braneworld DGP model. We discuss spontaneous symmetry breaking of the self-accelerating branch in a multi-galileon theory with internal global symmetries. We show that a modified version of Goldstone's theorem is applicable to the symmetry breaking pattern and discuss its implications. We also derive the Hamiltonian of a general multi-galileon theory and discuss its implications.

I. INTRODUCTION

The DGP model [1,2] is a 5-dimensional braneworld theory that non-trivially modifies General Relativity (GR) in the infrared. Nevertheless, at sub-crossover (sub-Hubble-length) scales many of its properties can be captured by a 4D (boundary) effective theory [3,4]. This effective theory amounts to GR coupled to a scalar field π whose equation of motion contains only second derivatives and is invariant under the Galilean shift π → π + a_µ x^µ + b, with a_µ and b constant. This scalar is related to the bending of the DGP brane in the bulk and has been termed the galileon [5].

As a ghost instability has been identified on the phenomenologically interesting self-accelerating branch of the DGP model [6], which can also easily be seen in the local galileon approximation [3,4], attempts have been made to generalize the DGP galileon description to produce a healthy modified gravity theory [5,7-16]. In [5], the authors wrote down the most general single-galileon Lagrangian. Remarkably, there are only d + 1 possible galileon terms in d-dimensional spacetime, and ghost-free self-accelerating background solutions have been shown to exist in a generalized galileon theory. However, a few phenomenologically challenging problems have also been identified in the single galileon theory, such as Cherenkov-like radiation in the solar system, superluminal propagation far away from a matter source, and very low strong coupling scales [5]. It turns out that these problems can be avoided by adding another galileon (in a bi-galileon theory), meaning the theory space of the single galileon model is actually too small [13]. A local bi-galileon description is also what one might expect from co-dimension 2 braneworld models [12,15,17], as there are generally two brane-bending directions.

One would like to generalize the galileon description to have even more degrees of freedom [11,14-16]. To avoid a proliferation of possible terms in the theory, we can impose internal (global) symmetries on the multiple galileons, so that they form some representation of a group [14], π = (π_1, ..., π_N). That is, the multi-galileon Lagrangian is required to be invariant under the internal transformation π_i → R_i^j π_j, where R_i^j is the representation matrix of a certain group and summation over repeated group indices is implied. Notice that the internal symmetry could originate from braneworld scenarios, as has been identified for the SO(N) fundamental representation [14,15]. For other interesting field-theoretical and cosmological implications of the galileon theory, see [18-22].

In [14], we wrote down all possible multi-galileon terms consistent with the fundamental and adjoint representations of SO(N) and SU(N), and looked for soliton solutions in multi-galileon theories; we did not consider coupling the symmetric multi-galileon to gravity.
In this paper, we put the symmetric multi-galileon in the context of modified gravity. In Section II, we venture a tentative coupling, but we want to emphasize that the main results of this paper are insensitive to this explicit coupling. In Section III, we discuss the spontaneous symmetry breaking of the symmetric multi-galileon theory on a self-accelerating background. Starting from an example, we build up a new version of Goldstone's theorem in symmetric multi-galileon theories: for every broken continuous symmetry, a canonical kinetic degree of freedom is lost. In Section IV, we derive the Hamiltonian formulation of a general multi-galileon theory (with or without internal symmetry) and find that it is not bounded below. We speculate whether this might be overcome in more complete theories.

II. MULTI-GALILEON MODIFIED GRAVITY

In the original galileon model [5], the galileon is coupled to the graviton mainly via the kinetic mixing h_µν = h̃_µν + 2πη_µν, where h_µν and h̃_µν are the Jordan- and Einstein-frame (perturbative) metrics; π's contribution to the energy-momentum tensor, i.e., its direct influence on the geometry, is negligible. So in a sense galileon modified gravity is a "genuine" infrared modification of General Relativity, differing from models such as quintessence [23], which have a significant contribution to the energy-momentum tensor. In this paper, we stick to this paradigm and tentatively propose the multi-galileon's coupling to gravity as in Eq. (3), where T ≡ η^µν T_µν and L_π is the multi-galileon Lagrangian.

For a general multi-galileon theory without internal symmetries, we might want to redefine π_1' = π_1 + ... + π_N to simplify the coupling, while keeping the structure of L_π unchanged. But this is usually not feasible in symmetric multi-galileon models. For example, in the case of the SO(N) fundamental representation, π = (π_1, π_2, ..., π_N) cannot be linked to π' = (π_1', π_2', ..., π_N') by an internal SO(N) transformation. (Note that the SO(N)-invariant coupling P(π²)T, with P(π²) a general function of π_iπ_i, has been considered in [16], where the authors found gradient instability as well as superluminal excitations for the spherically symmetric background.)

We could argue that, from the viewpoint of braneworld scenarios, the coupling (3) (instead of, say, π_1 T) might be what one expects for symmetric multi-galileon models. In a braneworld setup, the multiple galileon fields living on a brane usually descend from the extra-dimensional coordinates as functions of the 4D brane volume coordinates [9,15]. Since the symmetric multiple galileon fields enjoy some internal symmetry, the extra-dimensional coordinates must have the corresponding symmetry at least near the brane. As the near-brane geometry is expected to play a role in determining the coupling to gravity, we may expect the different galileons to couple to gravity on an equal or similar basis.

At distances and time scales shorter than the Hubble length, the Friedmann-Robertson-Walker metric can be considered as a perturbation above Minkowski spacetime. Due to the kinetic mixing (3), the cosmic profile of the multi-galileon within the Hubble length takes a form [5] in which Σπ ≡ π_1 + ... + π_N, H is the actual Hubble parameter for a given source T_µν, and H_gr is the hypothetical Hubble parameter in GR with the same T_µν as the source. Thus the cosmic background configuration of Σπ is given by −(1/4)(H² − H²_gr) x^µ x_µ.
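As a quick worked check of why such quadratic profiles are natural here (a reconstruction of a step the text states only in words, using the paper's notation), note that a quadratic configuration has constant second derivatives, which is what allows it to solve the purely derivative galileon equations of motion:

\[
\bar\pi_i = -\tfrac{1}{4}\,\bar k_i\, x^\mu x_\mu
\;\Longrightarrow\;
\partial_\nu \bar\pi_i = -\tfrac{1}{2}\,\bar k_i\, x_\nu ,
\qquad
\partial_\mu \partial_\nu \bar\pi_i = -\tfrac{1}{2}\,\bar k_i\, \eta_{\mu\nu} ,
\qquad
\Box \bar\pi_i = -2\,\bar k_i .
\]

In particular, Σπ̄ = −(1/4)(Σk̄) x^µ x_µ, so comparison with the profile above fixes Σk̄ = H² − H²_gr.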
Assuming all the fields have similar coordinate dependence, the vacuum solution is given by π̄_i = −(1/4) k̄_i x^µ x_µ.

III. SPONTANEOUS SYMMETRY BREAKING AND GOLDSTONE'S THEOREM

In this section we will see that symmetric multi-galileon modified gravity exhibits spontaneous breaking of symmetries on some vacuum solutions, and that for every broken continuous symmetry the theory loses a canonical kinetic term, which resembles the usual Goldstone's theorem in a scalar field theory. We will also discuss the implications of this modified Goldstone's theorem.

A. An Example

Let us first see a simple example of this theorem: spontaneous breaking of the SO(N) fundamental representation. The most general SO(N) multi-galileon Lagrangian in the fundamental representation is given by Eq. (6) [14], where α and β are free parameters. Varying the action (3) with respect to π_i, we get the equations of motion (7).

We would like to see whether there is any self-accelerating background (or vacuum) in this theory. By a self-accelerating background, we refer to the case where the universe has an (at least approximately) de Sitter solution without the support of a cosmological constant, i.e., the case where π̄_i = −(1/4) k̄_i x^µ x_µ with Σk̄ = H² > 0 and T = 0 is a solution to the equations of motion (7). Substituting this profile into the equations of motion yields algebraic equations for k̄_i, which reduce to k̄_i = 0 or k̄_j k̄_j = −α/3β (Eq. (10)). The former solution corresponds to Minkowski spacetime, while the latter can be a self-accelerating solution if α/β < 0 and Σk̄ = H² > 0, which we assume to be satisfied. Note that (10) is not an isolated solution; instead, it is a continuum of possible solutions.

Then we would like to see whether the self-accelerating solution can be free of ghosts, i.e., negative canonical kinetic terms. To this end, we expand the Lagrangian (6) about the background (10), i.e., we perform the transformation π_i → π̄_i + π_i and neglect the background part of the Lagrangian (Eq. (11)). Requiring the self-accelerating background to be ghost free gives rise to β > 0, so the conditions for a ghost-free self-accelerating solution are β > 0, α < 0, and Σk̄ = H² > 0 (12). Therefore, in the SO(N) (fundamental) multi-galileon theory, when the self-accelerating branch is ghost free, the Minkowski branch is inevitably haunted by ghosts, and vice versa. Also, we see that there is just one canonical kinetic term on the self-accelerating background, while on the Minkowski background there are N of them.[1]

All of this becomes apparent from the point of view of spontaneous symmetry breaking. To facilitate this approach, we utilize the action polynomial L(k) introduced in [13] (Eq. (13)),[2] where π is evaluated at π_i = −k_i x^µ x_µ/4. By explicit calculation [13], we have shown that the extrema of L(k) correspond to cosmic background solutions; also, the coefficient matrix of the canonical kinetic terms of the N-galileon about a background (k_i = k̄_i) is equal to the Hessian of L(k) about that background (Eq. (15)), meaning that among the extrema only the (local) minima are ghost-free. These properties of L(k) allow us to treat L(k) as a kind of effective potential for finding ghost-free vacua. As an aside, note that in canonical field theories, such as a scalar field theory, the Hamiltonian provides an energy function whose minimization yields the stable vacua. However, due to the non-trivial vacuum configurations and the higher-derivative nature of multi-galileon theories, their Hamiltonian formulation does not give rise to such a clear energy function for the background configuration π_i = −k_i x^µ x_µ/4; see Section IV for details.

Now, we can easily recover the results for the SO(N) multi-galileon vacuum solutions using L(k). The extrema of L(k) give rise to the Minkowski background k̄_i = 0 and the self-accelerating background k̄_j k̄_j = −α/3β. The background k̄_j k̄_j = −α/3β is a minimum of L(k) only if α < 0 and β > 0. Also, since the continuum k̄_j k̄_j = −α/3β is a minimum, topologically k̄_i = 0 cannot be a minimum; thus, for the same set of parameters, only one of the two backgrounds can be stable. The Hessian of L(k) about the self-accelerating background is given by K_ij(k̄) = 12β k̄_i k̄_j, which has only one non-zero eigenvalue, so there is just one canonical kinetic term on this background.

[1] The same result was also reached in [15] as we were preparing this paper.
[2] Note that here we define a slightly different L(k) from that defined in our previous paper. This is because here we write L_π ∼ −∂π∂π∂∂π..., while in [13] we use L_π ∼ π∂∂ππ∂∂π.... These two forms are related by integration by parts in the action, so they are physically equivalent. However, when π is evaluated at π = −k_i x^µ x_µ/4, total derivatives also give rise to terms proportional to x^µ x_µ, so the two definitions differ by a factor of −2 at π = −k_i x^µ x_µ/4.
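The eigenvalue structure just stated can be checked numerically. The explicit polynomial below, L(k) = α(k·k) + (3β/2)(k·k)², is an assumed normalization chosen to reproduce the quoted extrema k̄·k̄ = −α/3β and Hessian K_ij(k̄) = 12β k̄_i k̄_j; the paper's Eq. (13) may differ by overall factors:

```python
import numpy as np

# Assumed action polynomial consistent with the stated extrema and Hessian:
#   L(k) = alpha * (k.k) + (3*beta/2) * (k.k)**2
alpha, beta, N = -3.0, 1.0, 4   # ghost-free regime: alpha < 0, beta > 0

def hessian(k):
    """d^2L/dk_i dk_j = [2*alpha + 6*beta*(k.k)] * delta_ij + 12*beta * k_i k_j."""
    kk = k @ k
    return (2 * alpha + 6 * beta * kk) * np.eye(N) + 12 * beta * np.outer(k, k)

# A point on the self-accelerating continuum k.k = -alpha/(3*beta):
kbar = np.zeros(N)
kbar[0] = np.sqrt(-alpha / (3 * beta))

print(np.round(np.linalg.eigvalsh(hessian(kbar)), 10))
# -> [0, 0, 0, 12]: one positive eigenvalue (the radial mode) and N-1 = 3 zeros,
#    one for each spontaneously broken SO(N) rotation, as the theorem states.

print(np.linalg.eigvalsh(hessian(np.zeros(N))))
# -> all equal to 2*alpha < 0: the Minkowski extremum is ghostly for these parameters.
```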
Now we can easily recover the results for the SO(N) multi-galileon vacuum solutions using L(k). The extrema of L(k) give rise to the Minkowski background k̄_i = 0 and the self-accelerating background k̄_j k̄_j = −α/3β. The background k̄_j k̄_j = −α/3β is a minimum of L(k) only if α < 0 and β > 0. Also, since the continuum k̄_j k̄_j = −α/3β is a minimum, topologically k̄_i = 0 cannot be a minimum; thus for the same set of parameters only one of the two backgrounds can be stable. The Hessian of L(k) about the self-accelerating background is given by K_ij(k̄) = 12β k̄_i k̄_j, which has only one non-zero eigenvalue, so there is just one canonical kinetic term on this background. Indeed, we might visualize L(k) with a ghost free self-accelerating background as a "Mexican hat" (Fig. 1). The trough of this Mexican hat is an (N−1)-sphere, respecting SO(N). An (N−1)-sphere (or SO(N)) has N(N−1)/2 independent rotational symmetries. The vacuum solution occupies one point on the trough and thus only respects an SO(N−1) subgroup, which leaves a sub-(N−2)-sphere still rotationally symmetric and breaks N−1 rotational symmetries. Only the radial direction around the trough accommodates non-trivial "oscillations", reflecting the presence of only one canonical kinetic term. The N−1 flat directions represent the loss of N−1 canonical kinetic terms.

¹ The same result was also reached in [15] as we were preparing this paper.
² Note that here we define a slightly different L(k) from that defined in our previous paper. This is because here we write L_π ∼ −∂π∂π∂∂π..., while in [13] we use L_π ∼ π∂∂π π∂∂π.... These two forms are related by integration by parts in the action, so they are physically equivalent. However, when π is evaluated at π_i = −k_i x_μ x^μ/4, total derivatives also give rise to terms proportional to x_μ x^μ, so the two definitions differ by a factor of −2 there.

B. General Proof

This is of course reminiscent of Goldstone's theorem for a canonical scalar field theory with a potential. Here we are able to prove an analogous theorem for a symmetric multi-galileon theory with an arbitrary internal group: the number of canonical kinetic terms that are lost is equal to the number of spontaneously broken symmetries, which in turn equals the dimension of the total symmetry group minus that of the unbroken subgroup. Again it is sufficient to use the action polynomial L(k) to prove this.

Let k_i = k̄_i be a (local) minimum of L(k), so that it is a sensible background about which to expand the theory. Since k_i = k̄_i is a minimum, K_ij(k̄) should have only non-negative eigenvalues. The eigenvectors with positive eigenvalues correspond to the canonical kinetic terms, while the eigenvectors with zero eigenvalues correspond to the degrees of freedom without canonical kinetic terms. To prove the theorem, we must show that every spontaneously broken symmetry gives rise to an independent zero-eigenvalued eigenvector. Under an infinitesimal group action, for the configuration π_i = −k_i x_μ x^μ/4, we have

    k_i → k_i + ε Δ_i(k),    (16)

where ε is an infinitesimal parameter. Since L_π is invariant under a group transformation, from (13) we infer that L(k) is also invariant. So we have

    L(k_i + ε Δ_i(k)) = L(k_i),    (17)

which leads to the identity

    Δ_j(k) ∂L(k)/∂k_j = 0.    (18)

Differentiating it with respect to k_i and evaluating it at the vacuum of the theory (k_i = k̄_i) gives

    K_ij(k̄) Δ_j(k̄) = 0    (19)

(the term involving ∂Δ_j/∂k_i drops out because ∂L/∂k_j vanishes at the extremum), where K_ij(k) is the coefficient matrix of the canonical kinetic terms, as defined in (15).
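Anticipating the conclusion drawn just below, the identity (19) can be checked numerically (our sketch, not the paper's): for SO(N) broken to SO(N−1), every broken direction Δ = ω k̄ with ω an antisymmetric generator is annihilated by K_ij(k̄) = 12β k̄_i k̄_j, since k̄ · (ω k̄) = 0:

```python
# Numerical illustration of K_ij(kbar) Delta_j(kbar) = 0 for broken directions.
import numpy as np

rng = np.random.default_rng(0)
N, beta = 4, 0.7
kbar = rng.normal(size=N)                 # a point on the vacuum continuum

A = rng.normal(size=(N, N))
omega = A - A.T                           # a generic so(N) generator
Delta = omega @ kbar                      # broken-symmetry direction, nonzero
K = 12 * beta * np.outer(kbar, kbar)      # kinetic coefficient matrix at kbar

print(np.allclose(K @ Delta, 0))          # True: Delta is a zero mode
# K has exactly one nonzero eigenvalue, 12*beta*|kbar|^2:
print(np.round(np.linalg.eigvalsh(K), 6))
```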
Now, if the transformation (16) belongs to the unbroken subgroup, the vacuum k_i = k̄_i is invariant under the transformation and the relation (19) is trivial, as we have Δ_j(k̄) = 0. If the transformation (16) belongs to a spontaneously broken symmetry, the vacuum is shifted along the flat directions of the continuous minimum of L(k), and so we have Δ_j(k̄) ≠ 0. In this case, K_ij(k̄) has a zero eigenvalue, and the eigenvector Δ_j(k̄), or Δ_j(π̄), is the degree of freedom that loses its canonical kinetic term.

C. Implications

In multi-galileon theories, due to the presence of higher-order kinetic terms, the absence of a canonical kinetic term does not necessarily mean the loss of a dynamical degree of freedom. Taking the SO(N) multi-galileon theory as an example, by integration by parts the cubic term of the Lagrangian (11) above the self-accelerating background can be cast in a first-order-in-time form (20), where a, b, c, d denote spatial indices (rather than group indices), and the theory has N cubic kinetic terms. The conjugate momentum of each π_i(x, t) is non-vanishing and the canonical phase space is non-trivial for all N degrees of freedom. So there are still N apparent dynamical degrees of freedom on the self-accelerating background.

However, since a mode without a canonical kinetic term can be regarded as infinitely strongly coupled, and the Vainshtein mechanism takes effect in galileon models, these modes would screen themselves from the others. We shall demonstrate this schematically. Suppose π_1 loses its canonical kinetic term around the self-accelerating vacuum and consider a slightly different background where the Lagrangian is, schematically (in units where the cubic self-coupling is set to one),

    L = ε ∂_μπ_1 ∂^μπ_1 + ∂_μπ_1 ∂^μπ_1 □π_1 + π_1 T + ...,    (21)

with ... standing for other interactions and modes. To see the genuine dynamics of this mode, we canonically normalize it, π̂_1 ≡ √ε π_1, which gives rise to

    L = ∂_μπ̂_1 ∂^μπ̂_1 + ε^{−3/2} ∂_μπ̂_1 ∂^μπ̂_1 □π̂_1 + ε^{−1/2} π̂_1 T + ....    (22)

We can recover the perturbative Lagrangian around the vacuum by taking the limit ε → 0, where we can clearly see that π̂_1 is infinitely strongly coupled. Now we can calculate the Vainshtein radius of a spherical source (of mass M_s) for π_1 (see e.g. [13]); in these units it scales as

    r_V ∼ (M_s/ε²)^{1/3}.    (23)

It goes to infinity when ε goes to 0, meaning that this mode would be self-screened at infinitely large distances, and thus is effectively non-dynamical on the vacuum, at least in terms of weak gravitational interactions. Nevertheless, as mentioned above, although some modes in the galileon multiplet lose their canonical kinetic terms on a self-accelerating vacuum, these modes can re-acquire their quadratic kinetic terms on backgrounds with matter sources. Therefore, around a generic background such as in the solar system, these modes are indeed not strongly coupled.

As an aside, if there is a cosmological constant, the multi-galileon internal symmetry will be explicitly broken, in which case there is generally no loss of canonical kinetic terms. For the example of the SO(2) fundamental representation, the action polynomial is deformed into a tilted Mexican hat with only a unique minimum.

When calculating the leading corrections to GR, thanks to the Vainshtein effect, we may simply exclude these inert modes. So the spontaneous symmetry breaking and the subsequent freeze-out of some dynamical modes could be reflected in tests of modifications of the gravitational force, as the leading corrections are encoded in the canonical kinetic terms. We again take the SO(N) multi-galileon as an example. First, note that the one-particle exchange amplitude between two conserved sources T_μν and T′_μν in GR is schematically given by

    A_GR ∼ (1/M²_Pl) T_μν (1/□) T′^μν.    (24)

For simplicity, we assume k̄_i ∼ k̄, so we have N k̄² ∼ −α/3β.
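The following numerical check (ours) anticipates the leading corrections quoted next: on the broken branch, treating the flat directions as inert and using the pseudo-inverse of K = 12β k̄ k̄ᵀ for the exchange of the coupling (π_1 + ... + π_N) T reproduces −N/4α, the same as N/2 healthy modes with kinetic coefficient −2α each (the overall 1/M²_Pl normalization is dropped):

```python
# Mode counting behind the exchange-amplitude degeneracy for the SO(N) example.
import numpy as np

N, alpha, beta = 6, -0.8, 0.5                  # alpha<0, beta>0: ghost-free branch
kbar = np.sqrt(-alpha / (3 * beta * N))        # kbar_i = kbar, N kbar^2 = -alpha/3beta
k = np.full(N, kbar)
K = 12 * beta * np.outer(k, k)                 # kinetic coefficient matrix at kbar
u = np.ones(N)                                 # coupling (pi_1+...+pi_N) T

amp_broken = u @ np.linalg.pinv(K) @ u         # single canonical mode exchanged
print(amp_broken, -N / (4 * alpha), (N / 2) * (1 / (-2 * alpha)))  # all equal
```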
When the vacuum is spontaneously broken and rests on the self-accelerating branch (10), from (11) we can see that the SO(N) multi-galileon gives rise to a leading correction, from the exchange of the single canonical mode,

    ΔA_SSB ∼ −(N/4α)(1/M²_Pl) T (1/□) T′.    (25)

This is to be compared with the case without spontaneous symmetry breaking, where the leading correction on the Minkowski branch (9), from the exchange of all N canonical modes, is given by

    ΔA_Mink ∼ (N/2α)(1/M²_Pl) T (1/□) T′.    (26)

On the other hand, when testing the multi-galileon modification of the gravitational force up to leading order, we have to deal with an observational degeneracy between multi-galileon theories with different internal symmetries and different choices of vacuum branches. Again taking the SO(N) example and assuming k̄_i ∼ k̄, the leading correction from the SO(N) multi-galileon on the self-accelerating branch (25) is the same as that from a multi-galileon theory without internal symmetries and with canonical kinetic terms −α(∂π_1 ∂π_1 + ... + ∂π_{N/2} ∂π_{N/2}), provided N is an even number.

IV. HAMILTONIAN FORMULATION OF MULTI-GALILEON THEORIES

In this section, we deviate from the main plot of the paper and briefly introduce a subplot: the Hamiltonian approach to multi-galileon theories. First, we derive the Hamiltonian for a general multi-galileon theory with or without internal symmetries. As the Lagrangian of a multi-galileon theory contains terms with more than two spacetime derivatives, one might expect the Hamiltonian formulation of a multi-galileon theory to involve Ostrogradski's prescription for higher-order derivative theories (see for example [24]). However, alarm bells should certainly ring at this naive thinking once we notice that the equations of motion of a multi-galileon theory contain only up to second-order derivatives. We will see that a general multi-galileon Lagrangian can be cast so as to have only up to first-order time derivatives. A general multi-galileon theory without a tadpole term can be written as [14]

    L_π = −Σ_n α_{i1...in} δ^{μ2...μn}_{[ν2...νn]} ∂_{μ2}π_{i1} ∂^{ν2}π_{i2} ∂_{μ3}∂^{ν3}π_{i3} ... ∂_{μn}∂^{νn}π_{in},    (27)

where δ^{μ2...μn}_{[ν2...νn]} ≡ (n−1)! δ^{μ2}_{[ν2} ... δ^{μn}_{νn]}, i_1, ..., i_n label different galileons (not necessarily internal group indices) and summation over repeated i_k is understood. The α_{i1...in} are free parameters of the theory and can be chosen symmetric under exchange of the indices, since δ^{μ2...μn}_{[ν2...νn]} ∂_{μ2}π_{i1} ∂^{ν2}π_{i2} ... ∂_{μn}∂^{νn}π_{in} can be made symmetric in the galileon indices by integration by parts. To see what the derivative structure is, we should unfold the antisymmetrization. Since the Hamiltonian formulation only requires knowledge of the time-derivative structure, we only need to separate the time derivatives from the spatial ones. A useful relation for the separation is

    δ^{μ2...μn}_{[ν2...νn]} T^{ν2...νn}_{μ2...μn} = δ^{a2...an}_{[b2...bn]} T^{b2...bn}_{a2...an} + ΣΣ δ^{a2...t1...an}_{[b2...t2...bn]} T^{b2...t2...bn}_{a2...t1...an},    (28)

where T^{ν2...νn}_{μ2...μn} is an arbitrary tensor, t_1 and t_2 are time indices, and the a_i and b_i are spatial indices. The double summation is over the replacement of one upper spatial index with t_1 and one lower spatial index with t_2, so there are (n−1)² terms with time derivatives. Applying this formula to (27) and repeatedly integrating by parts, we can see that, at n-th order, the terms carrying the time indices δ^{...t1...}_{[...t2...]} collect into terms of the form π̇_{i1} π̇_{i2} δ^{a3...an}_{[b3...bn]} ∂_{a3}∂^{b3}π_{i3} ... ∂_{an}∂^{bn}π_{in}, with all the other time-derivative terms cancelling each other. Therefore the Lagrangian (27) can be cast, schematically, as

    L̂_π = −Σ_n α_{i1...in} [ C²_n π̇_{i1} π̇_{i2} δ^{a3...an}_{[b3...bn]} ∂_{a3}∂^{b3}π_{i3} ... ∂_{an}∂^{bn}π_{in} + (terms with no time derivatives) ],    (29)

where C²_n ≡ n(n−1)/2. The appearance of the combinatorial number C²_n is what one might expect, since the indices i_1, ..., i_n are symmetric and so there are C²_n ways to pick out two π_i's carrying first-order time derivatives.
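The splitting (28) can be verified numerically for the lowest non-trivial case n = 3 (two index pairs) in 4D; the sketch below (ours) checks that the full contraction equals the purely spatial piece plus the (n−1)² = 4 mixed pieces carrying one upper and one lower time index:

```python
# Check of relation (28) for n = 3 in d = 4: the antisymmetry of the
# generalized delta kills all configurations with two time indices in the
# same (upper or lower) set, or with a time index in only one set.
import numpy as np
from itertools import product

d = 4                                    # index 0 = time, 1..3 = space
delta = np.zeros((d, d, d, d))           # delta[mu2, mu3, nu2, nu3]
for m2, m3, n2, n3 in product(range(d), repeat=4):
    delta[m2, m3, n2, n3] = (m2 == n2) * (m3 == n3) - (m2 == n3) * (m3 == n2)

rng = np.random.default_rng(1)
T = rng.normal(size=(d, d, d, d))        # arbitrary T[nu2, nu3, mu2, mu3]

full = np.einsum('abcd,cdab->', delta, T)

S = slice(1, d)                          # spatial range
spatial = np.einsum('abcd,cdab->', delta[S, S, S, S], T[S, S, S, S])

mixed = 0.0
for iu in range(2):                      # which upper slot carries t1 = 0
    for il in range(2):                  # which lower slot carries t2 = 0
        up = [S, S]; up[iu] = 0
        lo = [S, S]; lo[il] = 0
        dblk = delta[up[0], up[1], lo[0], lo[1]]   # remaining axes: (upper, lower)
        tblk = T[lo[0], lo[1], up[0], up[1]]       # remaining axes: (lower, upper)
        mixed += np.einsum('ab,ba->', dblk, tblk)

print(np.isclose(full, spatial + mixed))  # True: 4 = (n-1)^2 mixed terms
```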
Due to the first-order structure in time derivatives, we can simply take the π_i(x, t) as canonical coordinates and define the conjugate momenta as

    p^i ≡ ∂L̂_π/∂π̇_i.    (30)

Defining the matrix

    A_ij ≡ (coefficient of π̇_i π̇_j in L̂_π),    (31)

which depends on the spatial derivatives of the fields, we can invert (30) and get

    π̇_i = (1/2)(A^{−1})_ij p^j.    (32)

To get the Hamiltonian of the multi-galileon theory, we perform the Legendre transformation

    H = ∫ d³x ( p^i π̇_i − L̂_π ),    (33)

where the Hamiltonian density is given by

    Ĥ = (1/4) p^i (A^{−1})_ij p^j − L̂_π|_{π̇=0}.    (34)

Now we would like to know what the Hamiltonian looks like for the vacuum configuration π_i = −k_i x_μ x^μ/4. Schematically,

    Ĥ|_{π_i = −k_i x_μ x^μ/4} = Σ_i c_i(t, x) L^{(i)}(k),    (35)

where the L^{(i)}(k) are the i-th order terms of the action polynomial L(k), and the c_i(t, x) are order-dependent "volume factors". In a canonical field theory with a constant field background, since the Hamiltonian is an (infinitely) extensive quantity, we can divide the Hamiltonian by the volume of the spacetime to extract an energy function of the constant field, which can be minimized to find the vacua of the theory. Here we find that the same procedure is not applicable to a multi-galileon modified gravity theory, as we can see from (35) that the "volume factor" is different for different orders of k_i. This of course originates from the higher-derivative nature of multi-galileon theories and the non-trivial background configuration π_i = −k_i x_μ x^μ/4. Note that for a multi-galileon action with the configuration π_i = −k_i x_μ x^μ/4, a total derivative (say, ∂_t(π_1 ∂_t π_2 ∂_a∂^a π_3)) will actually give rise to a non-trivial contribution (−3k_1 k_2 k_3 (3t² − x²)/16). Indeed, from (27) to (29) we have performed a series of integrations by parts and neglected the resulting total derivatives, which is responsible for the different "volume factors" in (35).

We also note that the Hamiltonian density (34) (hence the Hamiltonian) is generally unbounded from below, i.e., the Hamiltonian density can be arbitrarily lowered by choosing suitable initial field configurations. This is due to the presence of higher-than-quadratic-order multi-galileon terms and the fact that there are terms where the first derivatives of the galileon fields do not appear in "squared" forms (e.g., π̇_1 π̇_1 (∂_a∂^a π_2)² is "squared", but ∂_a π_1 ∂^a∂^b π_1 ∂_b π_1 ∂_c∂^c π_1 and π̇_1 π̇_2 ∂_a∂^a π_3 are not). Since the galileon models define a conventional Cauchy problem, the galileon fields and their first derivatives can be chosen arbitrarily as initial data. By making the first derivatives of the galileons increasingly steep, we can lower the Hamiltonian density arbitrarily. Note that, even for the background configuration π_i = −k_i x_μ x^μ/4, the Hamiltonian density at a fixed spacetime point is not bounded below if the highest order of the galileon terms is odd. The perturbative Hamiltonian above some self-accelerating background (k_i = k̄_i) can also be cast in the form (33), with the parameters α_{i1...in} replaced by a new set of parameters β_{i1...in}(k̄) (polynomials in k̄_i; see e.g. [13]), so it is also unbounded below.

In a fundamental theory, this of course signals instabilities. However, the multi-galileon modified gravity is only supposed to be the decoupling limit of some underlying full theory, so one should really check whether the Hamiltonian of the underlying full theory is well behaved or not. The underlying theory presumably has 4D diffeomorphism invariance, so the corresponding naive 4D Hamiltonian (excluding the part from the extra dimensions) is tuned to zero by 4 constraint equations, similar to that in GR. A useful 4D Hamiltonian arises when the theory is "deparameterized" [25], but from the experience in GR, even checking the positivity of the background solution could be non-trivial³.
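A pointwise toy illustration (ours, not a computation from the paper) of the steep-gradient argument: a non-"squared" piece such as π̇_1 π̇_2 ∂_a∂^a π_3 is linear in the freely specifiable spatial profile of π_3, so it overwhelms the healthy quadratic part of the density once that profile is steep enough:

```python
# The quadratic part of the toy density grows like v^2, but the cubic,
# sign-indefinite galileon-like part is linear in lap(pi3), which can be
# chosen at will in the Cauchy data; making it steeper and more negative
# drives the density to -infinity.
import numpy as np

def toy_density(v, lap3, c=1.0):
    # pidot1 = pidot2 = v; healthy quadratic part + non-"squared" cubic part
    return 2 * v**2 + c * v * v * lap3

for steep in [1, 10, 100, 1000]:          # lap(pi3) = -steep
    print(steep, toy_density(v=1.0, lap3=-steep))   # decreases without bound
```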
On the other hand, due to the derivative structure of multi-galileon theories, the most negative Hamiltonian values are achieved by setting the gradients close to the cutoff of the theory, i.e., ∂ ∼ Λ_cutoff. This kind of unboundedness from below pushes the limits of a classical theory, as it relies on a small region of the canonical phase space, so one might also doubt whether quantum corrections can alter the picture. A famous example of this is the hydrogen atom: the classical Coulomb potential for this system (−e²/r) can be made arbitrarily negative by placing the electron close to the nucleus, but the hydrogen atom is stable upon quantisation of electrodynamics.

V. CONCLUSION

We have coupled the multi-galileon theory with internal symmetries studied in [14] to conventional General Relativity (GR) and proposed it as a modified gravity theory in the decoupling limit, where the multi-galileon modifies GR only by mixing with the transverse graviton. We have discussed the phenomenon of spontaneous symmetry breaking of these theories on (classical) self-accelerating vacua. We point out that, similar to the situation in canonical scalar field theories, the pattern of the symmetry breaking is governed by a new version of Goldstone's theorem: for every broken continuous symmetry the theory loses a canonical kinetic term. Note that, as the energy-momentum tensor T_μν by definition vanishes in the self-accelerating vacuum, this theorem is largely insensitive to the coupling to GR. But we do assume that the background configuration of the multi-galileon is given by π_i = −k_i x_μ x^μ/4.

We have also discussed the implications of this theorem. In particular, we suggest that a mode that loses its canonical kinetic term, although apparently non-trivial in the phase space, becomes inert due to the Vainshtein mechanism. This would lead to a different modification of the gravitational force, compared to what one would naively expect from the Lagrangian with the broken vacuum hidden. Also, there would be a degeneracy among multi-galileon theories with different internal symmetries and different choices of vacuum branches.

We have also derived the Hamiltonian of a general multi-galileon theory. We find that the Hamiltonian with the configuration π_i = −k_i x_μ x^μ/4 does not give rise to a good "effective potential" to minimize in order to find the background solution. Besides, we find that the Hamiltonian is not bounded below because of the higher-order multi-galileon terms. We speculate that this pathology might arise from the decoupling limit or the classical nature of multi-galileon theories, and argue that the underlying full theory for the multi-galileon, or even its quantum version, should be investigated to decide whether this is a real problem or not. There are a few attempts to put the galileon description in a more formal framework [5,7,9,15], and it is interesting to see whether the vacuum Hamiltonian in these models is bounded below, which we leave for future work.
Contemporary Critical Reflections on Ion Bria's Vision for Ecumenical Dialogue

In this study, I will present the perspective on ecumenical dialogue in the theology of Fr. Ion Bria, one of the well-known Romanians involved in the ecumenical movement. In the first part, after a short introduction, I will present the most important biographical milestones of the Romanian theologian, as well as some details about his activity in the World Council of Churches. Then, in the second part, I will critically present the most important aspects of Bria's ecumenical theology, as well as the reception of these ideas in contemporary Orthodox theology, in discussion with common witness and eucharistic communion within ecumenical dialogue. In the last part, I will present the critical remarks on ecumenism in Bria's theology. Through this analysis, I will emphasize important directions that the ecumenical dialogue can exploit today to overcome some historical, cultural or theological preconceptions and misunderstandings.

Introduction

Currently, the Romanian Orthodox Church (ROC) is actively involved in bilateral ecumenical dialogue and in various ecumenical forums, such as the World Council of Churches (WCC) or the Conference of European Churches. Moreover, some of the members of the higher clergy have had the opportunity to study abroad and better understand the phenomenon of ecumenism, both theologically and culturally-historically. An important moment for the affirmation of the Romanian Orthodox Church in the communion of autocephalous Orthodox churches remains, without a doubt, the participation in the Holy and Great Synod of Crete, held between 16 and 26 June 2016.

Among the prominent names of Romanian Orthodox theology involved in ecumenical dialogue, we can mention Fr. Dumitru Stăniloae, Fr. Ion Bria, Fr. Viorel Ioniță, Fr. Ioan Sauca, Fr. Daniel Buda, His Eminence Nifon of Târgoviște, His Beatitude Daniel, Patriarch of the Romanian Church, and many others. I can say that with the enthronement of His Beatitude Daniel, a new stage began in the relationship with the WCC, but also with prominent representatives of other churches and Christian denominations. I recall important moments for ecumenical dialogue, such as the visit to the Patriarchal Residence of His Excellency Rev. Dr. Olav Fykse Tveit, at that time Secretary General of the WCC, on 17 June 2015, or the visit of Pope Francis to the Palace of the Patriarchate with His Beatitude Father Patriarch Daniel and the members of the Permanent Synod of the ROC on 31 May 2019. The ROC accepted that Fr. Dr. Ioan Sauca, one of its representative theologians, would serve as Interim and Acting General Secretary of the WCC between April 2020 and December 2022. Also, His Eminence Archbishop and Metropolitan Dr. Nifon, from the Archdiocese of Târgoviște, was elected as a member of the Central Committee at the 11th Assembly of the WCC, held in Karlsruhe, Germany, from 31 August to 8 September 2022. Officially, at least, the ROC continues to be represented at the highest level in ecumenical forums. It remains to be seen to what extent these official positions will also have an effect at the local level. As a general impression, the attitude of the Romanian Patriarchate towards ecumenical dialogue is positive, with theologians dedicated to this commitment.
Ion Bria is one of the well-known Romanian Orthodox theologians involved in the ecumenical movement. Beyond his administrative involvement in the WCC, in recent years his theology has attracted the attention of several Romanian and foreign theologians. Certainly, the history of the ecumenical movement will hold a special place for him in terms of ecumenical dialogue in the Orthodox and Romanian spheres. I will present some biographical details, his activity in the WCC and also his theological ideas about ecumenism. The Romanian professor and theologian Ion Bria had the opportunity to pursue higher studies in the West. He was sent to Great Britain to the Anglican College "St. Augustin" in Canterbury between October 1962 and June 1963. There, he had the chance to meet Lesslie Newbigin, Nicolas Zernov and William Chadwick. Between March and June 1966, Deacon Ion Bria was sent for a new training period at the Faculty of Theology "St. Chad" in Durham, UK. That scholarship was the result of an intense dialogue between the Romanian Patriarchate and the Anglican Church. Visibly, those two scholarships had the role of opening the ecumenical vision of the theologian Ion Bria, who thus had the opportunity to attend the courses of renowned professors, to meet Orthodox theologians from abroad and to access the libraries of those theological institutes.

The Historical Personality of Ion Bria

After the experience in the West, Fr. Bria returned to Romania, where he continued his teaching activity. Prof. Ion Bria applied for PhD courses in November 1960, but the defense of his thesis, entitled "Aspecte dogmatice ale unirii Bisericilor creștine" (Dogmatic Aspects of the Union of Christian Churches), would only take place on 18 June 1968. The thesis coordinator was Prof. Nicolae Chițescu, and Fr. Prof. Dumitru Stăniloae was also part of the committee (see Bria 1968). Moreover, at the beginning of January 1965, he was appointed assistant at the Theological Institute in Bucharest. The theologian Ion Bria's activity in the WCC can only be summarized here. Officially, Fr. Bria worked on a contractual basis in the WCC for no less than 21 years and 3 months, between April 1973 and June 1994. In the Council, Fr. Bria held various positions. Firstly, he served as Executive Secretary at the Office for Orthodox Studies and Relations between April 1973 and December 1986. Another important position was that of Deputy Director of the Commission on World Mission and Evangelism. The position was officially advertised as vacant starting on 31 August 1980. Unfortunately, I have not officially identified the date of the appointment, but in 1982, when Fr. Bria edited the volume "Jesus Christ-the Life of the World", he appeared with the full title of Deputy Director of the Commission and Executive Secretary at the Office of Orthodox Studies and Relations (Bria 1982b). According to my research, he held this representative position until December 1986. In January 1987, Bria became the Director of the Renewal and Congregational Life sub-unit. After 1991, Bria was appointed Interim Convener of Unit I: Unity and Renewal, and Executive Director from April 1993 to June 1994, when he retired.
In January 1994, in Johannesburg, South Africa, the retirement of Fr. Ion Bria was announced during the meetings of the Executive Committee and the Central Committee. At the commencement of the Executive Committee meeting on 18 January 1994, Fr. Ion Bria led the opening service as a sign of appreciation for the activity he had carried out. Between 20 and 28 January 1994, at the meeting of the Central Committee, also in Johannesburg, South Africa, Fr. Bria's retirement was noted, and official thanks were given to him. In Geneva, on 30 June 1994, the farewell celebration took place in the presence of Mr. Konrad Raiser, General Secretary of the WCC since 1992.

After his official retirement from WCC activity, Fr. Bria continued to participate in various consultations and conferences of the WCC and even of the Romanian Patriarchate. In October 1995, at the invitation of Fr. Prof. Mircea Păcurariu, Dean of the Faculty of Theology "Andrei Șaguna" in Romania, Fr. Prof. Ion Bria accepted the position of Associate Professor within the Department of Dogmatic and Ecumenical Theology, where he remained until the end of the academic year 1998-1999. The Sibiu project would result in the publication of several works and also in the training of theologians who are still active in the Romanian theological school today (see Marcu 2022b).

On 2 July 2002, at the age of 73, Fr. Ion Bria passed to eternity after a heart attack. His body was brought from Geneva and buried on 8 July 2002 in the cemetery of Cernica Monastery, near Bucharest, Romania. The funeral service was attended by a group of bishops and priests, relatives and close friends (Tia 2002; Moșoiu 2002; Necula 2002).

Ecumenical Theology and Dialogue Promoted by Ion Bria

Bria, the pioneer of ecumenism in the Romanian Orthodox Church. Regarding the theological reception of Fr. Bria, different theologians actively involved in the ecumenical movement consider him a pioneer. In recent years, his ecumenical theology has become the subject of research at the level of master's and doctoral studies or of articles. An interest in his theology can be seen in the circles of evangelical theologians in Romania, but also abroad (see Oborji 2006). Fr. Bria was one of the theologians passionate about ecumenism and the effort of mutual recognition among Christians who belong to different churches or Christian denominations/groups. In the course of five decades of theology at the highest level, he earned a reputation for speaking on these sensitive subjects, sometimes even contradicting his initial personal premises. What are the most important elements of this vision about church boundaries, ecumenism and eucharistic communion in the thinking of Fr. Ion Bria? In the following lines, I will critically present these topics and Bria's vision regarding ecumenical dialogue.

First of all, Fr. Bria identifies the Orthodox Church, which manifests itself as local Orthodox churches, with the historical Church, Una Sancta, which was formed at the time of Pentecost as the mystical Body of Christ, the Head of the Church. This church is confessed by the Nicene-Constantinopolitan Creed. In other words, "the universal Church is the Orthodox Church; the universal Church is one, but it is embodied in local Churches" (Bria 1989, p.
181). Consequently, the identity of the Orthodox Church is unique. Therefore, Fr. Bria believes that placing the Orthodox Church in a confessional triangle limits its identity as an ecumenical church. At the same time, in a paradoxical way, Fr. Bria claims that "the Church-Una Sancta does not exist without the Orthodox, but it is not the property of the Orthodox" (Bria 1997, p. 3). Hence Bria's remark about "our confessional pride", which has, as a consequence, insensitivity towards the status of other Christians.

What, then, is the actual intention of ecumenism? Fr. Bria talks about several types of ecumenism (integral, local, spiritual), but all of them refer to the attempts of Christians who belong to different churches/denominations/Christian groups to come closer together. Ecclesial unity must be the most important concern of ecumenism. Any deviation from this goal entails a disregard for ecumenism itself. Obviously, this unity involves "a full consensus in the fundamental truths of faith". Today, Christians find themselves in a state of separation, for theological and non-theological reasons. To resolve these misunderstandings, dialogue is the only working method. Fr. Bria does not believe that "the dividing walls between religious beliefs are raised to the sky", but he believes that an active involvement of the entire Orthodoxy, clergy and laity, is not optional but vocational. Fr. Bria is also aware of the voices in the Orthodox churches that do not agree with the contemporary ecumenical movement, but he believes that they "like to live in the comfort and isolation of the past". In other words, for Bria, "ecumenism does not mean erasing the doctrinal divergences and cultural tensions created by 'non-theological factors', but to reset confessional and cultural particularities in their historical, local and universal context, to find a 'catholic' space of communion and solidarity, to inspire an evolution towards a synthesis in the form of a consensus" (Bria 2003, p. 88).

Bria's perspectives on common witness and Eucharistic communion. Fr. Bria talks about the importance of a common witness for credibility before the world. We must acknowledge that the current separated life of Christian communities constitutes the most massive obstacle to the credibility of the Gospel for our contemporaries. The lack of unity among Christians acts like a screen, preventing the manifestation of Christ Himself. Unfortunately, the reception of ecumenical convergences raises great problems of communication and accountability. Fr. Bria says explicitly that, most of the time, these theological results are not taken into account by the leading clergy and are not brought to the attention of the members of the Church. He believes that, "nevertheless, common witness is a unique ecumenical chance, especially for small communities, with important value for people struggling not only with the old and new confessional isolations, but also with the new political alienation and ideological restrictions. There are situations where common witness is an urgent need for individuals and established communities. The task of the churches is therefore to encourage the common witness experience as an immediate living form of our historical, possible, already-given conciliarity. The large Christian fellowship desires to live today, now, as one people of God" (Bria 1982a, p. 396).
The greatest impasse caused by the lack of unity among Christians is seen in the Holy Eucharist. The Orthodox Church does not accept communion with anyone who is not an official member of the church. Beyond the various names (eucharistic hospitality, intercommunion, eucharistic concelebration), I believe that the expression eucharistic communion is the most comprehensive. Fr. Bria states repeatedly that it is not possible to have eucharistic communion with other Christians, even Catholics, as long as we do not share the same faith, the same creed. However, there is a unique statement by Fr. Bria where he claims that Orthodox priests can offer Holy Communion to believers from traditional churches: the Roman Catholic Church, the Old Catholic Church and the Oriental churches. Interestingly, this statement is found only in English. Moreover, it does not appear at all in the Romanian version of that work (Bria 1996a): "It is the priest's responsibility to encourage all people who take part in the offertory and the anaphora to come for holy communion. At his discretion he may give communion to members of Oriental Orthodox, Roman Catholic and Old Catholic churches without formal conversion to the Orthodox church. Of course, the way for full eucharistic communion needs solid preparation" (Bria 1996b, p. 29).

Fr. Bria's statement was commented on by two Romanian Orthodox theologians from abroad: Viorel Coman and Fr. Radu Bordeianu. Regarding Coman's criticism, we should note the pertinent observation that Fr. Bria did not say whether it was possible for an Orthodox believer to receive communion in other churches, such as those already specified (see Coman 2019, p. 236). Fr. Bordeianu did not agree with the lack of approval from the episcopal authority, in the absence of which the gesture of an Orthodox priest offering Communion to a Catholic could entail his defrocking (see Bordeianu 2019, pp. 15-16). I believe that a possible answer to this attitude must be correlated with a question that Fr. Bria had formulated a few years before. In his words, "could the eucharist be shared not only to consolidate a proper ecclesial life and celebrate the reunion of divided Christians, but also to challenge exclusive, historically organized communities to transcend their visible institutional limits in order to share the "common bread and cup" in a more catholic way, with others and for others who need the bread of life?" (Bria 1991, p. 79).

I could say that the desire to see a real rapprochement between churches is characteristic of those who participated in so many ecumenical meetings and conferences: "We waited with hope not only for unconditional forgiveness between Churches, but also for the opening of the Altar doors for those 'outside', who have clothes of a different color; we mean we waited for mutual Eucharistic communion" (Bria 2005, pp. 286-87).

In a personal testimony, the Greek theologian Petros Vassiliadis points out Fr. Bria's enthusiasm but also his disappointment that eucharistic communion between the Orthodox and the Orientals did not take place: "In a private conversation we had during our last meeting in Geneva, a few months before his death, he openly confessed to me his disappointment that at least some sort of intercommunion had not taken place between the Eastern and the Oriental Orthodox churches; and with all humility, he put the blame on us theologians!" (Vassiliadis 2013a, p. 67).
In agreement with these positions, we must be aware that an important point is the differentiation of dialogue partners. We need a more accurate classification because, at the popular level, most of the time, the differences or qualities are standardized. It is not possible, theologically and historically, to accept the mixing of differences and similarities. We should highlight that there is a difference between traditional churches, such as the Roman Catholic or Oriental Churches, and those that were formed much more recently. This classification would help us, at a theological and pastoral level, to have a much more achievable dialogue.

Critical remarks in Bria's theology about ecumenism. Fr. Bria considers that the issue of accepting or rejecting ecumenical dialogue in Orthodox communities, in Romania especially, should be related to the position of the hierarchy. At the institutional level, the attitude towards ecumenical dialogue has a double standard. More precisely, in some situations, theologians or hierarchs who represent the voice of the Orthodox Church are reserved in transmitting, promoting or applying the decisions approved in ecumenical forums. What is more, there have been situations when, in their position as lay theologians, they were open to ecumenical dialogue, but when they became members of the higher clergy, they changed their attitude. In this sense, the impression conveyed by some hierarchs is interpreted as anti-ecumenism and opposition to dialogue. In Bria's words, "unfortunately, the results of consultations such as these seem often to disappear en route to Orthodox theological schools, parishes and other centres. Or they go quickly into the filing cabinets of ecclesiastical offices, never to be taken out again" (Bria 2000, p. 255).

Another critical point, related to the former, is the problem of translating and presenting the results of the ecumenical dialogue. Here, an important role belongs to the institutional church, which should transpose the common agreements to the local level. In practice, many reports or consultations are translated late and without a genuine intention to implement them. In some cases reception does not exist, hence the hostile attitude towards what was not presented at the right time and in the right context. In this regard, Bria is among those who noticed this situation and pointed out the imposture in which the Orthodox can find themselves in relation to their ecumenical partners: "For Churches that do not have a proper ecumenical experience, ecumenical convergences can occur as something imposed from outside. There are situations in which the church authority does not allow ecumenical perceptions and experiences to reach the level of believers and parish communities. The question is therefore whether, currently, the laity is trained, excluded or denied in this process of reception" (Bria 1985, p. 133).

In Orthodox communities, the word ecumenist or ecumenism has become a pejorative one. When you categorize someone as an ecumenist, it can mean that they have lost their faith in the Church. In the perspective of some, the one who is an ecumenist must be considered a traitor to Orthodox teachings and needs to repent and be re-accepted into the Church (see Kalaitzidis 2014, pp.
134-52). For example, in the documents of the Holy and Great Synod, although they speak of ecumenical dialogue, the word ecumenism is not used even once. If Romanian theology had been consistent with ecumenical language, the current reaction of those who do not accept ecumenical dialogue would have been much more moderate. Before 1989, but also after the fall of communism in Romania, Bria published many articles, both in Romanian and in foreign languages, in which he encouraged the involvement of Orthodoxy and of Orthodox people in ecumenical dialogue. At least for the Romanian Orthodox space, he is the theologian who wrote the most about ecumenism and its implications.

The anti-ecumenical position in the Orthodox Church in general, and in the Romanian Orthodox Church in particular, must be viewed in light of the lack of a common vision towards ecumenical dialogue among all the autocephalous Orthodox churches. Moreover, the anti-ecumenical attitude of the monks of Holy Mount Athos is increasingly accepted as the norm and considered an indisputable spiritual position. Although the Holy and Great Synod of Crete presented a balanced stance towards ecumenical dialogue, the reception of these positions suffers mainly from the lack of unity of all the autocephalous churches. In Bria's words, "of course, there are pious groups and theologians who like living in the comfort and isolation of the past, and who try to avoid the controversial issues by withdrawing and by-passing the present ecumenical structure. This attitude is understandable since all of us have had both positive and negative experiences in the ecumenical movement. But it is not a sound enough reason for weakening Orthodox participation in the present struggle for ecumenism" (Bria 1981, p. 322).

The main argument of those who do not accept ecumenical dialogue is related to the truth of faith. They say that through dialogue with others, there is a possibility that the teachings of the Church can be changed or altered. As the only true Church of Christ, the Orthodox Church has the sacred duty not to change anything in the teaching received from Christ, the Holy Apostles and the Holy Fathers. Orthodox theologians who accept ecumenical dialogue, like Ion Bria, understand the Church in the same sense, but they want to offer others the opportunity to appreciate the testimony of the Orthodox Church.
The ecumenical movement is not sufficiently studied in the faculties of Orthodox Theology in Romania. Moreover, other Christians are presented only from a negative, schismatic or sectarian perspective. Their theology is presented only through the lenses of a triumphalist and theoretical Orthodoxy. In the past, the discipline was called Missiology and Ecumenism. Today, it is called Orthodox Missiology. I personally believe that a Catholic, a Protestant or any other theologian from another Christian group or denomination would present the theology of his church or his community much better than any Orthodox professor. This is where the unresolved issue of proselytizing comes into play. Many Orthodox suspect that others have only the intention of converting members of our church. Bria insisted on ecumenism being taught in Orthodox Theology faculties: "Ecumenism has to become a theological discipline in its proper sense. While many faculties of theology have accepted ecumenism in the academic curriculum, it is practically limited to the history and development of the ecumenical movement. The doctrinal profiles of other churches are still described according to the old apologetic model of confessionalistic comparison" (Bria 1996c, p. 209).

In the Romanian Orthodox Church, there is a need to give voice to the new generation of theologians. Moreover, as Fr. Bria said, we need to train theologians who are able to dialogue with others. My impression and experience are that there is no careful concern for those who may represent the Orthodox position in these dialogues in the future. Also, I think it is time to recover the Romanian Orthodox theologians of the diaspora, who have come to understand dialogue with others much more clearly and honestly than we have. Bria always claimed that the mission of the Church is fulfilled through various factors, but an important place is occupied by the work of theologians and theology. Without theology and theologians, the Church lacks a vital dimension of its work.

Conclusions

An important conclusion of this study concerns the outstanding personality of Ion Bria, a pioneer of ecumenical dialogue in contemporary Orthodoxy. Today, his studies on ecumenism are being re-evaluated and proposed for study to the new generation of Orthodox theologians. In particular, in the Orthodox faculties in Romania, Bria's theology is intensively studied. The discipline of missiology is marked by Bria's vision, and most of the Romanian professors of missiology continue Bria's legacy. Certainly, much more needs to be done in the continuation of this work.

Bria's pro-ecumenism positions and the solutions he offered are in the process of implementation and re-evaluation. There are several solutions for rebooting ecumenism, among which we mentioned its introduction as a theological discipline in theology faculties, the training of theologians and bishops who know the rigors of ecumenism, informing Christians in parishes and local communities, etc. Most of the time, at the non-academic level, the intentions and objectives of dialogue are not translated or presented clearly enough. But theologians and hierarchs must take on the task of learning and explaining the role of ecumenical dialogue correctly. In this sense, the legacy of Ion Bria's theology remains relevant and offers opportunities to the new generation of theologians.

Funding: This research received no external funding.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Ion Bria was born on 19 June 1929, in the locality of Telega, Prahova County, Romania. His parents were Ion Gheorghe and Maria. Between 1936 and 1940, Fr. Bria attended the primary school in his village. He attended middle school between 1940 and 1944 and high school between 1944 and 1948 in Ploiești. After graduating from high school, Fr. Bria entered, in turn, the Faculty of Petroleum and Gas and the Faculty of Agronomic Sciences in Bucharest. For political reasons, he had to give up both. But in December 1950, he enrolled at the Theological Institute in Bucharest, where he graduated with excellent results in 1954. Accepting the advice of the rector, Prof. Ioan Coman, he entered the MA or magisterium courses at the Theological Institute in Bucharest, completing them in the period 1955-1957. Upon the recommendation of Fr. Prof. Dumitru Stăniloae, the graduate Ion Bria was appointed professor at the Theological Seminary "Bishop Chesarie" in Buzău. In November 1962, he was transferred to the Theological Seminary in Bucharest, where he would stay, with interruptions, until December 1964. On 30 January 1965, he was ordained deacon and, later, in April 1972, priest by His Beatitude Justinian Marina, the Patriarch of the Romanian Orthodox Church.

Another stage in Fr. Ion Bria's missionary journey, one which would mark his career, was his appointment to the staff of the World Council of Churches. That unique opportunity arose in the context of Fr. Bria's participation in the World Missionary Conference in Bangkok, Thailand, which took place between 31 December 1972 and 7 January 1973. There, Bria, as a delegate of the Romanian Patriarchate, met with important theologians of the Commission, among whom we can mention Anastasios Yannoulatos, Jürgen Moltmann, Philip Potter, Jacques Rossel and Emilio Castro. A short time before, the Commission on World Mission and Evangelism had established a new office called Orthodox Studies and Relations, coordinated for a short time by Archimandrite Yannoulatos, who was meanwhile appointed to head a missionary studies center at the University of Athens. In that context, Fr. Bria was asked to take over the WCC Office for Orthodox Studies and Relations. Fr. Bria's final answer was given after returning to Romania, where he had a meeting with His Beatitude Patriarch Justinian. Fr. Bria was officially employed by the WCC in Geneva from April 1973, when he moved there with his wife Ecaterina and son Alexandru.
TWEAK Signaling-Induced ID1 Expression Drives Malignant Transformation of Hepatic Progenitor Cells During Hepatocarcinogenesis

DOI: 10.1002/advs.202300350

Abstract

The malignant transformation of hepatic progenitor cells (HPCs) in the inflammatory microenvironment is the root cause of hepatocarcinogenesis. However, the potential molecular mechanisms are still elusive. The HPCs subgroup is identified by single-cell RNA (scRNA) sequencing and the phenotype of HPCs is investigated in the primary HCC model. Bulk RNA sequencing (RNA-seq) and proteomic analyses are also performed on HPC-derived organoids. It is found that tumors are formed from HPCs in peritumor tissue at the 16th week in a HCC model. Furthermore, it is confirmed that macrophage-derived TWEAK/Fn14 promoted the expression of inhibitor of differentiation-1 (ID1) in HPCs via NF-κB signaling, and that a high level of ID1 induced aberrant differentiation of HPCs. Mechanistically, ID1 suppressed differentiation and promoted proliferation in HPCs through the inhibition of HNF4α and Rap1GAP transcription. Finally, scRNA sequencing of HCC patients and investigation of clinical specimens also verified that the expression of ID1 is correlated with aberrant differentiation of HPCs into cancer stem cells; patients with high levels of ID1 in HPCs showed a poorer prognosis. This study provides important intervention targets and a theoretical basis for the clinical diagnosis and treatment of HCC.

Introduction

Hepatocellular carcinoma (HCC) is one of the most common malignant tumors worldwide. [1] It is estimated that 80% of all HCCs have a history of HBV and HCV infection. [2] In addition, chronic inflammatory damage caused by alcoholic liver disease and non-alcoholic fatty liver is also an important cause of liver cancer. [3] Abundant evidence has confirmed that aberrant differentiation of hepatic progenitor cells (HPCs) in the inflammatory microenvironment is the origin of HCC. [4] However, the dynamic changes and the potential molecular mechanisms underlying the malignant transformation of HPCs during the development of liver cancer are still elusive.

HPCs, referred to as oval cells in rodents, have bidirectional differentiation potential toward either hepatocytes or a biliary phenotype. HPCs normally reside in biliary ducts and can be activated by impairment of hepatocyte replicative potential during chronic liver damage. [5] Thorgeirsson et al. found that a distinct subtype of aggressive HCC expresses HPC markers, suggesting that this subtype of HCC might arise from HPCs. [6] Markers of cancer stem cells (including EpCAM, CD133, CD24, and CD44), specific cytokeratins (including CK7 and CK19), CLDN4, and the transcription factor Sox9 [7] can be used to identify HPCs. It has been reported that HPCs are commonly accompanied by immune cells and cytokines in rodents. [8] Our previous study found that the activation and aberrant differentiation of HPCs in the inflammatory microenvironment are the root causes of the occurrence and recurrence of liver cancer. [4b] Here, we identified the HPCs subgroup by single-cell RNA (scRNA) sequencing and investigated the phenotype of rat HPCs at different time points in the DEN-induced primary HCC model. For further verification, bulk RNA sequencing (RNA-seq) and proteomic analyses were also performed on organoids derived from primary HPCs. We first characterized the transcriptomic and proteomic profiles of HPCs at different time points in the primary HCC model.
Further, we employed cell interaction analysis between non-parenchymal cells and HPCs in the inflammatory microenvironment and identified the molecular mechanism that drives the malignant transformation of HPCs. Taken together, our data promote an understanding of how the liver inflammatory microenvironment affects the function of HPCs and of the malignant transformation of HPCs in hepatocarcinogenesis.

Identification of HPCs by Single-Cell RNA Sequencing during Hepatocarcinogenesis

In order to analyze the potential mechanism underlying the malignant transformation of HPCs during hepatocarcinogenesis, we collected rat liver tissues at different time points (0, 4, 8, 12, and 16 weeks) of the primary HCC model for scRNA sequencing (one rat for each time point). Transcriptomic (two rats for each time point) and proteomic (one rat for each time point) analyses were also performed on primary HPC-derived organoids for further verification (Figure 1A). After quality control and removal of the batch effect (Figure S1A,B, Supporting Information), 37,930 single cells were clustered into 30 clusters. Clustering analysis revealed several major cell types in the liver (Figure 1B,C). All these cell subtypes were found in all the samples, albeit in different proportions (Figure 1D,E and Figure S1C, Supporting Information). Sox9+Epcam+Cd24+Cldn4+ HPCs were chosen for further analysis (Figure 1F and Figure S1D, Supporting Information). Compared to 0, 4, and 8 weeks, the infiltration levels of HPCs were greatly increased at 12 and 16 weeks of DEN treatment (Figure S1E, Supporting Information). Sox9+Epcam+Cd24+Cldn4+ HPCs were also observed in rat liver tissues during hepatocarcinogenesis, and the number of HPCs was likewise increased at the 12th and 16th weeks of DEN treatment (Figure 1G).

Tumorigenicity of HPCs at Different Time Points during Hepatocarcinogenesis

After we successfully used single-cell sequencing technology to capture HPCs in the rat HCC model, we observed the gene expression changes in HPCs at different time points during hepatocarcinogenesis. We applied the fuzzy C-means algorithm to cluster the transcript expression profiles across all the developmental stages. [9] In total, we observed 15 distinct clusters of temporal expression patterns (Figure 2A and Table S1, Supporting Information). Among them, clusters 5 and 12 contain genes that are upregulated, and cluster 11 contains downregulated genes; KEGG enrichment analysis was then performed (Figure 2A). The genes in clusters 5 and 12 were enriched in functions related to the following KEGG pathways: cell cycle, NF-κB, hepatocellular carcinoma, MAPK, pathways in cancer, VEGF, Rap1, TNF, TGF-β, Wnt, AMPK, mTOR, PI3K-Akt, HIF, and Hippo signaling (Figure 2B and Figure S2A, Supporting Information). Genes of cluster 11 were enriched in the Gene Ontology (GO) functions liver regeneration, negative regulation of apoptosis, cell division, and Notch signaling (Figure 2C). To further verify the expression changes of the above-mentioned signaling pathways at the mRNA and protein level, we established an organoid culture system for HPCs from rat liver (Figure 2D). As shown in Figure 2E, the HPC-derived organoids stained positively for HPC markers (Sox9, EpCAM, CD24, and CLDN4). We also examined the differentiation potential of HPCs toward hepatocytes.
The results demonstrated that HPC-derived organoids were able to differentiate into hepatocytes, with expression of hepatocyte markers (Alb, Cyp2b1, Hnf3β, Hnf6, and Hnf4α) (Figure S2B,C, Supporting Information). Then we isolated primary HPCs at different time points of DEN treatment, and total mRNA and protein were extracted for transcriptomic and quantitative proteomic analyses. At the mRNA level, fuzzy C-means clustering identified eight distinct temporal patterns of gene expression. Cluster 7 contains genes showing upregulation and cluster 6 contains downregulated genes (Figure S2D and Table S2, Supporting Information). KEGG enrichment analysis indicated that the activated pathways included HIF, pathways in cancer, TNF, PI3K-AKT, mTOR, TGF-β, cell cycle, and MAPK signaling (Figure S2E, Supporting Information). The downregulated functions were cell differentiation, liver regeneration, and Notch signaling (Figure S2F, Supporting Information). These are mostly consistent with the changes observed in the scRNA sequencing data.

For the proteomic analysis, there were also 15 distinct clusters of temporal protein expression patterns (Figure 2F and Table S3, Supporting Information). Upregulated proteins in clusters 12 and 15 and downregulated proteins in cluster 11 were selected for further KEGG and GO enrichment analysis. The activated signaling pathways were PI3K-Akt, Hippo, pathways in cancer, TGF-β, VEGF, Rap1, HIF, MAPK, Ras, and TNF signaling (Figure 2G and Figure S2G, Supporting Information). The inhibited function was liver development and regeneration (Figure 2H). Together, the transcriptome and proteome results suggest that the MAPK, Hippo, VEGF, PI3K-Akt, HIF, TGF-β, TNF, Rap1, and Ras signaling pathways may contribute to the regulation of HPC function and fate; these pathways are correlated with cell differentiation, proliferation, and the response to the harsh hypoxic microenvironment. Additionally, normal cell differentiation of HPCs was suppressed during HCC occurrence.

Then, the phenotype of HPCs at different time points was investigated. The formation of HPC-derived organoids was observed by microscopy (Figure 2I). The tumorigenicity of HPCs was examined by assessing subcutaneous tumor formation in nude mice. We found that HPCs obtained from rat liver at 0, 4, 8, and 12 weeks did not form tumors. Tumors were formed from HPCs collected from the peritumor tissue of rats at the 16th week (16 wpt) of DEN treatment (Figure 2J). Then, we employed H&E and immunohistochemical (IHC) staining to identify the histological type of the HPC-derived tumors. The results were consistent with the HCC phenotype (Figure S2H, Supporting Information). These results indicate that hepatocarcinogenesis may originate from HPCs and that the recurrence of HCC may also originate from HPCs in peritumor tissues.

[Figure 2 caption: A) Fuzzy C-means clustering identifies 15 distinct temporal patterns of gene expression; the x-axis shows four developmental stages (4, 8, and 12 weeks, and 16 wpt), the y-axis the log2-transformed, normalized intensity ratios at each stage. B,C) KEGG and GO enrichment of the genes in clusters 12 and 11, respectively. D) Primary HPC-derived organoids pictured at days 1, 4, and 13. E) Immunofluorescence of HPC markers (Sox9, EpCAM, CD24, and CLDN4); nuclei stained with DAPI (blue). F) Fuzzy C-means clustering of 15 distinct temporal patterns of protein expression, axes as in A. G,H) KEGG and GO enrichment of the proteins in clusters 15 and 11, respectively. I) Organoid formation at different time points of DEN treatment; mean ± SD, **p < 0.01, ***p < 0.001. J) Tumorigenic potential: 200 HPC-derived organoids injected subcutaneously into the right axilla of nude mice; tumor volume and weight presented as mean ± SD.]
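For concreteness, here is a minimal Python sketch of the fuzzy C-means step used for the temporal clustering above (Figures 2A,F and S2D). It is not the authors' code: the expression matrix is a random placeholder, per-gene standardization is assumed, and scikit-fuzzy stands in for whatever implementation was actually used:

```python
# Soft clustering of per-gene temporal expression profiles across the
# sampled stages, in the spirit of Mfuzz-style fuzzy C-means analysis.
import numpy as np
import skfuzzy as fuzz   # scikit-fuzzy

rng = np.random.default_rng(0)
expr = rng.normal(size=(2000, 5))            # hypothetical genes x stages matrix

# standardize each gene's profile (mean 0, sd 1), as is typical before FCM
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# skfuzzy expects features x samples, i.e., shape (stages, n_genes)
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    z.T, c=15, m=2.0, error=1e-5, maxiter=1000, seed=0)

labels = u.argmax(axis=0)                    # hard cluster assignment per gene
print(cntr.shape, fpc)                       # (15, 5) centroid profiles
```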
Changes in the Transcriptomic and Proteomic Profile of Malignant HPCs

As shown in Figure 2J, HPCs collected from 16 wpt demonstrated tumor formation potential. Therefore, we further explored the potential signaling pathways that contribute to the malignant transformation of HPCs. GSVA analysis of the scRNA sequencing data showed that the oncogenic signaling pathways Kras, Myc, and TGF-β were upregulated in HPCs from 16 wpt and in tumor tissue of rats treated with DEN for 16 weeks (16 wt) (Figure 3A). Moreover, the expression profile of HPCs from 16 wpt was most correlated with that from 16 wt, suggesting that HPCs from 16 wpt presented the expression features of cancer stem cells (CSCs) (Figure 3B). In order to investigate the change in expression profile of HPCs during hepatocarcinogenesis, GO and KEGG enrichment analyses were performed on the differentially expressed genes (DEGs) and proteins between different time points. Compared to HPCs collected at the 4th week, GO analysis revealed that the upregulated genes at the 8th week were mainly enriched in functions related to positive regulation of cell adhesion, positive regulation of cell migration, and cell division (Figure S3A, Supporting Information, left). KEGG enrichment analysis showed that the genes were enriched in pathways related to focal adhesion, collagen formation, and G2/M checkpoints (Figure S3A, Supporting Information, right). At the protein level, focal adhesion, liver regeneration, cell differentiation, and cell migration were also enriched (Figure S3B,C, Supporting Information). HPCs are mainly responsible for repairing liver injury, which indicates that they respond to DEN-induced liver damage by increasing their proliferation and differentiation potential. At 12 weeks, DEG enrichment analysis found that many pathways were activated, including Hippo, PI3K-AKT, TGF-β, HIF-1, PPAR, VEGF, PDGF, FGFR2, and Hedgehog signaling, and the cell cycle (Figure S3D, Supporting Information). At the protein level, enrichment analysis identified that the HIF, PI3K-AKT, mTOR, Rap1, VEGF, MAPK, Hippo, and Hedgehog signaling pathways were activated (Figure S3E, Supporting Information). The results suggested that the proliferation and differentiation of HPCs were further activated, and that the HPCs showed a phenotype consistent with a response to the harsh hypoxic microenvironment. However, tumors were not visible in rat livers after 12 weeks. Therefore, these HPCs do not yet possess a tumorigenic phenotype in vivo, while their genotype is already in a state of instability.
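As an illustration of the over-representation step applied to each DEG list above, the following sketch (ours, not the authors' pipeline) queries the Enrichr web service through gseapy; the gene list is hypothetical, and a human library is used purely for illustration even though the study analyzes rat:

```python
# KEGG/GO over-representation analysis of a DEG list via Enrichr (gseapy).
import gseapy as gp

deg_up_8w = ["CDK1", "CCNB1", "COL1A1", "ITGB1", "FN1"]  # placeholder genes

res = gp.enrichr(
    gene_list=deg_up_8w,
    gene_sets=["KEGG_2021_Human", "GO_Biological_Process_2021"],
    outdir=None,          # keep results in memory instead of writing files
)
print(res.results[["Gene_set", "Term", "Adjusted P-value"]].head())
```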
When the DEN treatment time was extended to 16 weeks, visible tumors formed in the rat livers. Gene expression profiling was performed on HPCs from 16 wpt. DEG enrichment analysis revealed that pathways in cancer and PI3K-AKT, Rap1, Hippo, TGF-β, MAPK, and cAMP signaling were activated, consistent with the HPCs from 16 wt (Figure 3C). At the protein level, GSEA analysis of the oncogenic signatures from the Molecular Signatures Database (MSigDB) further validated our gene expression-based index and confirmed that HPCs from 16 wpt presented oncogenic differentiation (Figure 3D). Expression of CSC markers was also upregulated in HPCs from 16 wpt (Figure S3F, Supporting Information). These results indicate a tumor-initiating phenotype in HPCs from 16 wpt. GSEA analysis of bulk RNA sequencing data likewise demonstrated the activation of Myc and TGF-β in HPCs from 16 wpt and verified the activation of the cell cycle, MAPK, mTOR, and pathways in cancer (Figure 3E). Further proteomic analysis indicated that the HIF-1, PI3K-AKT, Rap1, MAPK, Ras, VEGF, cAMP, and TGF-β signaling pathways were significantly upregulated in HPCs from 16 wpt (Figure 3F). TGF-β, Rap1, PI3K-AKT, and MAPK signaling were activated at both the mRNA and protein levels (Figure 3G,H). These data suggested that TGF-β, Rap1, PI3K-AKT, and MAPK signaling may contribute significantly to the malignant transformation of HPCs. To explore the relationship among HPCs from week 4, 16 wpt, and 16 wt, we constructed a transcriptional trajectory of these cells on a pseudotime scale using Monocle. [10] HPCs from week 4 (normal) were distributed at one end of the pseudo-temporal trajectory, whereas HPCs from 16 wpt (malignant state) and 16 wt (CSCs) resided at the other end, suggesting that CSCs might derive from normal HPCs (Figure 3I).
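The authors built this trajectory with Monocle in R; an illustrative Python equivalent uses scanpy's diffusion pseudotime rooted in a normal week-4 cell. The toy AnnData below stands in for the real HPC data, so every label and count here is hypothetical.

```python
import numpy as np
import scanpy as sc
import anndata as ad

# toy stand-in for the HPC data: 300 cells x 2000 genes of Poisson counts
rng = np.random.default_rng(0)
adata = ad.AnnData(rng.poisson(1.0, size=(300, 2000)).astype(float))
adata.obs["stage"] = np.repeat(["week4", "16wpt", "16wt"], 100)

sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.diffmap(adata)

# root the trajectory in a normal week-4 HPC, mirroring the paper's ordering
adata.uns["iroot"] = int(np.flatnonzero(adata.obs["stage"] == "week4")[0])
sc.tl.dpt(adata)  # writes adata.obs["dpt_pseudotime"]
print(adata.obs.groupby("stage")["dpt_pseudotime"].mean())
```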
ID1 Is Predicted to Drive the Malignant Transformation of HPCs
Several studies have confirmed that the differentiation and function of HPCs are regulated by the liver microenvironment. [11] To investigate the interactions between HPCs and the liver microenvironment during HCC occurrence, we utilized a set of ligand-receptor (L-R) pairs to gain insight into the regulatory relationships among HPCs, myofibroblasts, endothelial cells, and immune cells. Based on this analysis, myofibroblasts, endothelial cells, and macrophages showed strong potential interactions with HPCs from 16 wpt (Figure 4A,B). To further explore the potential mechanism underlying the malignant differentiation of HPCs, we performed NicheNet analysis, which allowed us to predict cellular interactions by linking ligands to target gene expression in HPCs from 16 wpt. [12] Interestingly, the results showed that inhibitor of differentiation-1 (ID1) in HPCs can be activated by ligands (BMP2 and TNFSF12) derived from cells in the liver microenvironment, including myofibroblasts, endothelial cells, and macrophages (Figure 4C). Id1 is a stemness-associated gene and is overexpressed in several types of cancer. Compared to HPCs from week 4, Id1 expression was significantly increased in HPCs from 16 wpt, both in the scRNA sequencing data and at the mRNA level in primary HPCs (Figure 4D). We then validated these observations by fluorescence co-staining and observed increased expression of ID1 in HPCs from 16 wpt compared to normal liver and liver at 4, 8, and 12 weeks (Figure 4E). To further decipher the receptor-ligand interactions among myofibroblasts, endothelial cells, macrophages, and HPCs, we performed cell-cell interaction analysis; the data showed that TNFSF12 and BMP2 might regulate ID1 expression through binding to related receptors, including TNFRSF12A and SMO, in HPCs (Figure 4F). These results strongly imply that ID1 may play a key role in the malignant transformation of HPCs induced by microenvironment-associated cells.
Macrophage-Derived TWEAK Promoted ID1 Expression through Activation of NF-κB Signaling
To further investigate the mechanism of microenvironment-induced ID1 expression, we examined the expression of Tnfsf12 and Bmp2 in myofibroblasts, endothelial cells, and macrophages, the ligands predicted to upregulate ID1 in HPCs from 16 wpt. As shown in Figure 5A, Tnfsf12 was upregulated in myofibroblasts, endothelial cells, and macrophages, whereas Bmp2 was mainly upregulated in myofibroblasts and endothelial cells. The receptor of Tnfsf12 (Tnfrsf12a) is highly expressed in HPCs, whereas Smo expression was low (Figure 5B), suggesting that Tnfsf12 is the dominant contributor in HPCs. The Tnfsf12 gene encodes tumor necrosis factor-like weak inducer of apoptosis (TWEAK), a secreted protein. Next, we examined TWEAK expression in myofibroblasts, endothelial cells, and macrophages and found that macrophages were the main producers of TWEAK at 16 wpt (Figure 5C). The co-localization of TWEAK-expressing macrophages with ID1-expressing HPCs was also confirmed by fluorescence co-staining analysis (Figure 5D). We then isolated rat liver macrophages at different time points of DEN treatment and collected conditioned medium (CM) from macrophages from 16 wpt; this CM effectively upregulated ID1 expression in HPCs (Figure 5E). ELISA showed that the level of TWEAK secreted by macrophages from 16 wpt was significantly increased (Figure 5E), suggesting that TWEAK was mainly derived from macrophages in the liver microenvironment. To further demonstrate the role of TWEAK in regulating ID1 expression in HPCs, we used Tnfsf12 siRNA to block TWEAK secretion in macrophages (Figure S3G, Supporting Information); ID1 expression in HPCs was reduced when they were treated with CM from Tnfsf12 siRNA-treated macrophages from 16 wpt (Figure 5F). HPCs from week 8 were isolated and treated with TWEAK; HPCs that received 100 or 150 ng mL−1 TWEAK for 24 h exhibited significantly increased levels of phosphorylated p65, phosphorylated IκBα, and ID1 (Figure 5G). Next, aurintricarboxylic acid (ATA), an inhibitor of TWEAK/fibroblast growth factor-inducible 14 (Fn14)/NF-κB signaling, [13] was used to determine whether Fn14-NF-κB signaling mediates TWEAK-induced ID1 upregulation. As presented in Figure 5H, TWEAK-mediated ID1 upregulation was inhibited by ATA treatment. To further confirm the role of TWEAK/Fn14/NF-κB signaling in the upregulation of ID1 expression, Tnfrsf12a shRNA and BAY 11-7082 (an inhibitor of NF-κB signaling) were used to suppress TWEAK/Fn14/NF-κB signaling in HPCs; under these conditions, TWEAK no longer induced ID1 expression (Figure 5I,J and Figure S3H, Supporting Information), indicating that TWEAK/Fn14/NF-κB signaling plays a key role in ID1 induction in HPCs.
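The ligand-receptor screens that nominated TNFSF12 above (Figures 4A,B,F) follow the CellPhoneDB logic described in the Experimental Section: score a pair by the mean expression of the ligand in the sender cluster and the receptor in the receiver cluster, then assess significance by permuting cluster labels. A self-contained toy sketch, with all values synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 400
lig = rng.gamma(2.0, 1.0, n_cells)   # toy ligand expression, e.g. Tnfsf12
rec = rng.gamma(2.0, 1.0, n_cells)   # toy receptor expression, e.g. Tnfrsf12a
labels = rng.choice(["macrophage", "HPC"], n_cells)

def score(lig, rec, lab, sender, receiver):
    # mean of ligand mean in the sender and receptor mean in the receiver
    return 0.5 * (lig[lab == sender].mean() + rec[lab == receiver].mean())

obs = score(lig, rec, labels, "macrophage", "HPC")
null = np.array([score(lig, rec, rng.permutation(labels), "macrophage", "HPC")
                 for _ in range(1000)])
p = (null >= obs).mean()             # empirical one-sided p-value
print(f"interaction score = {obs:.3f}, permutation p = {p:.3f}")
```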
To further verify the role of macrophage-derived, TWEAK-induced ID1 upregulation in HCC occurrence, rats treated with DEN for 8 weeks received one tail-vein injection of clodronate liposomes per week to eliminate macrophages (Figure 5K,L). HCC occurrence was significantly inhibited after treatment with clodronate liposomes (Figure 5K), and further investigation found that clodronate liposome treatment significantly decreased the expression of ID1 in HPCs (Figure 5M). These results indicate that macrophage-derived TWEAK promotes ID1 expression, which mediates the malignant transformation of HPCs through the Fn14/NF-κB pathway.
ID1 Suppresses Differentiation and Promotes Cell Proliferation in HPCs through Inhibition of HNF4α and Rap1GAP Transcription
To investigate the mechanism of ID1-induced malignant transformation of HPCs, HPCs were isolated from 16 wpt, cultured as organoids in vitro, and transduced with Id1 shRNA lentivirus (Figure S4A, Supporting Information). Id1 shRNA effectively reduced the expression of ID1 in HPCs (Figure S4B, Supporting Information). Knocking down ID1 in HPCs greatly inhibited the formation of HPC-derived organoids (Figure 6A). We further examined the effect of ID1 inhibition on the tumorigenicity of HPCs. As shown in Figure 6B, the tumorigenic potential of HPCs was suppressed by ID1 knockdown. TUNEL staining revealed increased apoptosis of HPCs in the ID1-knockdown group, and immunofluorescence and IHC data indicated upregulation of cleaved caspase-3 and downregulation of PCNA (Figure 6C). Immunoblot analysis of ID1-knockdown HPCs showed marked downregulation of proteins in the cell proliferation-related Rap1, PI3K-AKT, MAPK, and TGF-β signaling pathways, which were activated in malignant HPCs (Figure 6D and Figure S4C, Supporting Information). We also examined the effect of TWEAK on these pathways and found that TWEAK effectively activated Rap1, PI3K-AKT, MAPK, and TGF-β signaling (Figure S4D, Supporting Information). These results strongly imply that TWEAK-induced upregulation of ID1 may drive the malignant transformation of HPCs by promoting cell proliferation and suppressing apoptosis. ID1 was first reported to be expressed in stem cells and to inhibit their maturation. [14] We therefore investigated the effect of ID1 on the stemness of HPCs. As shown in Figure 6E, the expression of stem cell markers, including Sox9, EpCAM, CD133, ALDH1A1, and CD44, was greatly reduced in HPCs transfected with ID1 shRNA compared to the scramble group. To further validate the influence of ID1 on HPC differentiation, we designed a lentivirus to overexpress ID1 and transduced HPCs from week 8 (Figure S4E,F, Supporting Information). In vitro differentiation experiments confirmed that overexpression of ID1 notably suppressed the expression of the hepatocyte markers Alb, Cyp2b1, Hnf3β, Hnf6, and Hnf4α (Figure S4G, Supporting Information and Figure 6F), suggesting that ID1 inhibits the differentiation of HPCs into hepatocytes. Compared with the scramble group, overexpression of ID1 promoted the expression of PCNA and stem cell markers (Figure 6G).
Taken together, our results reveal that ID1 inhibits differentiation and enhances the stemness of HPCs, thereby promoting their malignant transformation by increasing cell proliferation while decreasing apoptosis. ID1 proteins lack a DNA-binding domain and function as dominant-negative regulators of basic helix-loop-helix (bHLH) transcription factors by heterodimerizing with bHLH factors such as E2A and inhibiting their binding to DNA. [14]
Figure 5 (legend, panels D-H). D) Co-staining of TWEAK-expressing macrophages with ID1-expressing HPCs (EpCAM, green) by immunofluorescence; nuclei stained with DAPI (blue). E) Macrophages from different time points of DEN treatment were collected, and conditioned medium (CM) from week 4 and 16 wpt was used to treat HPCs; ID1 was examined by western blotting (lower left panel), and TWEAK levels were measured by ELISA (lower right panel); data presented as mean ± SD, ***p < 0.001. F) CM from Tnfsf12 siRNA-treated macrophages from 16 wpt was used to treat HPCs; ID1 was examined by western blotting, with GAPDH as the internal reference. G) HPCs were treated with 0, 50, 100, or 150 ng mL−1 TWEAK, and Fn14, p-p65, p65, p-IκBα, IκBα, and ID1 were detected by western blotting; GAPDH as the internal reference. H) TWEAK (100 ng mL−1) and ATA (10 μM) were used to treat HPCs, and Fn14, p-p65, p65, p-IκBα, IκBα, and ID1 were detected by western blotting; GAPDH as the internal reference.
We performed SCENIC analysis to investigate transcription factors in HPCs from the scRNA sequencing data. [15] The results indicated that E2A transcription factor family members (TCF3 and TCF12) were expressed in HPCs from the rat model, and SCENIC also identified a regulatory network of TCF3 and TCF12 in HPCs (Figure 6H and Figure S4H, Supporting Information). In vitro experiments likewise demonstrated that E2A protein is expressed in HPCs (Figure 6I). Immunoprecipitation (IP) assays indicated that ID1 binds the E2A protein (Figure 6J). JASPAR predicted that the E2A motif can bind three sites in the hepatocyte nuclear factor 4α (HNF4α) promoter and one site in the Rap1 GTPase-activating protein (Rap1GAP) promoter (Figure 6K). HNF4α is a nuclear receptor that plays an important role in mediating the differentiation of HPCs, and Rap1GAP has been reported to be a negative regulator of Rap1 activity with an important role in tumor cell proliferation. [16] Silencing Id1 increased HNF4α and Rap1GAP protein levels, whereas overexpression of Id1 suppressed them (Figure 6L). CUT&Tag assays revealed that E2A binds three high-affinity E-boxes in the HNF4α promoter and one in the Rap1GAP promoter (Figure 6M), and the binding of E2A to the HNF4α and Rap1GAP promoters was enhanced after ID1 knockdown (Figure 6N). Transcription of HNF4α and Rap1GAP was also suppressed when E2A was knocked down by shRNA (Figure S4I,J, Supporting Information), indicating that ID1 inhibits HNF4α and Rap1GAP transcription by binding the E2A protein. Taken together, our results reveal that ID1 inhibits the differentiation of HPCs and promotes cell proliferation via suppression of HNF4α and Rap1GAP transcription, thereby promoting the malignant transformation of HPCs.
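The JASPAR prediction step above looks for E2A-class binding sites; the canonical E-box recognized by E2A proteins is CANNTG. A toy consensus scan over a made-up promoter fragment (a real analysis would score the JASPAR position weight matrix with a threshold, not a bare regex):

```python
import re

# made-up promoter fragment; a real scan would use the JASPAR E2A (TCF3)
# position weight matrix rather than the consensus alone
promoter = "GGCACCTGATTTCAGCTGCCGTACATGTGACCT"

# canonical E-box bound by E2A-class bHLH factors: CANNTG
for m in re.finditer(r"CA..TG", promoter):
    print(f"E-box at position {m.start()}: {m.group()}")
```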
The Correlation between ID1 Expression in HPCs of Clinical Specimens and the Prognosis of HCC Patients
Having demonstrated that ID1 plays a key role in the malignant transformation of HPCs during hepatocarcinogenesis in the animal HCC model, we next investigated the relationship between ID1 expression in HPCs, aberrant HPC differentiation, and the prognosis of HCC patients. We performed scRNA sequencing of liver tumors from 2 HCC patients and downloaded liver scRNA sequencing data of healthy donors (n = 2), HCC tumor-adjacent tissue (n = 7), and tumor samples (n = 7) from GEO databases (Figure 7A-C). The HPC subset was identified by SOX9, EPCAM, CD24, and CLDN4 expression (Figure 7D and Figure S5A, Supporting Information). DEG enrichment analysis indicated that Rap1, PI3K-AKT, MAPK, and TGF-β signaling was also activated in HPCs from tumor-adjacent and tumor tissue of clinical samples, whereas epithelial cell differentiation, liver regeneration, and development were enriched in HPCs from healthy cases (Figure 7E), suggesting that the signaling pathways mentioned above might contribute to the malignant transformation of HPCs into CSCs. Pseudotime analysis also suggested that HPCs from HCC peritumor (HCCPT) and tumor (HCCT) samples were diverging from the HPCs of healthy samples (Figure 7F). ID1 expression was observed in HPCs from HCCPT and HCCT samples (Figure 7G,H), consistent with the findings in the rat HCC model. TNFSF12 was mainly derived from macrophages, and TNFRSF12A was also found in HPCs from HCCPT samples (Figure 7I,J and Figure S5B, Supporting Information). Furthermore, co-expression analysis demonstrated that ID1-high HPCs largely overlap with EPCAM-, PROM1-, and CD44-high HPCs, and there is a significant correlation between ID1 and EPCAM, PROM1, and CD44 in HPCs from HCCPT and HCCT samples (Figure 7K,L and Figure S5C, Supporting Information). We used the ssGSEA approach to deconvolve the relative abundance of each cell type from expression profiling data retrieved from the GEO database [17] (Figure 7M). Based on this analysis, in HCC adjacent non-tumor tissues we found significant correlations between macrophage level and the level of activated HPCs (p = 6.24e-03, r = 0.37), between TNFSF12 expression and macrophage level (p = 2.51e-03, r = 0.41), and between ID1 expression and the level of HPC activation (p = 6.22e-04, r = 0.46) (Figure 7N). These results suggest that macrophage-derived TWEAK may contribute to ID1 expression and HPC proliferation. We further examined ID1 expression in HPCs in adjacent non-tumor tissues of clinical patients and divided the patients into high- and low-expression groups. Recurrence analysis revealed that the ID1-high group had a shorter time to recurrence than the ID1-low group (Figure 7O), suggesting that ID1 expression correlates with the malignant transformation of HPCs and the recurrence of HCC. We also verified the positive correlation between ID1 expression and SOX9 (p = 6.90e-05, r = 0.20), EPCAM (p = 5.37e-06, r = 0.23), CD24 (p = 1.82e-07, r = 0.27), and CLDN4 (p = 6.04e-05, r = 0.18) expression in HCC tumor tissue from TCGA datasets (Figure S5D, Supporting Information). Survival analysis showed that SOX9/ID1 double-positive samples had poorer overall survival than the SOX9/ID1 double-negative group; likewise, the combination of CLDN4 and ID1 indicated poorer OS in HCC patients (Figure S5E, Supporting Information).
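Recurrence and survival comparisons of this kind are typically Kaplan-Meier curves compared with a log-rank test. A minimal Python sketch using the lifelines package; the times, events, and group sizes below are synthetic stand-ins for the ID1-high/low cohorts:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# synthetic recurrence-free times (months) and event indicators
t_hi, e_hi = rng.exponential(18, 26), rng.random(26) < 0.8   # "ID1 high"
t_lo, e_lo = rng.exponential(36, 27), rng.random(27) < 0.8   # "ID1 low"

res = logrank_test(t_hi, t_lo, event_observed_A=e_hi, event_observed_B=e_lo)
print(f"log-rank p = {res.p_value:.3g}")

km = KaplanMeierFitter()
ax = km.fit(t_hi, e_hi, label="ID1 high").plot_survival_function()
km.fit(t_lo, e_lo, label="ID1 low").plot_survival_function(ax=ax)
```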
We performed drug sensitivity testing in liver tumor organoids derived from 5 HCC patients. Organoids from three patients (HCC-2, HCC-3, and HCC-5) showed resistance to sorafenib treatment, and two (HCC-1 and HCC-4) were sensitive (Figure S5F, Supporting Information). IHC data showed that the HCC-2, HCC-3, and HCC-5 cases presented a high level of ID1 expression (Figure S5G, Supporting Information). These results strongly imply that ID1 correlates with the prognosis of HCC patients.
Figure 5 (legend, continued, panels I-M). I) TWEAK was used to treat HPCs in which Fn14 expression was inhibited by Tnfrsf12a-shRNA#2; Fn14, p-p65, p65, p-IκBα, IκBα, and ID1 were detected by western blotting, with GAPDH as the internal reference. J) TWEAK (100 ng mL−1) was used to treat HPCs with or without BAY 11-7082; p-p65, p65, p-IκBα, IκBα, and ID1 were detected by western blotting, with GAPDH as the internal reference. K) Rats treated with DEN for 8 weeks received one tail-vein injection of clodronate liposomes per week to eliminate macrophages; HCC occurrence was observed in the different groups, and tumor number and volume were calculated, *p < 0.05. L) CD68 expression was detected by IHC; positive cells were counted per group, mean ± SD, ***p < 0.001. M) ID1 expression (green) was detected in HPCs (red) by immunofluorescence; nuclei stained with DAPI (blue); the white arrow indicates the positive location.
Figure 6 (legend, panels A-C). ID1 suppresses differentiation and promotes cell proliferation in HPCs through the inhibition of HNF4α and Rap1GAP transcription. A) Rat liver HPCs were treated with lentivirus expressing scramble shRNA (Scr), Id1 shRNA#2, or Id1 shRNA#3; HPC-derived organoid formation was observed by microscopy and counted per group, mean ± SD, ***p < 0.001. B) Tumorigenic potential of HPC-derived organoids assessed by subcutaneous injection of organoids into the right axilla of mice. C) Top: TUNEL staining of HPC apoptosis in each group; middle and bottom: cleaved caspase-3 and PCNA detected by immunofluorescence and IHC, respectively; positive cells were counted per group, mean ± SD, **p < 0.01, ***p < 0.001; the red arrow indicates the positive location.
Discussion
It has been confirmed that aberrant differentiation of HPCs in the inflammatory microenvironment is the origin of HCC. However, the dynamic changes and the potential molecular mechanisms underlying the malignant transformation of HPCs in the inflammatory microenvironment during the development of HCC remain elusive. The DEN-induced primary rat HCC model used here closely simulates chronic inflammatory damage-induced liver fibrosis, cirrhosis, and ultimately liver cancer, consistent with the pathogenesis of clinical HCC. We generated a comprehensive single-cell atlas of the liver to capture the cellular landscape from early development to terminal disease, and further analyzed the dynamic changes of the transcriptome and proteome in HPCs during the formation of HCC.
We found that macrophage-derived, TWEAK-promoted ID1 expression plays a key role in regulating the proliferation, differentiation, and malignant transformation of HPCs in hepatocarcinogenesis. The expression of ID1 in HPCs in clinical samples is associated with earlier recurrence and poorer prognosis in HCC patients. Our study represents an essential step toward understanding how HPCs initiate tumor occurrence and reveals active crosstalk between HPCs and the inflammatory microenvironment in HCC. As reservoir cells, HPCs have been shown to be activated in a wide range of liver diseases. [18] The presence of HPCs in primary liver cancers therefore raised the suspicion that they may be implicated in hepatocarcinogenesis, and related theories emerged that included maturation arrest and dedifferentiation as mechanisms. Here, we first described the dynamic changes of HPCs at different time points during the occurrence of liver cancer. We isolated primary HPCs from rats at the early, middle, and late stages of DEN-induced primary hepatocarcinogenesis. Only HPCs collected from peritumor tissue at the late stage (16 weeks) displayed a malignant phenotype. HPCs were activated in the early stage of HCC occurrence and participated in the repair of liver damage. As chronic liver injury persisted, the instability of HPC proliferation and differentiation increased: HPCs continued to proliferate while losing the ability to differentiate into hepatocytes, thus initiating tumorigenesis. scRNA sequencing, bulk RNA-seq, and proteomic analyses of primary HPCs suggest that the TGF-β, Rap1, PI3K-AKT, and MAPK signaling pathways may make a major contribution to the malignant transformation of HPCs. TGF-β signaling plays an important role in tumor initiation by controlling numerous cellular functions, including proliferation, apoptosis, and differentiation. [19] The Rap1 and PI3K-AKT signaling pathways exert a wide range of biological effects in tumorigenesis, including anti-apoptotic effects and the promotion of cell survival. [20] Abnormal or excessive activation of the MAPK signaling pathway plays an important role in the malignant transformation and evolution of cells. [21] The inflammatory microenvironment plays an important role in regulating the activation and function of HPCs during chronic liver injury. In the chronically injured rodent liver, oval cells are commonly accompanied by immune cells and cytokines. [11c] Furthermore, the infiltration of inflammatory cells is immediately followed by the proliferation of HPCs during chronic liver injury, [11d] and anti-inflammatory agents can effectively reduce the activation of HPCs in liver injury models. [22] The results of our previous work indicate a correlation between the degree of inflammatory infiltration and the number of oval cells. [4b] Previous research has reported that hepatic macrophages play a key role in HPC-mediated regeneration of hepatocytes. [8] Our work indicates that macrophages interact with HPCs by secreting high levels of TWEAK. TWEAK is a member of the TNF ligand superfamily and acts by binding to Fn14, its sole receptor, to initiate several intracellular signaling pathways, including NF-κB. [23] Biologically, TWEAK has been shown to regulate numerous cellular processes, including proliferation, differentiation, and cell survival, and has also been described as a pro-angiogenic and pro-inflammatory factor. [24]
In the chronic liver disease model, TWEAK-producing macrophages have been observed in close association with expanding ductal cells, demonstrating a primary role of macrophage-generated TWEAK in initiating the activation of HPCs. [25] The results from our study and previous studies indicate that during the pathological process of liver injury, macrophages accumulate in the inflammatory liver environment and produce high levels of TWEAK, creating conditions favorable for the proliferation and aberrant differentiation of HPCs.
Figure 6 (legend, continued, panels D-N). D) ID1 in HPC-derived organoids was inhibited by shRNA, and cell proliferation-related signaling pathways were examined by western blotting; GAPDH as the internal reference. E) ID1 in HPC-derived organoids was inhibited by shRNA, and stem cell markers were evaluated by western blotting; GAPDH as the internal reference. F) In vitro differentiation induction was performed on HPC-derived organoids in each group; protein levels of ALB and HNF4α (green) were examined by immunofluorescence, with nuclei stained by DAPI (blue). G) ID1 was overexpressed in HPC-derived organoids, and PCNA and stem cell markers were evaluated by western blotting; GAPDH as the internal reference. H) t-SNE plot showing the expression and regulatory activity of TCF3 in HPCs from the rat HCC model liver by scRNA sequencing. I) HPC-derived organoids were isolated and cultured from three rats (#1, #2, and #3), and ID1 was overexpressed; E2A levels were evaluated by western blotting, with GAPDH as the internal reference. J) Binding of ID1 and E2A detected by IP assay; GAPDH as the internal reference. K) Top: the consensus binding site of E2A; bottom: E2A binding sites in the HNF4α and Rap1GAP promoters, predicted by JASPAR. L) ID1 in HPC-derived organoids was up- or downregulated, and Rap1, Rap1GAP, and HNF4α were examined by western blotting; GAPDH as the internal reference. M) Enrichment of fragments containing the E2A binding sites within the HNF4α and Rap1GAP promoters in HPCs by CUT&Tag-qPCR (right) and agarose gel electrophoresis (left); fold enrichment is relative to the background DNA fragment pulled down by IgG immunoprecipitation; mean ± SD, ***p < 0.001. N) The same enrichment compared across groups; mean ± SD, ***p < 0.001.
ID1 belongs to the HLH family of transcription factors; it binds bHLH transcription factors and inhibits their DNA binding. ID1 has been proven to control the proliferation and differentiation of stem cells. [14] It is also linked to tumorigenesis and is highly expressed in numerous types of cancer, [26] exerting its tumor-promoting effects through different signaling pathways, including the K-Ras, BMP, PI3K/Akt, STAT3, MAPK, and TGF-β pathways.
Our research found that ID1 is highly expressed in malignantly transformed HPCs induced by macrophage-secreted TWEAK. Clinical investigation also verified that ID1 correlates with the malignant transformation of HPCs, and a high level of ID1 in HPCs indicates a poor prognosis in HCC patients. Further studies indicate that high ID1 expression promotes proliferation by regulating Rap1, PI3K-AKT, MAPK, and TGF-β signaling, and enhances the stemness of HPCs. We found that ID1 inhibits the transcription of HNF4α and Rap1GAP, products of bHLH target genes regulated by the E2A protein. HNF4α is a key mediator of HPC differentiation into hepatocytes. HNF4α deletion in hepatocytes is reported to cause hepatocyte differentiation defects and, in DEN-treated mice, to induce accumulation of HPCs and formation of tumors showing HCC morphology. [27] The HNF4α-mediated hepatocyte differentiation program results in bipotential progenitors, creating a persistent pre-neoplastic state primed for transformation by additional oncogenic mutation. [28] Rap1GAP is a GTPase-activating protein that inactivates Rap1-GTP, the functional form of Rap1, and has been identified as suppressed in cancers. [16,29] Our study indicates that Rap1GAP is inhibited by ID1, which leads to upregulation of Rap1 signaling and ultimately promotes the proliferation of HPCs. Differentiation arrest and malignant proliferation ultimately lead to the aberrant differentiation of HPCs into tumor-initiating cells. Taken together, we dynamically observed the transcription programs and signaling components in HPCs from the early stage until tumorigenesis in primary HCC using single-cell sequencing technology. The results suggest that macrophage-derived TWEAK promotes ID1 expression, which serves a key role in regulating the proliferation, differentiation, and malignant transformation of HPCs in hepatocarcinogenesis. ID1 suppresses differentiation and promotes cell proliferation in HPCs through the inhibition of HNF4α and Rap1GAP transcription. Finally, our findings indicate that ID1 expression in HPCs in clinical samples correlates with HCC recurrence. Our study provides a valuable resource, facilitates a deeper understanding of the mechanisms by which the hepatocarcinogenesis-associated microenvironment regulates HPC function, and identifies a potential biomarker for the prognosis and therapy of HCC patients.
Experimental Section
Animal Models and HCC Tissues: Male SD rats (8-10 weeks old, weighing 160-180 g) were obtained from Shanghai Laboratory Animal Center (Shanghai, China) and housed in a pathogen-free animal facility. Rats received DEN at a concentration of 95 mg L−1 through drinking water. At different time points, rats were sacrificed to obtain liver samples. To deplete macrophages, a 1 mL injection (≈20 mg) of clodronate-encapsulated liposomes (Clodronate Liposomes, Amsterdam, The Netherlands) was administered beginning at week 8 of DEN treatment and continuing once weekly until euthanasia. The animal protocols were approved by the Naval Medical University Animal Care Committee. Specimens of HCC tissues were obtained from 53 HCC patients who underwent hepatic resection at the Third Affiliated Hospital of Naval Medical University from 1997 to 2007. The clinical features are summarized in Table 1. All specimens were subjected to immunofluorescence analysis.
Fresh hepatobiliary resected tumors were collected with informed consent from patients enrolled at the Third Affiliated Hospital of Naval Medical University. Prior informed consent was obtained, and the study protocol was approved by the Ethics Committee of the Third Affiliated Hospital of Naval Medical University.
Preparation of Single-Cell Suspensions: Liver and tumor tissues were processed immediately after being obtained from DEN-treated rats and from human patients. Each sample was cut into small pieces (<1 mm), and the pieces were incubated with 1 mL of collagenase IV and 100 μL of DNase (Servicebio) for ≈15-30 min on a 37 °C shaker. Subsequently, 4 mL of DMEM was added to dilute the suspension, which was then filtered through a 70-μm cell mesh. After centrifugation at 250 g for 5 min, the supernatant was discarded and the cells were washed twice with PBS. The cell pellet was resuspended in 1 mL of ice-cold red blood cell lysis buffer and incubated at 4 °C for 10 min. Next, 10 mL of ice-cold PBS was added to the tube, which was then centrifuged at 250 g for 10 min. After decanting the supernatant, the pellet was resuspended in 5 mL of PBS containing 0.04% BSA. Finally, 10 μL of the suspension was counted under a microscope, with trypan blue used to quantify viable liver cells.
Single-Cell RNA Sequencing: Single-cell RNA sequencing was performed by Shanghai NovelBio Co., Ltd. Chromium Single Cell 3′ Reagent v3 kits were used to prepare barcoded scRNA-seq libraries according to the manufacturer's protocol. The cell suspension was loaded onto a Chromium single-cell controller (10x Genomics) to generate single-cell gel beads in emulsion (GEMs). Approximately 12,000 cells were added to each channel, with a target recovery of 8,000 cells. After GEM generation, reverse transcription reactions were used to generate barcoded full-length cDNA. The emulsions were disrupted using the recovery agent, and cDNA clean-up was performed with DynaBeads MyOne Silane Beads (Thermo Fisher Scientific). Next, cDNA was amplified by PCR for an appropriate number of cycles, depending on the number of recovered cells. Single-cell RNA-seq libraries were constructed using the Single Cell 3′ Library & Gel Bead Kit V2. Sequencing was performed on the Illumina HiSeq X Ten platform (Illumina, 150-bp paired-end protocol), according to the manufacturer's protocol.
Analysis of scRNA Sequencing Data: For all analyses, the rat genome (Ensembl v93) was used as a reference. For quality control, three measurements were calculated: the number of total genes, the number of transcripts, and the percentage of mitochondrial genes. Cells expressing over 25% mitochondrial genes, more than 40,000 transcripts, or fewer than 500 genes were removed. The normalized and batch-corrected data were imported into Seurat (v2.3.4) for downstream analysis and visualization. Dimensionality reduction was performed with principal component analysis (PCA). Unsupervised cell clusters of the same major cell type were selected for t-distributed stochastic neighbor embedding (t-SNE) analysis, graph-based clustering, and marker analysis to identify cell subtypes. Marker genes were calculated using the Seurat FindMarkers function with the Wilcoxon rank-sum test under the following criteria: 1) logFC > 0.25; 2) p < 0.05; 3) min.pct > 0.1.
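The QC and marker-gene steps above were run in Seurat (R); an illustrative Python equivalent with scanpy is sketched below. The input path is hypothetical, and the thresholds mirror those stated in the text.

```python
import scanpy as sc

# hypothetical Cell Ranger output directory
adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")

# flag mitochondrial genes (rat gene symbols are typically prefixed "Mt-")
adata.var["mt"] = adata.var_names.str.lower().str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

# thresholds mirroring the text: <=25% mito, <=40,000 transcripts, >=500 genes
adata = adata[(adata.obs["pct_counts_mt"] <= 25)
              & (adata.obs["total_counts"] <= 40_000)
              & (adata.obs["n_genes_by_counts"] >= 500)].copy()

sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)

# Wilcoxon marker testing analogous to Seurat's FindMarkers; the logFC > 0.25,
# p < 0.05, and min.pct > 0.1 cuts are then applied to the resulting table
sc.tl.rank_genes_groups(adata, "leiden", method="wilcoxon")
```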
To identify DEGs between groups, the Seurat FindMarkers function with the Wilcoxon rank-sum test was used under the same criteria: 1) logFC > 0.25; 2) p < 0.05; 3) min.pct > 0.1. Enrichment analysis of DEG functions was conducted based on KEGG pathways and GO analysis. To identify cellular interactions, cell communication analysis was applied based on CellPhoneDB, [30] a public database of ligands, receptors, and their interactions. Membrane, secreted, and peripheral proteins of each cluster were annotated. The mean expression and cell communication significance (p < 0.05) were calculated from the interactions and the normalized cell matrix obtained by Seurat normalization. The total number of ligand-receptor pairs between two clusters was obtained, and interactions were visualized as dot plots. NicheNet was utilized to gain a deeper understanding of cell-to-cell interactions; [12] this analysis draws on a large number of public databases (KEGG, ENCODE, PhosphoSite) to trace a receptor's targets in the provided dataset. Single-cell transcriptome datasets from liver tissues of healthy donors and patients with HCC were also collected: the liver tissue cells of healthy donors were from GSE136103, [31] and the liver tissue cells of patients with HCC came from GSE149614. [32]
Culture and Establishment of Rat Adult Liver and Human HCC Organoids: Primary HPCs were isolated from SD rats after treatment with DEN. Rats were anesthetized with pentobarbital sodium (30 mg kg−1). The liver was removed by surgical excision and kept cold at 4 °C in basal medium in a 100-mm dish. The liver was minced into pieces of roughly 0.5 mm³ using fine scissors, and the tissue pieces were washed. Then, 10 mL of digestion solution (0.1% type IV collagenase) prewarmed to 37 °C was added, and the digestion mixture was incubated on a shaker at 37 °C for ≈20-40 min. The supernatant was transferred to a fresh 50 mL centrifuge tube at 4 °C, and the digestion steps were repeated for the remaining tissue. The supernatant was filtered through 70 and 40 μm mesh filters. The cells were then seeded into Cultrex reduced growth factor BME2 (basement membrane extract, Type 2; Pathclear) and suspended in supplemented advanced DMEM/F12 medium. Tumor tissue from HCC patients was minced and digested with 0.25% collagenase IV (Sigma) and 0.1 mg mL−1 DNase (Sigma) at 37 °C. Tumor cells were then seeded into Cultrex reduced growth factor BME2 and overlaid with advanced DMEM/F-12 supplemented with 1:50 B-27, 1:100 N-2, 10 mM nicotinamide, 1.25 mM N-acetyl-L-cysteine, 10 nM [Leu15]-gastrin I, 10 μM forskolin, 5 μM A83-01, 50 ng mL−1 EGF, 100 ng mL−1 FGF10, 25 ng mL−1 HGF, 100 ng mL−1 RSPO1, and 100 ng mL−1 Noggin (Peprotech). For drug treatment, sorafenib tosylate (Cat. No. S-8502) was purchased from GLPBIO, dissolved in DMSO as 10 mM aliquots, and stored at −20 °C. Tumor organoids were plated at a density of 5 × 10³ cells in 15 μL BME2 droplets to form organoids. On day 6, sorafenib was added to the medium, and cell viability was measured after 6 days.
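Viability readouts from such drug treatments are usually normalized to the vehicle control and summarized by an IC50 from a dose-response fit. A minimal SciPy sketch on made-up numbers; the concentrations, readouts, and the two-parameter Hill model are illustrative, not the paper's analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# toy viability, already normalized to the DMSO vehicle control
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])     # uM, hypothetical
viab = np.array([0.98, 0.95, 0.85, 0.55, 0.25, 0.10])

def hill(c, ic50, h):
    # two-parameter log-logistic decay from 1 toward 0
    return 1.0 / (1.0 + (c / ic50) ** h)

(ic50, h), _ = curve_fit(hill, conc, viab, p0=[3.0, 1.0])
print(f"IC50 ~ {ic50:.2f} uM (Hill slope {h:.2f})")
# a resistant organoid line would show a right-shifted curve / larger IC50
```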
In Vivo Tumorigenicity Experiments: Six-week-old male athymic BALB/c nu/nu mice were obtained from the Shanghai Experimental Animal Center, Chinese Academy of Sciences. Mice were maintained under pathogen-free conditions and treated in accordance with the institutional animal welfare guidelines of the Naval Medical University. For the tumorigenicity assay, HPC-derived organoids from different time points of DEN-treated rat livers were cultured within 2 weeks, and 200 organoids were injected subcutaneously into the right axilla of the mice. At the end of 2 months, the mice were sacrificed for analysis.
Immunohistochemical Staining and Immunofluorescence: Slides were deparaffinized in xylene and rehydrated through graded alcohols. Endogenous peroxidase was inactivated with 3% hydrogen peroxide at room temperature for 20 min (IHC only). Next, antigen retrieval was enhanced by autoclaving the slides in 0.1 mol L−1 citrate buffer (pH 6.0) for 2 min. After washing with PBS, the sections were blocked with 3% BSA at 37 °C for 30 min. The slides were then incubated overnight at 4 °C with primary antibodies. Subsequently, HRP-conjugated goat antibody and DAB (Dako, Carpinteria, CA, USA) or fluorescently labeled secondary antibodies were used. Images were captured with a microscope, and at least three random areas per slide were selected to count positively stained cells. IHC analysis was performed using primary antibodies including EpCAM.
qRT-PCR: Total RNA was extracted using a HiPure Total RNA Plus Micro Kit (Magen, China) and reverse transcribed into cDNA using the Bestar qPCR RT Kit in a total reaction volume of 20 μL. qPCR was conducted using the Bestar One-Step RT-qPCR Kit (SYBR Green) (DBI, China) according to the manufacturer's instructions. The running parameters were: 95 °C for 1 min (pre-denaturation), followed by 40 cycles of 95 °C for 15 s (denaturation), 60 °C for 30 s (annealing), and 72 °C for 15 s (extension). GAPDH was used as the internal reference.
Bulk RNA Sequencing Analysis: Total RNA was extracted from each tissue sample using TRIzol (Life Technologies, Grand Island, NY, USA) according to the manufacturer's protocol. Five micrograms of RNA from each sample were used for the construction of transcriptome libraries with the Illumina TruSeq RNA Sample Preparation Kit (Illumina, San Diego, CA, USA) and sequenced on an Illumina HiSeq 2000 according to the manufacturer's instructions. Q20 was used as the quality-control standard to filter raw reads. After filtering low-quality reads, adaptors were removed from the high-quality reads, and clean reads were aligned to the rat genome using the UCSC rat reference [build Rn4]. Fragments per kilobase of exon model per million mapped reads (FPKM) values were calculated from the counts and lengths of genes. Differentially expressed genes with fold change (FC) ≤ 0.5 or FC ≥ 2 and p < 0.05 were selected. For GSEA, normalized RNA-seq values (FPKM) were rank-ordered by fold change as input, and the analysis was performed with GSEA software (version 4.2, https://www.gsea-msigdb.org/gsea/index.jsp). The sequencing was performed by Biomarker (Beijing, China). Transcriptome datasets were also collected from TCGA-LIHC, and the transcriptomes of 52 HCC adjacent non-tumor tissues were from GSE76427. [33]
Quantitative Proteomics: Protein was extracted from HPC-derived organoids, and label-free quantitative proteomics analysis was performed by Jingjie PTM BioLab Co., Inc. (Hangzhou, China). Systematic bioinformatics analysis was then performed on all identified proteins, mainly comprising quantification of protein expression and differential expression analysis. Based on the differentially expressed proteins, protein functions were classified by GO and KEGG enrichment analyses.
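The stated differential expression cutoffs (FC ≤ 0.5 or ≥ 2 with p < 0.05) amount to a simple filter over per-gene fold changes and p-values. A toy pandas/SciPy sketch (random data and a plain t-test; real pipelines use moderated statistics and multiple-testing correction):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene{i}" for i in range(2000)]
grp_a = rng.lognormal(2, 0.5, size=(2000, 3))   # toy FPKM, condition A
grp_b = rng.lognormal(2, 0.5, size=(2000, 3))   # toy FPKM, condition B

fc = grp_b.mean(axis=1) / grp_a.mean(axis=1)
p = stats.ttest_ind(grp_a, grp_b, axis=1).pvalue

df = pd.DataFrame({"gene": genes, "fold_change": fc, "p": p})
# the stated cutoff: FC <= 0.5 or FC >= 2, with p < 0.05
deg = df[((df.fold_change <= 0.5) | (df.fold_change >= 2)) & (df.p < 0.05)]
print(f"{len(deg)} genes pass the filter")
```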
Macrophage Isolation and Culture: Rats were anesthetized with pentobarbital sodium (30 mg kg−1). The liver was perfused in situ via the portal vein with warmed (37 °C) Hanks' balanced salt solution (HBSS), followed by 0.1% collagenase IV, then removed, minced into pieces, and treated with digestion solution (0.1% type IV collagenase) at 37 °C for 30 min. Cells were filtered through 70 μm mesh filters. Nonparenchymal cells were separated from hepatocytes by three 2-min centrifugations at 50 g, suspended in HBSS, layered onto a 60%/30% two-step Percoll gradient (Sigma), and centrifuged at 1600 g at 4 °C for 15 min. Macrophages in the middle layer were collected and allowed to attach to cell culture plates in DMEM with 10% FBS, 100 U mL−1 penicillin, and 100 μg mL−1 streptomycin at 37 °C for 1 h. Nonadherent cells were removed by replacing the culture medium. Cells and conditioned medium (CM) were collected for further experiments.
Enzyme-Linked Immunosorbent Assay: Conditioned medium was collected from macrophages, and TWEAK levels in the CM were determined using an enzyme-linked immunosorbent assay (ELISA) kit (Codino (Wuhan) Biotechnology Co., Ltd., China) according to the manufacturer's instructions.
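Kit-based ELISAs of this kind back-calculate sample concentrations from a standard curve, commonly a four-parameter logistic (4PL) fit. A minimal SciPy sketch with hypothetical standards and optical densities:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = maximal response,
    # c = inflection point, b = slope
    return d + (a - d) / (1.0 + (x / c) ** b)

# hypothetical TWEAK standards (pg/mL) and optical densities
std_x = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_y = np.array([0.12, 0.21, 0.38, 0.70, 1.15, 1.60])
prm, _ = curve_fit(four_pl, std_x, std_y, p0=[0.05, 1.0, 250.0, 2.0],
                   maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # invert the 4PL to read sample concentrations off the standard curve
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

samples = np.array([0.45, 0.95])   # e.g. CM from week-4 vs 16 wpt macrophages
print(od_to_conc(samples, *prm))
```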
Implementation of a required 3rd-year medical student surgical pathology clinical experience
Few medical schools have required experience in surgical pathology during the clinical years. After introducing a pilot and preliminary surgical pathology clinical experience into the curriculum, we initiated a required 3rd-year medical student surgical pathology clinical experience that consisted of a one-hour introductory lecture; a one-hour gross room, histology, and immunohistochemistry laboratory introduction; and one hour of one-on-one case sign-out preceptorship with a subspecialty surgical pathologist within the surgery and obstetrics/gynecology block. Concepts that were covered included specimen processing, intraoperative frozen section consultation, completing specimen requisitions, interpreting synoptic reports, and pTNM staging. Students evaluated the surgical pathologist from 1 to 5 (1 "poor/unhelpful," 2 "marginal," 3 "neutral," 4 "good," 5 "excellent/useful"). Ten multiple-choice questions (included as part of a perioperative services exam) and attendance were incorporated into students' perioperative services rotation grade. From 2014 to 2018, 757 students participated in the required 3rd-year surgical pathology clinical experience. Thirty academic subspecialty pathologists acted as preceptors, with an average of nine sessions per preceptor per year. Evaluation data from 316 students from 2015 to 2018 showed a mean preceptor rating of 4.8/5 (range 4.0-5.0). Students scored an average of 81% on the surgical pathology portion of the exam (range 21-99% for each question). We successfully implemented a required medical student surgical pathology clinical experience. At the clerkship's conclusion, students demonstrated understanding of key concepts and rated their preceptorship experience highly.
Most medical schools in the United States do not have required pathology experiences in the clinical years of medical school. [1-5] Despite surgical pathology accounting for the largest source of revenue in 78% of pathology practices, [6] there are few required clinical experiences in surgical pathology. [2] The College of Medicine at our institution, a United States allopathic medical school, planned to implement a new medical student curriculum in August of 2014. My goal was to create a required surgical pathology clinical clerkship experience within the new 3rd-year curriculum to produce graduates familiar with basic principles of surgical pathology. The updated 3rd-year clerkship curriculum consisted of three core blocks: surgery/obstetrics and gynecology, internal medicine/psychiatry/neurology, and family medicine/pediatrics. The surgery and obstetrics/gynecology 4-month block was targeted for inclusion of the surgical pathology clerkship because of overlapping patient care themes (Fig. 1). A meeting was held with the director of the surgery and obstetrics/gynecology 4-month block, who was receptive to including surgical pathology within the new curriculum. Senior leadership in the pathology department, including the chair and anatomic pathology vice-chair, were very supportive of this initiative. The experience aimed to create a better understanding of surgical pathology and to impart baseline knowledge of its key principles to all graduating medical students.
The surgical pathology clinical experience objectives were as follows: (1) state the information needed on the pathology requisition; (2) list the steps and timing of specimen processing, including accessioning, grossing, histology, immunohistochemistry, case sign-out, and intraoperative frozen section analysis; (3) describe how to submit a surgical pathology specimen for routine processing, intraoperative consultation, and lymphoma work-up, and explain the limitations of surgical pathology; (4) understand the role of a synoptic report; and (5) list the components of pTNM staging (Table 1). These objectives were designed to give medical students knowledge of surgical pathology processes and regulations applicable throughout the United States and, as such, relevant to their future careers in medicine. [7] A pilot program was conducted during the 2012-2013 academic year from September to November of 2012. Each session included 1 to 4 students and occurred on a Friday for three hours and 45 minutes. Students reported at 1 p.m. on their assigned Friday and were given an overview of the gross room laboratory for 30 min, the histology laboratory for 30 min, and the immunohistochemistry laboratory for 15 min by the gross room manager and the histology/immunohistochemistry histotechnologist manager. Students were then split into two groups for the following 2.5 h, spending one hour and 15 min in each activity and switching at the halfway point. One group actively engaged with individual pathologists' assistants in intraoperative consultation and gross prosection, while the other group was paired with an attending surgical pathologist, who facilitated one-on-one sessions with the students. Students chose a surgical pathology subspecialty for the experience based on daily availability, utilizing our subspecialty sign-out model. Subspecialty options included breast, gynecologic, otolaryngic, thoracic, and urologic pathology. Using a multiheaded microscope, typically within the attending surgical pathologist's office, the preceptor viewed and diagnosed active clinical cases while instructing the student throughout the completion of the cases, similar to the interaction with a pathology resident. Preceptors were required to include clinical cases with hematoxylin and eosin (H&E) slides, an immunohistochemical stain, a synoptic report, and pTNM staging to meet the curriculum objectives. At the conclusion of the pilot program, students completed an anonymous evaluation given by the College of Medicine. Students were asked to rate the overall experience using a Likert scale (1 = "Not useful at all," 2 = "A little bit useful," 3 = "Moderately useful," 4 = "Very useful," 5 = "Extremely useful") and a series of short answer questions ("What was the most helpful thing about the afternoon in pathology?" "Was there anything you did not learn on your afternoon in pathology that you had hoped to?" "What could be improved for next year?"). Students were evaluated via a multiple-choice quiz of 10 questions worth 1% of their surgery grade and a narrative clinical performance evaluation. A total of 21 students and 4 attending surgical pathologists participated in the pilot program across 8 Friday sessions. Of the 21 students, 18 submitted an evaluation at the end of the experience, with a mean rating of 4.0 ("Very useful").
Students responded to the short answer questions with a variety of answers, including, "I enjoyed seeing the process from start to finish and gaining a better understanding of what the process entails," "More clinical correlations," and "I think it was great that we had this afternoon to be exposed to this field, and I think the time devoted was about right to get a quick exposure and understanding of the field." Due to positive student responses from the pilot program, the pathology department was asked to begin the required surgical pathology clerkship experience for all students for the 2013-2014 academic year, rather than wait one year for the initiation of the new College of Medicine curriculum as was the original plan. To accommodate approximately 200 students, the experience was modified to include a one-hour introductory lecture to 60 students at the beginning of the block and a two-hour surgical pathology clerkship experience similar to the pilot, with the following modifications. First, the surgical pathology clerkship experience occurred on Tuesdays from 2 p.m. to 4 p.m. Second, students engaged with the pathologists' assistant gross room manager in the gross room laboratory for 30 min, the histotechnologist manager in the histology and immunohistochemistry laboratories for 30 min, and the surgical pathologist for one hour. Finally, the subspecialty options for students increased to include dermatologic, gastrointestinal, neurologic, ophthalmic, and orthopedic pathology in addition to breast, gynecologic, otolaryngic, thoracic, and urologic pathology. A total of 18 surgical pathologists served as preceptors for 245 students during the 2013-2014 academic year. The mean number of sessions per preceptor was 14 (range 3-22), with a 4.9% student no-show rate for the year. Students failed to attend their surgical pathology preceptor session for several reasons, including academic failure, leave of absence, excused absences uncommunicated to the pathology department, or unexcused absences. The College of Medicine implemented the new curriculum for the 2014-2015 academic year. The required surgical pathology experience was incorporated into the new perioperative services rotation, which consisted of anesthesiology, radiology, and surgical pathology and remained within the surgery and obstetrics/gynecology block. The surgical pathology clerkship time slot was moved to Wednesdays from 2 p.m. to 4 p.m. In the 2015-2016 academic year, the gastrointestinal surgical pathology fellow participated as an additional preceptor. After the experience, students completed a written evaluation in which they ranked their experience with the surgical pathologist on a scale of 1-5 and provided overall positive and negative feedback. At the end of each academic year, surgical pathology preceptors received an individual summary teaching evaluation, which was included in each faculty member's departmental annual review. Data from these evaluations have been compiled from 3 academic years (2015-2016, 2016-2017, and 2017-2018). Students were evaluated via 9-10 multiple-choice questions included in the perioperative services rotation exam (12% of the perioperative services rotation grade) and attendance (5% of the perioperative services rotation grade). In total, the surgical pathology experience accounted for 17% of the student's perioperative services rotation grade.
The perioperative services rotation grade (designated as honors, letter of commendation, satisfactory, or unsatisfactory) was included in the Medical Student Performance Evaluation (MSPE) letter for application to residency, alongside all other 3rd-year rotation grades (Fig. 2).
Table 1. Surgical pathology clinical experience objectives.
1. State the information needed on the pathology requisition
2. List the steps and timing of specimen processing, including accessioning, grossing, histology, and immunohistochemistry; case sign-out; and intraoperative consultation
3. Describe how to submit a surgical pathology specimen for routine processing, intraoperative consultation, and lymphoma work-up, and explain the limitations of surgical pathology
4. Understand the role of a synoptic report
5. List the components of pTNM staging
Fig. 2. Sample medical student performance evaluation (MSPE). The MSPE is sent to all residency programs to which a student applies. The surgical pathology experience is included in the perioperative services rotation, which receives a clerkship grade that is represented alongside all other required 3rd-year medical student clerkships.
A total of 27 surgical pathology attending physicians and 3 gastrointestinal surgical pathology fellows, ranging from 20 to 23 preceptors in a given year, partook in the surgical pathology experience across the 4 academic years from 2014 to 2018, precepting a total of 757 students. The mean number of sessions per preceptor each year was 9 (range 1-15). The student absence rate has been variable, ranging from 0.5 to 4.8%, with a modest improvement more recently (Table 2). Evaluation data from 316 students showed a mean preceptor rating of 4.8 out of 5 (range 4.0-5.0) (Table 2), and qualitative feedback is shown in Table 3. Exam data were collected from 2014 to 2018. In total, 711 students took the exam over the 4-year period, with an average score of 81.3% on the surgical pathology portion. Sample exam questions are shown in Table 4. The average percentage correct for each surgical pathology question ranged from 20.7 to 98.7%. The most significant resource that the clerkship experience required was preceptor time and participation. For each class of approximately 200 students per year, we enlisted 20-23 academic subspecialty surgical pathologists in our department to act as preceptors. Each preceptor contributed less than one hour per month (nine hours on average each year) to the clerkship. The faculty mentors who participated were provided with student feedback as part of their department's annual review. Student evaluations, which have been almost uniformly positive, were also incorporated into promotion and tenure dossiers. Most of the faculty in our department have a limited role in the education of medical students, and thus this experience provides a beneficial opportunity for faculty to demonstrate involvement and excellence in medical education. Additional resources required to run the clerkship included laboratory staff (i.e., the gross room manager, histotechnologists, and pathologists' assistants), who gave instructional sessions in the laboratory for one hour each week, and the time of the surgical pathology rotation director, who, with assistance from the pathology education coordinator, organized student and faculty scheduling and communicated with the College of Medicine. For departments with fewer surgical pathologists on site, strategies could still be utilized to accommodate a large number of medical students.
For example, each attending can precept multiple students at the same time, particularly if there is access to a multiheaded microscope or a camera mounted on a microscope, which can display images in real time. Fellows and residents can also be utilized as preceptors.

During the implementation of the experience, the largest barriers to success that arose were unexpected student and faculty absences. We addressed the issue of preceptor absences by sending the faculty schedule via email a few weeks in advance, with email reminders 3 days prior to, as well as on the day of, the faculty member's participation. Additionally, after implementing a system to give the faculty direct feedback and comments from students, the largely positive evaluations from the students provided positive reinforcement and motivation for improved faculty engagement. Both of these strategies have helped to decrease the preceptor absence rate. Student absences were largely due to poor communication between the College of Medicine and the pathology department (i.e. failure of the College of Medicine to update the coordinator regarding excused absences or the status of students no longer in medical school) and students not remembering their assigned session. This issue was addressed by requesting improved communication with the College of Medicine and implementing a strict policy to no longer allow make-up sessions, which was emphasized to students during orientation and by the College of Medicine. These strategies modestly improved the student absence rate, although the issue has not been completely resolved, with the most recent student absence rate at 4%.

Essential to the success of the program was support from senior leadership in the department (chair and anatomic vice chair of pathology). Leaders can appreciate the many potential benefits of this new rotation experience, such as increased visibility of the department across the medical center; increased medical student direct contact teaching hours, which could result in College of Medicine remuneration; opportunities to showcase medical student teaching in promotion and tenure documents; creation of future colleagues with a better understanding of pathology; and improved recruitment into pathology.

Table 3. Qualitative student feedback.
"Helped show me key findings in recent cases and explained importance of report, where clinicians should read."
"Great to learn about how to thoroughly read pathology reports and how to correctly submit specimens. Thanks!"
"He showed me a number of slides and explained the importance of pathology reports and how we as future clinicians can better serve our patients by giving and understanding the correct diagnosis."
"Having a slide to look at normal and abnormal pathological specimen, as well as an experienced pathologist available to walk through the histology greatly expedites the learning process. I felt I learned a lot in that one hour, almost more so than during my first two years of medical school."
"Very helpful! It's good to finally learn what happens after specimens leave the OR."
"I feel I'll be more comfortable as a clinician interacting with pathologists/reports as a result."
"This was a fascinating opportunity to see the hard work that surgical pathologists put into making an accurate diagnosis. I firmly believe that it will help me work as part of a more effective team with my colleagues in the future. Thank you!"
"Helpful to know what pathologists need to know about the specimen. Good session."

Table 4. Sample exam questions.
1. You are performing a nephrectomy on a 36-year-old female. Intraoperatively, you notice a lesion in the abdomen for which you would like an intraoperative consultation. The best way to submit a specimen from the OR for intraoperative frozen section is which of the following?
a) In glutaraldehyde (0.1%)
b) In formalin (6.5%)
c) In RPMI (0.3%)
d) In saline (0.3%)
e) Dry or on a Telfa pad moistened with saline (89.7%) [correct]

2. A cirrhotic liver is surgically removed from a 59-year-old female. The specimen is submitted to pathology and is described, sectioned, and selected portions submitted for histologic examination. This process is which of the following?
a) Intraoperative consultation (2.0%)
b) Histologic analysis (11.8%)
c) Immunohistochemistry (0.3%)
d) Grossing (68.9%) [correct]
e) Accessioning (17.0%)

3. A 35-year-old male with hematuria and a history of metastatic testicular germ cell tumor is currently undergoing a bladder biopsy. Which of the following is required to note on the pathology specimen requisition?
a) Current oral medications (1.0%)
b) History of chemotherapy/radiation (95.9%) [correct]
c) Mental status (0.1%)
d) Family history (0.8%)
e) Surgical history (2.1%)

Correct answers are marked [correct] (bolded in the original), with the percentage of student responses noted by each response.
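For readers who want to reproduce the item statistics, the sketch below recomputes per-question difficulty and a mean score from response distributions like those in Table 4. It is an illustrative recomputation, not code from the study; the question labels in the dictionary are invented for the example, and only the three sample items above are included (the reported 81.3% average covers the full question set).

```python
# Recompute item difficulty (percent correct) from response distributions.
# Keys are answer choices; values are the percentage of students choosing each.
items = {
    "frozen_section_submission": {"a": 0.1, "b": 6.5, "c": 0.3, "d": 0.3, "e": 89.7},
    "grossing": {"a": 2.0, "b": 11.8, "c": 0.3, "d": 68.9, "e": 17.0},
    "requisition_history": {"a": 1.0, "b": 95.9, "c": 0.1, "d": 0.8, "e": 2.1},
}
correct = {"frozen_section_submission": "e", "grossing": "d", "requisition_history": "b"}

for name, dist in items.items():
    # Item difficulty is simply the fraction of students answering correctly.
    print(f"{name}: {dist[correct[name]]:.1f}% correct")

mean_correct = sum(items[n][correct[n]] for n in items) / len(items)
print(f"mean over these three sample items: {mean_correct:.1f}%")
```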
By familiarizing future physicians with surgical pathology, we hoped to address and improve several issues relevant to pathologists. One problem that pathologists are commonly presented with is a deficiency of relevant clinical information on pathology requisition forms. A study by Nakhleh et al. revealed deficiencies in identification or accessioning in 6% of accessioned cases, with the most common deficiency (40%) identified as "no clinical history or diagnosis present on the requisition slip".8 In order to address this issue, we educated students on the proper methods of submitting requisitions and specimens for analysis through several modalities, including an introductory lecture, hands-on lab didactics with discussion of specimen/requisition submission, and one-on-one interaction with an attending pathologist. The second issue addressed was the overutilization of intraoperative frozen section consultation. A few reports from large academic medical centers revealed that 5-12% of intraoperative frozen section consultations did not influence patient care and were considered inappropriate.9,10 Through our clinical experience, students learned the indications and limitations of frozen section interpretation and how to correctly submit specimens. Students were tested on these two key practice gaps through written examination, with 94% of students correctly answering questions on these topics and an overall exam question average of 81%. Scores from these questions and attendance contributed 17% of the perioperative services rotation grade (which also included anesthesiology and radiology), which was reported alongside other required 3rd-year rotations in the MSPE, a critical part of the residency application. In addition to providing quantitative data on students' understanding, it is our belief that formally scoring a student's performance imparts a uniform and higher expectation than a formative experience without such an assessment. We also aimed to improve communication between pathologists and future physicians by facilitating one-on-one interactions between students and surgical pathologists.

Published studies have shown that there are disparities in interpretation between pathologists and clinicians regarding terms communicating diagnostic uncertainty in pathology reports.11,12 Through one-on-one interaction with the surgical pathologist, students learned how to read and interpret a synoptic report and observed as the pathologist created reports in real time and explained their content. Furthermore, direct interactions with a surgical pathologist early in medical training may help future clinicians feel more comfortable interacting with and utilizing pathologists as part of the healthcare team throughout their careers.

The focus of the clinical experience we created was not to increase recruitment into pathology (although that may be an unintended effect) but to impart a basic understanding of surgical pathology to all of our medical students to utilize in whichever field they pursue. Some may view increasing student exposure to pathology as a method to address the issue of decreased recruitment of medical students into the field of pathology, which is perceived to be due to the relatively recent lack of a dedicated pathology course in the preclinical years. Previously, medical schools usually required clinical experience in family medicine, internal medicine, obstetrics and gynecology, pediatrics, psychiatry, and surgery. However, National Resident Matching Program (NRMP) data demonstrate the increasing competitiveness of other specialties that were not previously required rotations or dedicated courses.13-16 For example, fields such as physical medicine and rehabilitation and dermatology were not classically required clinical rotations or dedicated courses within the preclinical years in medical school, and thus students typically have limited exposure. Despite this limited exposure, NRMP data have shown increasing competitiveness of these specialties through metrics such as increasing Step 1 and 2 scores, a growing applicant pool, and an increasing number of research experiences per applicant.13-16 Additionally, McCloskey et al. showed that participating in a separate pathology course did not increase pathology as a residency choice, but experiences within the final two clinical years, including clinical experience in pathology, did.17

In conclusion, we have successfully created and implemented a required clinical experience in surgical pathology for all of our 3rd-year medical students. We believe exposure to surgical pathology during the clinical years is vital to creating more well-rounded physicians with a basic knowledge of the application of surgical pathology to patient care. The experience we have created was well-received by students (mean 4.8/5) and faculty in our department of pathology and did not require excessive redistribution of student time or departmental resources. We encourage other medical schools to implement similar required experiences for their medical students to educate future physicians about surgical pathology and address key practice gaps.

Declaration of competing interest
The authors declare that there is no conflict of interest.
PURPOSE: With the advent of personalized cancer treatments, patient-specific in vitro models of breast malignancies are critical for modeling the complex in vivo milieu and for better assessing therapeutic efficacy. A primary challenge to developing such platforms is isolating and successfully co-culturing the many primary cell types that constitute the tumor microenvironment. In addition, optimizing the cost-effectiveness of such platforms is critical to making the technology translational. Using patient-derived cells and machine learning-based image analysis, we have developed a low-cost 3D biomimetic platform that allows for the study of cell behavior in a highly organotypic model of breast malignancy.

METHODS: Stromal vascular fraction, organoids, and mature adipocytes were isolated from breast tissue obtained from healthy female patients undergoing reduction mammoplasty. Isolated cells were embedded in non-ribosylated collagen, creating a biomimetic milieu that approximates the patient-specific in vivo cellular environment ("biomimetic collagen"). 3D collagen constructs consisting of a bottom layer of plain collagen or collagen embedded with RFP-tagged MDA-231 tumor cells, followed by a layer of plain or biomimetic collagen, were built in a 96-well plate. GFP-tagged human umbilical vascular endothelial cells (HUVECs) were plated in a monolayer on top of the collagen to mimic the endothelial barrier. Constructs were imaged with confocal microscopy on days 0 and 7 to assess endothelial cell organization and migration. Image analysis was completed in Imaris. ANOVA tests for statistical significance were completed in R Studio.

RESULTS: Confocal microscopy at day 0 revealed ~800 μm-thick constructs with distinct tumor and non-tumor collagen layers and a robust HUVEC top monolayer. Confocal images at day 7 were notable for organization of the HUVEC layer into 2D tubular networks, with some HUVECs below the top monolayer indicating migration through the collagen matrix. Our machine learning-based approach was able to reliably and reproducibly parse HUVEC cells from the surrounding cells and collagen, allowing for quantification of cell migration in the Z-axis. Statistical analysis revealed a significant difference in migration across treatment groups (p < 0.01), with endothelial cells in the tumor and biomimetic collagen platform migrating 20-40 microns further than endothelial cells in the other construct designs, a finding retained on intergroup pairwise comparison (p < 0.01).

CONCLUSION: This biomimetic platform design and image analysis allow for reliable, reproducible study of discrete endothelial cell interactions with the tumor microenvironment. The finding of enhanced HUVEC migration in the platform with both tumor and biomimetic cells is consistent with the hypothesis that stromal cells, adipocytes, and malignant cells enhance the angiogenic capacity of endothelial cells. Importantly, the utilization of patient-derived stromal vascular fraction and mature adipocytes in this platform makes it inherently personalized, a vital component of any future clinically-relevant in vitro cancer platform.
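The statistical analysis above (one-way ANOVA across construct designs, followed by pairwise comparisons, originally done in Imaris and R Studio) follows a standard pattern. The sketch below illustrates that pattern in Python under stated assumptions: all data values are synthetic placeholders, the group labels are invented for the example, and Welch t-tests with a Bonferroni correction stand in for whatever post-hoc procedure the authors actually used.

```python
# Illustrative sketch of the reported analysis pattern: one-way ANOVA on
# Z-axis endothelial migration across construct designs, then pairwise
# comparisons. The numbers are synthetic placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-cell Z-displacement (microns) for four construct designs.
groups = {
    "plain/plain": rng.normal(10, 8, 50),
    "tumor/plain": rng.normal(15, 8, 50),
    "plain/biomimetic": rng.normal(18, 8, 50),
    "tumor/biomimetic": rng.normal(40, 8, 50),  # enhanced-migration group
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3g}")

# Pairwise Welch t-tests with a Bonferroni correction as a simple follow-up.
names = list(groups)
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p_ij = stats.ttest_ind(groups[names[i]], groups[names[j]], equal_var=False)
        print(f"{names[i]} vs {names[j]}: p = {min(1.0, p_ij * n_pairs):.3g}")
```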
Jennifer C. Lee, MSE, Nishant G. Kumar, MD, Stephen W.P. Kemp, PhD, Paul S. Cederna, MD, Theodore A. Kung, MD, University of Michigan, Ann Arbor, MI, USA.

PURPOSE: Regenerative peripheral nerve interface (RPNI) surgery is a validated method for mitigating pain in postamputation patients. However, the long-term efficacy of RPNI surgery has yet to be fully characterized. The purpose of this clinical study was to assess long-term pain outcomes and opioid medication intake in amputation patients who have undergone RPNI surgery for the treatment or prevention of postamputation pain.

METHODS: Patients included in this study underwent RPNI surgery between 2014 and 2021 for the treatment or prevention of postamputation pain and had at least one year of follow-up. A retrospective review was performed to collect preoperative demographic and clinical information. Outcomes for neuroma pain and phantom limb pain (PLP) were assessed from the medical records at both the 1-year and the most recent follow-up visits. Postoperative pain was compared to preoperative pain and categorized into the following outcomes: no symptoms, improved symptoms, stable symptoms, or worsened symptoms. Opioid prescriptions were obtained through the EMR at preoperative and follow-up visits and converted to morphine milliequivalents (MME) based on dosage and type of opioid medication.

RESULTS: Seventy-seven patients met inclusion criteria in this study. Thirty-seven patients underwent RPNI surgery to treat preexisting postamputation pain (treatment group) and 40 patients underwent prophylactic RPNI surgery at the time of amputation (prophylactic group). Fifty-five patients (26 in the treatment group and 29 in the prophylactic group) had additional follow-up visits after the one-year postoperative time frame. Average time from surgery at the most recent follow-up was 50 months (4.2 years). For the treatment group, at the time of the most recent follow-up, favorable neuroma pain and PLP outcomes (no reported symptoms or improved symptoms) were seen in 77% and 61% of patients, respectively. For patients treated prophylactically, 97% had no recorded neuroma pain or PLP at the time of the most recent follow-up visit. The average change in MME at the time of the most recent follow-up was also negative for both the treatment group (-12 +/- 83) and the prophylactic group (-40 +/- 103).

CONCLUSIONS: This study demonstrates the long-term clinical benefits of RPNI surgery on improving neuroma pain and PLP, as well as a reduction of opioid intake, for amputation patients who underwent the procedure. These findings suggest that the benefits of RPNI surgery on pain outcomes are highly favorable, and thus this surgery should be considered for all amputation patients.

SP07. Mechanical Stimulation Improves Functional Recovery after Skeletal Muscle Injury in Rats

Hiroshi Fujimaki, MD, PhD, Nicole Ayres, BS, Eddy Rios, BS, Ryan Chen, BA, Jenna Lambert, BS, Jaeyoung Lee, BS, Giorgio Giatsidis, MD, PhD, UMass Chan Medical School, Worcester, MA, USA.

PURPOSE: Skeletal muscle injury (SMI) caused by trauma or surgery can result in permanent disability and loss of function.
Physical therapy is the current standard of care for SMI, but long-term recovery of muscle strength has been shown to be inadequate in severe SMI. Initial evidence suggests that, in animals, invasive mechanical stimulation can increase tetanic torque up to 3-fold after SMI. Here, we hypothesized that mechanical stimulation can improve functional recovery after SMI by stimulating skeletal muscle regeneration and by mitigating fibrosis.

METHODS: A standard excisional muscle injury (8 mm ∅) was created in the left tibialis anterior (TA) muscle of adult female (200-250 g) Sprague Dawley rats (n = 10/group). Post-injury, animals were either followed with no treatment (control group) or subjected to controlled mechanical stimulation of the injured muscles (experimental group). Functional recovery was measured at 14 and 28 days post-injury (PID) by measuring the TA tetanic torque and the animals' endurance on a treadmill run. In addition, as a measure of non-challenging physical activity, the distance traveled was measured by fixed-point observation using a camera. At PID 28, samples of injured TAs were processed for histology (Masson) and immunohistochemistry (markers: MHC, Col-1, CD31, CD68) to measure myocyte/fibrosis percentage composition.

RESULTS: At PID 28, the tetanic torque in the experimental group was significantly higher than in controls (73.1 ± 18.7% of pre-injury baseline vs. 47.0 ± 23.7%, p = 0.014). The resistance to fatigue in the experimental group was significantly higher than in controls (88.7 ± 24.9% of pre-injury baseline vs. 56.6 ± 27.5%, p = 0.014). Endurance on a treadmill run was also significantly higher in the experimental group compared to controls (97.8 ± 58.5% of pre-injury baseline vs. 40.6 ± 35.4%; p = 0.016) at the same timepoint. At histology, the treatment group showed less fibrosis and a higher myocyte percentage composition.

CONCLUSION: In rats, mechanical stimulation improves functional recovery after skeletal muscle injury. Validation of these findings in large animal models and in humans might help develop novel treatments for patients with muscle injuries caused by trauma or surgery.
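As a rough illustration of how the reported percent-of-baseline comparisons might be computed, the sketch below regenerates synthetic samples matching the quoted summary statistics for tetanic torque and runs an unpaired two-sample comparison. The abstract does not name the statistical test used, so the Welch t-test here is an assumption, and the samples are placeholders rather than the study's measurements.

```python
# Sketch of the percent-of-baseline comparison reported above. The abstract
# gives group means +/- SD (tetanic torque at PID 28: 73.1 +/- 18.7% vs.
# 47.0 +/- 23.7%, n = 10/group) but does not name the test; we assume an
# unpaired Welch t-test on synthetic samples matching those statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10
stimulated = rng.normal(73.1, 18.7, n)  # % of pre-injury baseline torque
control = rng.normal(47.0, 23.7, n)

t, p = stats.ttest_ind(stimulated, control, equal_var=False)
print(f"tetanic torque, PID 28: t = {t:.2f}, p = {p:.3f}")
```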
Analysis on Dust Devil Containing Loess Dusts of Different Sizes

A dust devil in the convective boundary layer (CBL) was simulated by an Euler-Lagrange approach. By means of a large-eddy simulation method and a smoothly stretched grid, flow fields of the small-scale whirlwind were obtained. Movements of 20,000 dust particles were tracked by computing external forces and velocities with the aid of the simulated high-resolution air flow fields. Characteristics of the simulated airflow were in good accordance with field observations. The distribution of particles reproduced the shape of a dust devil. Statistics of particle trajectories revealed basic properties of the movement of dust particles. Small particles with a diameter of 0.04 mm were able to form a rolling cylinder and to be lifted easily to a certain height. Dust particles of 0.1 mm spiraled upwards like a cone with a small cone angle. Particles with diameters of 0.16 mm and 0.3 mm were obviously thrown outward, with limited lifting height, and fell back to the ground. The negative vertical pressure gradient in the dust devil strengthened particle lifting, unlike in horizontal wind systems, where the vertical pressure gradient is not the major driving force. The numerical results showed why the total suspended particulates (TSP) concentration greatly exceeds the standard value year-round on the Loess Plateau, where loess dust from the local ground is one of the major sources of air pollutants. Ninety percent of loess dusts are smaller than 0.04 mm in diameter, which makes them easily lifted to a high altitude by dust devils even without obvious horizontal wind. Because thermal plumes are common in the CBL, dust devils can occur frequently at many locations on the Loess Plateau. According to the natural circumstances of the Loess Plateau and the thermodynamic characteristics of the dust devil, the dust-devil-scale simulation indicated one source of the background TSP on the Loess Plateau and how it is lifted into the atmosphere.

INTRODUCTION

A small whirlwind in the convective boundary layer (CBL) occurring in sand-rich areas can lift small particles and form a visible dust column, the so-called dust devil, as shown in Fig. 1. Recent research and evidence have shown that the dust devil is a universal dust transport system (Metzger, 1999; Leovy, 2003; Gu et al., 2006). A vertical vortex of air comes into being when hot air near the ground rises; the vorticity makes the nearby air inflow spiral toward the low pressure center under the thermal (Smith and Leslie, 1976; Renno et al., 1998; Kanak et al., 2000), and the fast-spinning air can be strong enough to lift dust and sand, as shown in Fig. 2.
It is observed that temperature rises by 4-8 K and pressure drops by 2.5-4.5 hPa in the core of the dust devil as compared with the ambient air (Ives, 1947; Sinclair, 1973; Hess and Spillane, 1990; Xu, 2002). Both vertical and tangential velocities can reach 15 m/s. The dimension of the visible dust devil column varies with the whirlwind. Sinclair (1973) reported dimensions of tens of meters in diameter and 600 m in height in the deserts of Arizona. Hess and Spillane (1990) observed dust columns of 32-141 m in diameter and 300-600 m in height in Australia. Most dust devils last only a few minutes, traveling with the wind. Therefore, a single dust devil is definitely a micro-meteorological phenomenon, and many aspects of dust devils (including the inner structure, the environmental impacts, and the detailed picture of dust devil evolution, etc.) remain largely obscure. Preliminary estimations of the weight of dust-devil-lifted dust/sand (Hall, 1981; Metzger, 1999) showed that the contribution of dust devils to floating dusts is about one order of magnitude larger than that of vehicles in arid lands. Greeley et al. (2003) and Gu et al. (2006) discussed the particle threshold of dust lifting in dust devils. These studies pointed out the importance of detailed research on dust devils, which would benefit the evaluation of the total impacts of dust devils on the environment. The present study follows the dust-devil-scale approach of Gu et al. (2006). Mathematical models and their implementation are given in Section 2, numerical results and related discussions are presented in Section 3, and we end with a summary in Section 4.

LES model for the gas phase

The three-dimensional LES governing equations for incompressible air are

$$\frac{\partial \bar{u}_i}{\partial x_i} = 0,$$

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j} = -\frac{1}{\rho_c}\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\!\left[(\nu + \nu_{SGS})\!\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right)\right] + g\,\beta\,(\bar{T} - T_c)\,\delta_{i3},$$

$$\frac{\partial \bar{e}}{\partial t} + \frac{\partial (\bar{u}_j \bar{e})}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left[\left(\frac{\nu}{Pr} + \frac{\nu_{SGS}}{Pr_{SGS}}\right)\frac{\partial \bar{e}}{\partial x_j}\right],$$

where the overbar denotes filtering, $u_i$ is the velocity component in the $i$ direction in Cartesian coordinates, $p$ the pressure, $e$ the inner energy, $t$ the time, $T$ the temperature, $\rho_c$ the reference density, $T_c$ the buoyancy reference temperature, $\beta$ the thermal expansion coefficient, and $\delta_{ij}$ the Kronecker delta.

Subgrid-scale (SGS) motions, i.e. turbulent eddies smaller than the filter width, are parameterized via Germano's dynamic model (Germano et al., 1991). In Germano's model, the SGS viscosity $\nu_{SGS}$, together with the molecular viscosity $\nu$, affects the flow field through the two SGS terms

$$(\nu + \nu_{SGS})\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{1}{3}\tau_{kk}\,\delta_{ij} \qquad \text{and} \qquad \left(\frac{\nu}{Pr} + \frac{\nu_{SGS}}{Pr_{SGS}}\right)\frac{\partial \bar{e}}{\partial x_j},$$

where $Pr$ is the Prandtl number, $\nu$ is the molecular viscosity, the SGS Prandtl number $Pr_{SGS}$ is 0.33, and the SGS viscosity $\nu_{SGS}$ is calculated by Germano's dynamic model.

Dust movement model for the particle phase

Dust movement is tracked using the Lagrange approach. The dust particle, the dispersed phase, is assumed to be spherical. The trajectory of each particle is determined by Newton's second law as

$$m\,\frac{d\mathbf{u}_s}{dt} = \mathbf{F},$$

where $m$ is the particle mass, $\mathbf{u}_s$ the particle velocity vector, and $\mathbf{F}$ the overall external force acting on the particle (Fan and Zhu, 1998). The main external forces considered in the present study are the drag force, the gravity and buoyancy force, the pressure gradient force, and the added mass force.
The drag force can be calculated in the form

$$\mathbf{F}_D = \frac{1}{8}\,\pi d^2 \rho\, C_D\, |\mathbf{u} - \mathbf{u}_s|\,(\mathbf{u} - \mathbf{u}_s).$$

The gravity and buoyancy force can be calculated as

$$\mathbf{F}_g = \frac{\pi d^3}{6}\,(\rho_s - \rho)\,\mathbf{g}.$$

The force due to the pressure gradient is

$$\mathbf{F}_p = -\frac{\pi d^3}{6}\,\nabla p.$$

And the added mass force is

$$\mathbf{F}_a = \frac{\pi d^3}{12}\,\rho\,\frac{d(\mathbf{u} - \mathbf{u}_s)}{dt}.$$

Farrell et al. (2004, 2006) paid close attention to the electrostatic field and implied that the electrostatic force should also be considered. However, the mechanism and quantitative evaluation of static electricity in dust devils are obscure, and the number of particles that can be tracked at the same time is limited, so the electrostatic force is not taken into consideration. The major external forces considered in the present study are enough to represent the basic characteristics of dust movement in a dust devil.

Implementation

The simulation domain is a cylinder 1200 m in height and 400 m in diameter, which contains the column of the dust devil and the nearby region, as illustrated by the bold dashed-lined rectangular region in Fig. 2. In the vertical direction, the grid size is smoothly stretched, with the first grid point 0.1 m above the surface. Local grid refinement is also applied in the center of the dust devil in the horizontal directions to ensure that the detailed structure of the dust devil can be obtained. Spatial and temporal discretization employ common second-order schemes.

The dust-devil-scale simulation is quite different from a CBL-scale simulation. The latter includes the whole region shown in Fig. 2, aiming to reveal the emergence, development, and attenuation of thermal convection. The dust-devil-scale simulation, however, concerns the dust-devil column in a single thermal cell in the near-ground region. Therefore, the initial and boundary conditions of the dust-devil-scale LES were set according to the background field of the thermal plume in the CBL (Gu et al., 2006). The initial radial and vertical velocities were set to zero. The initial temperature below 800 m was 313 K, and the region above 800 m was preset as the inversion layer to buffer the violent updraft, as many CBL-scale simulations do. The surface temperature was 343 K to represent the typical effect of solar heating on the ground on hot summer afternoons. The ground and the upper boundaries were no-slip boundaries. The lateral face was a pressure inlet boundary (Zhao et al., 2004).

Twenty thousand dust particles were released at the surface in the mature stage. Restricted by the computing capability, the number of released particles was smaller than in reality, and particle collision was negligible. Nevertheless, the released particles were enough to show the characteristics of dust movement. The true density of dust is 2,600 kg/m³. A large number of dust particles with different sizes (40, 100, 160, and 300 µm) were simulated and tracked in order to give the general shape of the dust devil and to compare the trajectories and velocities of particles. According to Table 1 and Table 2, the diameters of most loess dusts are less than 30 µm. Therefore, dust particles with a diameter of 40 µm can represent most loess dusts. Large sand particles typically have diameters over 200 µm, and particles with a diameter of 300 µm can be adopted to represent large sand particles. So, in the present study, particles with these 4 different sizes can show the overall behaviors of dust devils.
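As a concrete illustration of the Lagrangian tracking described above, the following minimal sketch advances a single particle under the four named forces. It is a reconstruction under stated assumptions rather than the authors' code: the Schiller-Naumann drag law, the air density and viscosity, and the toy flow field are assumptions, and the added mass force is folded into an effective inertia as a common shortcut; only the dust density of 2,600 kg/m³ comes from the text.

```python
# One Lagrangian update for a dust particle under drag, gravity/buoyancy,
# pressure gradient, and added mass. `air_velocity` and `pressure_gradient`
# stand in for interpolation of the resolved LES fields at the particle.
import numpy as np

RHO_AIR = 1.2        # kg/m^3, assumed ambient air density
RHO_DUST = 2600.0    # kg/m^3, true density of dust (from the text)
NU_AIR = 1.5e-5      # m^2/s, assumed kinematic viscosity of air
G = np.array([0.0, 0.0, -9.81])

def drag_coefficient(re):
    # Schiller-Naumann fit; the paper does not specify its C_D law.
    return 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000 else 0.44

def step(x, v, d, air_velocity, pressure_gradient, dt):
    """Advance particle position x and velocity v by one explicit Euler step."""
    vol = np.pi * d**3 / 6.0
    m = RHO_DUST * vol
    u_rel = air_velocity(x) - v
    re = max(np.linalg.norm(u_rel) * d / NU_AIR, 1e-12)
    # Drag: F_D = (pi/8) rho C_D d^2 |u - u_s| (u - u_s)
    f_drag = np.pi / 8.0 * RHO_AIR * drag_coefficient(re) * d**2 \
             * np.linalg.norm(u_rel) * u_rel
    f_gravity = vol * (RHO_DUST - RHO_AIR) * G        # gravity + buoyancy
    f_pressure = -vol * pressure_gradient(x)          # pressure gradient force
    # Added mass folded in as an effective inertia (common shortcut).
    m_eff = m + 0.5 * RHO_AIR * vol
    a = (f_drag + f_gravity + f_pressure) / m_eff
    return x + v * dt, v + a * dt

# Example: a 40 micron particle in a crude swirling updraft with a toy
# vertical pressure gradient (values are illustrative only).
x, v = np.array([1.0, 0.0, 0.1]), np.zeros(3)
updraft = lambda x: np.array([-x[1], x[0], 5.0])
grad_p = lambda x: np.array([0.0, 0.0, -0.5])   # Pa/m
for _ in range(1000):
    x, v = step(x, v, 40e-6, updraft, grad_p, dt=1e-3)
print("position after 1 s:", x)
```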
RESULTS AND DISCUSSIONS

When the air flow reaches the mature stage, air strongly spirals up in the core of the thermal plume. The three-dimensional vortex structure results in a low pressure region in the center of the dust devil, especially in the near-surface region of the center. Fig. 3 shows the pressure drop: air spirals into the whirlwind in the near-surface region and turns upward in the lower part of the whirlwind. The air updraft can reach 15 m/s in the region with the large pressure drop. The weak downdraft in the upper central part of the dust devil forms the whirlwind eye and also impedes the spread of dust particles. The present numerical results for the pressure drop and velocity vectors in the whirlwind correspond well with observations (Ives, 1947; Hall, 1981; Hess and Spillane, 1990). The dust devil, as an aeolian transport system, has been studied by Metzger (1999). Gu et al. (2006) gave a more detailed picture of the dust devil to study the dust lifting mechanism. By calculating the forces exerted on many individual dust particles, and therefore their velocities, the movement of particles in the whirlwind and the overall shape of the dust devil can be simulated by the Euler-Lagrange approach.

Fig. 4 shows the spatial distributions of dust particles with different diameters in the simulated dust devil. Fig. 4a compares the spatial distribution of 40 µm and 160 µm particles in a three-dimensional view, and Fig. 4b compares the spatial distribution of 100 µm and 300 µm particles in the x-z plane. Only particles lying in the center of the ground (the region with significant air updraft in the whirlwind) can be lifted. In Fig. 4a, 40 µm dusts are lifted and form a cylindrical shape, while 160 µm particles form a conical shape. The outward scattering of 160 µm particles is obvious and makes most of the 160 µm particles lie outside the 40 µm dusts. A similar result can be found in Fig. 4b, in which the outward scattering of 300 µm particles makes them look like a bowl. The difference in elevation height of the particles is also shown in Fig. 4. The appearance of the simulated dust devil is quite similar to observed dust devils (Fig. 1). The main column of the dust devil contains most of the small particles. Large particles are lifted only to a limited height, spread outward, and form the "skirt" around the base of the main column.

The real shapes of dust devils vary with the whirlwind and the properties of the dust/sand, but the overall patterns of dust devils are the same as those of the simulated dust devils. The negative vertical pressure gradient in the dust devil helps particles to be lifted, unlike in horizontal wind systems, where the negative vertical pressure gradient is not a major driving force. Once dusts leave the ground in the dust devil, the revolution of particles is mainly maintained by the pressure gradient force exerted by the air. The centrifugal force makes big particles easy to be thrown outward, while small dusts can spin up to a height of hundreds of meters.

Fig. 5 gives projections of the velocity vectors of 100 µm and 300 µm particles. For clarity, 300 µm particles in the front half of the computational domain, y < 0, are drawn, while 100 µm particles in the back half are drawn. Vectors are drawn every 20 particles. Fig. 5a is the x-z projection for 100 µm particles in the back half region; Fig. 5b is the x-z projection for 300 µm particles in the front half region; and Fig. 5c is the x-y projection for both 100 µm and 300 µm particles.
Both the direction and the relative magnitude of the velocity vectors reveal properties of the particle movements. Most particles with high speed are located in the central near-surface region. Small dusts (100 µm) can be lifted to over 30 m in height and still maintain a positive vertical velocity. Large sand grains (300 µm), however, can be lifted only to about 15 m high. The vertical speeds of many large grains are quite low. And large grains, which are thrown out of the main body of the whirlwind, drop down to the ground gradually.

Fig. 5. Velocity vectors of 100 µm and 300 µm particles in the central 30 × 30 × 40 m region: (a) x-z projection for 100 µm particles, (b) x-z projection for 300 µm particles, (c) x-y projection for both 100 µm and 300 µm particles. For clarity, 300 µm particles in the front half of the domain, y < 0, are drawn, and 100 µm particles in the back half, y > 0, are drawn.

Instantaneous statistics of the 100 µm and 300 µm particle distributions in the vertical direction are shown in Fig. 6. The percentage of the number of particles in every 1 m in the vertical direction is given in Fig. 6a. The vertical percentage distribution of 100 µm particles is smoother than that of 300 µm particles. The percentage of 300 µm particles increases rapidly with height below 5 m and then decreases above the height of 5 m. The maximum percentage of 300 µm particles at a height of 5 m indicates that it is not easy to move large particles over a long distance. Fig. 6b gives the mean vertical speed computed by the following expression:

$$W_m = \frac{1}{n}\sum_{i=1}^{n} w_i,$$

where $W_m$ is the mean vertical speed, $w_i$ the vertical speed of the $i$-indexed particle, and $n$ the number of particles located at heights ranging from $x-1$ to $x$ m ($x$ = 1, 2, …, 30). Similar to Fig. 6b, Fig. 6c gives the mean elevation angle of the velocity vector, with the elevation angle defined as

$$\theta = \arctan\!\left(\frac{w}{\sqrt{u^2 + v^2}}\right).$$

The maximum mean vertical speed and mean elevation angle of 300 µm particles occur just above the ground. When 300 µm particles move to a height over 3-5 m, the mean vertical speed and mean elevation angle are close to zero, and the emergence of small negative values indicates the decline of the large particles. On the other hand, 100 µm small particles maintain a positive mean vertical speed and mean elevation angle in the whole 30 m above the ground. The maximum mean vertical speed of 100 µm particles is located at about 2 m above the ground, while the maximum mean elevation angle is located at about 8 m above the ground. Variations of the statistics of 100 µm particles are moderate in the region above 10 m, where 300 µm large particles are sparse. The mean elevation angle distribution of 300 µm particles (Fig. 6c) shows that the 300 µm large particles have reached their maximum elevation height, 15 m (Fig. 6a). Owing to the short integration time, the 100 µm particles at the highest altitude still have a positive elevation angle, which means that the simulated dust devil could raise small particles much higher than 30 m if the simulation continued.
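The layer statistics of Fig. 6 are straightforward to compute from the tracked particle arrays. The sketch below illustrates the computation of the per-layer particle percentage, the mean vertical speed $W_m$, and the mean elevation angle for 1 m bins; the particle positions and velocities here are random placeholders standing in for the simulation output.

```python
# Bin particle heights into 1 m layers (x = 1, ..., 30) and compute the
# percentage of particles, the mean vertical speed W_m, and the mean
# elevation angle per layer. Arrays are placeholders for simulation output.
import numpy as np

rng = np.random.default_rng(2)
z = rng.uniform(0.0, 30.0, 20000)                       # particle heights (m)
u, v, w = (rng.normal(0, 3, 20000) for _ in range(3))   # velocity components

edges = np.arange(0, 31)                                # 1 m bins, 0 to 30 m
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (z >= lo) & (z < hi)
    n = mask.sum()
    if n == 0:
        continue
    pct = 100.0 * n / z.size
    w_m = w[mask].mean()                                # W_m = (1/n) sum w_i
    elev = np.degrees(np.arctan2(w[mask], np.hypot(u[mask], v[mask]))).mean()
    print(f"{lo:2d}-{hi:2d} m: {pct:5.2f}% of particles, "
          f"W_m = {w_m:+.2f} m/s, mean elevation angle = {elev:+.1f} deg")
```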
According to the simulation results, the trajectories of particles with different diameters in dust devils, as shown in Fig. 7, cannot be perfectly described by the common mechanism of horizontal wind-shear transport systems. In the simulated whirlwind system, particles with diameters smaller than 40 µm can spiral up in the cylindrical shape illustrated as curve (a) in Fig. 7 and form the major visible part of the dust devil. Particles with a diameter close to 100 µm are lifted in a conical shape with a small cone angle, illustrated as curve (b) in Fig. 7. Curves (c) and (d) represent the dust-lifting patterns for 160 µm and 300 µm particles, respectively; these larger particles often form the outer skirt around the base of the dust devil. The strength of the whirlwind can influence its dust-lifting capability. Nevertheless, these typical simulated dust-lifting patterns remain and conform to reality. The whirlwind, with its low-pressure heat core, makes dusts lift easily from the ground as compared with horizontal wind systems. Dust devils often occur in the convective boundary layer with small mean horizontal wind speed. Since thermal plumes are a basic property of the CBL, there are many potential spots where dust devils can take place. Although the total contribution of dust devils to floating dusts is difficult to evaluate at present, it is believed that the dust devil is an important way to lift dusts into the air.

The Loess Plateau is the largest loess deposition zone in the world. The mean diameter of loess dust is small. Basic properties of loess dust are given in Table 1 and Table 2. More than 90% of loess dusts are smaller than 40 µm in diameter. According to the present simulation results, most loess dusts can be easily lifted to a high altitude by dust devils. These small loess dusts can be suspended and diffuse in the air for a long period, and have been proved to be a major source of PM10 and TSP. The phenomenon that the concentration of TSP exceeds the standard value on most days on the Loess Plateau is a result of natural circumstances. The dust devil provides a key channel to transport dusts from the ground to the air in the convective boundary layer. Because loess dusts are small and easy to lift, and dust devils occur frequently in the CBL, especially in direct sunlight, the dust devil plays an important role in the formation of the background PM10 (including TSP) on the Loess Plateau.

SUMMARY

A dust devil in the convective boundary layer was simulated by an Euler-Lagrange approach. By means of the large-eddy simulation method, a smoothly stretched grid, and carefully set boundary conditions for the specific circumstances, flow fields of the small-scale whirlwind were obtained. Movements of 20,000 dust particles were tracked by computing external forces and velocities with the aid of the simulated high-resolution air flow fields. This dust-devil-scale numerical simulation of a gas-particle two-phase flow gives insight into the appearance and nature of dust devils.

Characteristics of the simulated airflow are in good accordance with field observations. Statistics of the particle trajectories give the striking shapes of dust devils. Small particles with a diameter of 0.04 mm can be lifted easily to a high level and form a rolling cylinder. Dust particles of 0.1 mm spiral upwards like a cone with a small cone angle. Both 0.16 mm and 0.3 mm particles are obviously thrown outward, with limited elevation height, and fall back to the ground. The figures of the three-dimensional velocity vectors of dust particles reveal that a wide range of dust particles (< 0.3 mm) can be lifted at the surface-atmosphere interface by the small whirlwind.
The numerical results explain why the TSP/PM10 concentration greatly exceeds the standard year-round on the Loess Plateau, where loess dust coming from the local surface is the major source of TSP. Ninety percent of loess dusts are smaller than 0.04 mm in diameter, which makes them easily lifted to a certain height by dust devils even without an obvious mean horizontal wind. Because the thermal plumes, which can lead to whirlwinds and thereafter dust devils, are common in the CBL, dust devils come into being frequently at many locations on the Loess Plateau, especially in direct sunlight. According to the natural circumstances of the Loess Plateau and the thermodynamic characteristics of the dust devil, the dust-devil-scale simulation indicates one source of the background TSP/PM10 on the Loess Plateau and how it is lifted into the atmosphere.

Fig. 2. Sketch of thermal plumes and dust devil in the convective boundary layer.

Fig. 6. Comparisons of statistics of 100 µm and 300 µm particles: (a) percentage of number of particles, (b) mean vertical speed, (c) mean elevation angle of velocity vector.

Table 1. Main chemical components of loess dust.

Table 2. Diameter distribution of loess dust.
Classification of extremal vertex operator algebras with two simple modules

In recent work, Wang and the second author defined a class of 'extremal' vertex operator algebras, consisting of those with at least two simple modules and conformal dimensions as large as possible for the central charge. In this article we completely classify the potential character vectors of extremal VOAs with two simple modules. We find 15 candidates, all but one of which is known to be realized by a VOA. We discuss the remaining potential character vector, corresponding to a VOA with central charge 33, along with its connection to a new holomorphic VOA with central charge 32. The primary tool is the theory of vector-valued modular forms, in particular the work of Bantay and Gannon.

Introduction

In 1988, a foundational article of Mathur, Mukhi, and Sen [MMS88] pioneered an approach to the classification of characters of rational conformal field theories using what are now called modular linear differential equations (MLDEs). The study of this classification problem, via MLDEs and other methods, has continued in the decades since, and in recent years there has been significant activity surrounding the classification of characters of chiral CFTs with two characters.

We organize this classification problem as follows. We take vertex operator algebras as a mathematical model for chiral CFTs, and for a sufficiently nice ('strongly rational') VOA V of central charge c, we consider its representation category C = Rep(V), which is a modular tensor category [Hua08]. One can recover the equivalence class of the central charge c mod 8 from C, and motivated by this we define an admissible genus to be a pair (C, c) consisting of a modular tensor category and a number c in the appropriate class mod 8 (cf. [Höh03]). It is natural to approach the problem of classification of VOAs and their characters by restricting to genera where both C and c are sufficiently small in an appropriate sense. We will take the rank of an MTC (i.e. the number of simple objects) as a measure of its size, although other options are possible.

The smallest MTC is the trivial one, Vec, and a genus (Vec, c) is admissible when c ≡ 0 mod 8. There are a total of three VOAs in the genera (Vec, 8) and (Vec, 16), all coming from lattices. The problem becomes interesting at (Vec, 24), where the classification of characters of VOAs can be read off from Schellekens' famous list [Sch93]. The classification of VOAs in (Vec, 24) is almost complete, but the uniqueness of a VOA with the same character as the Moonshine VOA has not been established. At (Vec, 32) the explicit classification problem of characters is already intractable, even just for the simplest examples coming from even unimodular lattices. This difficulty propagates to the study of genera (C, c) with c large, as for any fixed V ∈ (C, c) one obtains from every W ∈ (Vec, 32) a new VOA V ⊗ W ∈ (C, c + 32). Thus if one wishes to consider classification problems for higher central charge and rank(C) > 1, it is necessary to restrict to a class of VOAs which excludes VOAs like V ⊗ W. There is a natural notion of 'primeness' that one could consider in this context, but we will consider something slightly different.

The twist of a simple module M ∈ Rep(V) is given by $\theta_M = e^{2\pi i h}$, where h is the lowest conformal dimension of states in M. Thus for any (hypothetical) VOA V ∈ (C, c), we can recover the conformal dimensions of simple objects mod 1.
Moreover, there is an a priori upper bound [MMS88, Mas07] on the non-trivial conformal dimensions $h_j$ in terms of $c$; in the rank two case it reads

$$h \leq \frac{c+2}{12}. \tag{1.1}$$

A VOA V with rank(C) > 1 is called extremal [TW17] if the $h_j$ are as large as possible for c in light of (1.1). This is analogous to the extremality condition introduced by Höhn for holomorphic VOAs (i.e. VOAs V with Rep(V) = Vec) [Höh95]. Since the $h_j$ are determined mod 1 by C, extremality is equivalent to the condition $\ell < 6$, where $\ell := 1 + \frac{c}{2} - 6h$. The classification of (characters of) extremal non-holomorphic VOAs appears to be a tractable piece of the unrestricted general classification problem. In [TW17], it was demonstrated that when rank(C) is 2 or 3, the characters of a VOA are determined by its genus, and a list of potential character vectors was obtained up to central charge 48. In this article we give a complete classification of characters of extremal VOAs V with rank(Rep(V)) = 2, with no restriction on the central charge.

Main Theorem. There are 15 potential characters of strongly rational extremal (i.e. $\ell < 6$) VOAs V with exactly two simple modules. These characters are listed in Table A.1.

The theorem appears in the main body of the text as Theorem 3.13. Of the 15 potential characters in Table A.1, 14 have been realized by VOAs. The remaining case corresponds to the genus (Semion, 33), and we strongly believe that there is an extremal VOA in this genus. In Section 3.6 we describe a strategy for constructing this VOA which was described to us by Lam and Yamauchi, but at present the question remains open. The construction would depend on a certain interesting c = 32 holomorphic VOA with no weight-one states. The study of holomorphic VOAs with no weight-one states is a natural extension of Monstrous Moonshine and constitutes an important frontier in the theory of VOAs. The experimentally derived c = 32 candidate could serve as a first test case for attempts to construct such VOAs, and such a construction would be all the more satisfying for closing the last remaining case in the classification presented in our Main Theorem.

This article fits into a recent cluster of activity regarding the classification of VOAs with two simple modules (or, more generally, two characters). Just recently, Mason, Nagatomo, and Sakai [MNS18] used MLDEs to establish a classification result for VOAs with two simple modules satisfying certain additional properties, in the $\ell = 0$ regime. The VOAs covered by their classification are four affine VOAs and the Lee-Yang model; our c = 33 candidate has $\ell = 4$. In contrast, our paper uses techniques of Bantay and Gannon [BG07, Gan14] to compute fundamental matrices for spaces of vector-valued modular forms. Our approach is computational, in that we derive an explicit recurrence between potential character vectors in the genus (C, c) and those in (C, c ± 24). By studying the long-term behavior of this recurrence, we are able to obtain effective bounds on the possible central charges of extremal VOAs.

This article is an adaptation of the undergraduate thesis [Gra18] of the first author, which obtained a classification of characters for extremal VOAs with two simple modules, and which focused on the case c, h ≥ 0. Not long after the thesis was published online, an article [CM18] in the physics literature used MLDEs to obtain a classification similar to the one presented here, without having been aware of [Gra18].

The article is organized as follows. In Section 2, we review the classification of rank two modular tensor categories and modular data from the perspective of VOAs.
In Section 3.1, we review the tools from [BG07] which we will use to describe character vectors of VOAs, particularly the fundamental and characteristic matrices associated to a suitable representation of PSL(2, Z) and a bijective exponent matrix. In Section 3.2, we derive a recurrence relation which describes how characteristic matrices change under the transformation c → c ± 24. In Sections 3.3 and 3.4, we study the long-term behavior of this recurrence in the positive c and negative c situations, respectively, and in Section 3.5 we put these tools together to obtain our main theorem. In Section 3.6, we describe in more detail our interesting c = 33 candidate characters. Finally, in Appendix A we give tables of numerical data used in the proof of the main theorem, as well as all 15 extremal characters in rank two.

2. Rank two modular tensor categories

In this article we will consider VOAs which are simple, of CFT type, self-dual, and regular (or equivalently, rational and $C_2$-cofinite [ABD04]). For brevity, we will use the term strongly rational to describe such VOAs. We refer the reader to [DLM97, ABD04] for background on the adjectives under consideration, but we will explain here the consequences which are relevant for our work. A strongly rational VOA V possesses finitely many simple modules $V = M_0, M_1, \ldots, M_n$. We denote the category of V-modules by Rep(V), and write rank(Rep(V)) for the number of simple modules n + 1. We will assume throughout that every module $M_j$ is self-dual, as it simplifies the exposition and is satisfied in the rank two case. We are primarily interested in the characters of V,

$$\operatorname{ch}_j(\tau) = q^{h_j - c/24} \sum_{n \geq 0} \dim M_j(n + h_j)\, q^n,$$

where as usual $q = e^{2\pi i \tau}$, c is the central charge of V, $h_j$ is the smallest conformal dimension occurring in $M_j$, and $M_j(n + h_j)$ is the space of states of conformal dimension $n + h_j$. The foundational work of Zhu [Zhu96] demonstrated that the characters $\operatorname{ch}_j$ define holomorphic functions on the upper half-plane, and that their span is invariant under the action of the modular group. Thus if we set $\operatorname{ch} = (\operatorname{ch}_0, \ldots, \operatorname{ch}_n)^T$, there exists a representation $\rho_V : PSL(2, \mathbb{Z}) \to GL(n + 1, \mathbb{C})$ such that

$$\operatorname{ch}(\gamma \cdot \tau) = \rho_V(\gamma)\operatorname{ch}(\tau) \tag{2.1}$$

for all $\gamma \in PSL(2, \mathbb{Z})$ (recall that we assumed each $M_j$ to be self-dual). Here $\gamma \cdot \tau$ denotes the natural action of $PSL(2, \mathbb{Z})$ on the upper half-plane.

By the work of Huang ([Hua08], see also [Hua05]), Rep(V) is naturally a modular tensor category, and based on Huang's work Dong-Lin-Ng [DLN15] showed that Zhu's modular invariance is encoded by the S and T matrices of Rep(V) (see [EGNO15] for more detail on the S and T matrices of a modular tensor category). Recall that the normalization of S is only canonical up to a sign, and that for each choice of S the normalization of T is only canonical up to a third root of unity. By [DLN15, Thm. 3.10] (based on [Hua08]), we have that $\rho_V\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$ coincides with a normalization of the categorical S matrix of Rep(V), and it is straightforward to check directly that

$$\rho_V\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right) = \left(e^{2\pi i (h_j - c/24)}\,\delta_{jk}\right). \tag{2.2}$$

We now consider strongly rational VOAs V such that rank(Rep(V)) = 2. We will sometimes write h instead of $h_1$ for the non-trivial lowest conformal dimension, and similarly we will sometimes write M instead of $M_1$. A complete list of normalized S matrices for modular tensor categories of rank 2 is given by [RSW09, Thm. 3.1]:

$$S = \frac{\varepsilon}{\sqrt{1 + \alpha^2}}\begin{pmatrix} 1 & \alpha \\ \alpha & -1 \end{pmatrix}, \tag{2.3}$$

where $\varepsilon^2 = 1$, and $\alpha^2 = 1$ or $\alpha^2 = 1 + \alpha$. Observe that

$$\operatorname{ch}_j(i) = \sum_{n \geq 0} \dim M_j(n + h_j)\, e^{-2\pi(n + h_j - c/24)} > 0,$$

and thus the phase of $\operatorname{ch}_j(i)$ is independent of j. By (2.1), $\rho\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)\operatorname{ch}(i) = \operatorname{ch}(i)$, and thus $\rho\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$ fixes a vector all of whose entries have the same phase.
This observation allows us to refine (2.3) and conclude that if rank(Rep(V)) = 2 then $\rho\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$ must be one of:

$$\pm\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \pm\frac{1}{\sqrt{1 + \varphi^2}}\begin{pmatrix} 1 & \varphi \\ \varphi & -1 \end{pmatrix}, \tag{2.4}$$

where we use positive square roots and $\varphi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio. By the classification of [RSW09], there are exactly two modular tensor categories realizing each of (2.4) as a normalization of its S matrix, and these two are related by a reversal of the braiding.

Fix one of these 8 modular tensor categories C and its normalized S-matrix from (2.4). We wish to see how much information about a hypothetical V with Rep(V) = C we may recover. By definition, the non-normalized T matrix of C is the diagonal matrix $\left(e^{2\pi i h_j}\,\delta_{j,k}\right)$, and thus the equivalence class of h mod 1 is determined by C. Observe that if (S, T) are generators of a representation of PSL(2, Z) then (S, ζT) again generate a representation only if $\zeta^3 = 1$, and thus from (2.2) we can see that c mod 8 is determined by C as well. We summarize the 8 cases in Table 2.1. Each row corresponds to a modular tensor category, giving its normalized S matrix from (2.4), the equivalence classes of the central charge and minimal conformal weight of a hypothetical VOA realization, as well as a familiar name for the category and a VOA realizing the category, where appropriate/known.

[Table 2.1: the 8 rank two modular tensor categories, with their normalized S matrices, c mod 8, h mod 1, familiar names (Semion and Lee-Yang variants), and known realizations; e.g. one Lee-Yang variant is realized by the Lee-Yang model at c = -22/5, while another has no known realization.]

The genus of a strongly rational VOA V is the pair (Rep(V), c). In [TW17], the second author and Zhenghan Wang defined an extremal (non-holomorphic) VOA to be one with rank(Rep(V)) > 1 and such that the minimal conformal weights $h_j$ are as large as possible in light of a certain a priori bound [MMS88, Mas07]. See [TW17, §2.2] for more detail. When rank(Rep(V)) = 2, V is extremal when

$$0 \leq 1 + \frac{c}{2} - 6h < 6. \tag{2.5}$$

The quantity $\ell := 1 + \frac{c}{2} - 6h$ is always a non-negative integer, and has been used frequently in the study of VOAs (e.g. [MMS88, MNS18, GHM16] among many others). The purpose of this article is to provide a list of all possible characters of extremal VOAs with rank(Rep(V)) = 2.

Given a rank two modular tensor category C and a central charge c in the appropriate class mod 8 (as in Table 2.1), there is a unique rational number $h_{\mathrm{ext}}$ in the appropriate class mod 1 satisfying (2.5). When C is fixed we will write $h_{\mathrm{ext}}(c)$ to emphasize the dependence on c. The pair (C, c) of a modular tensor category and an appropriate choice of c is called an admissible genus [Höh03]. For every admissible genus (C, c) described by Table 2.1 there is a representation $\rho_c : PSL(2, \mathbb{Z}) \to U(2, \mathbb{C})$ whose S matrix is given by the entry of the table, and whose T matrix is obtained by rescaling the categorical T matrix by $e^{-2\pi i c/24}$. These representations are simply a choice of normalization of the categorical S and T matrices, and their existence does not depend in any way on vertex operator algebras. However, they are defined in such a way that if there is a strongly rational VOA V with central charge c and Rep(V) = C, then $\rho_V = \rho_c$.
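Since the window in (2.5) has length exactly 1 in h, the value $h_{\mathrm{ext}}(c)$ can be computed mechanically. The sketch below does this for the Semion genera (h ≡ 1/4 mod 1) and reproduces values quoted later in the paper ($h_{\mathrm{ext}}(1) = 1/4$, $h_{\mathrm{ext}}(-23) = -7/4$, and h = 9/4 with $\ell = 4$ for the c = 33 candidate); it relies on the form of (2.5) as reconstructed above, and the function names are ours.

```python
# Compute h_ext(c): the unique h in a fixed class mod 1 satisfying
# 0 <= 1 + c/2 - 6h < 6, i.e. (c - 10)/12 < h <= (c + 2)/12.
from fractions import Fraction
import math

def h_ext(c, h_mod_1):
    """Unique h in the class h_mod_1 + Z with (c - 10)/12 < h <= (c + 2)/12."""
    c, h0 = Fraction(c), Fraction(h_mod_1)
    upper = (c + 2) / 12          # h <= (c + 2)/12 is equivalent to ell >= 0
    k = math.floor(upper - h0)    # largest integer shift staying below upper
    return h0 + k

def ell(c, h):
    return 1 + Fraction(c) / 2 - 6 * Fraction(h)

for c in (1, -23, 33):
    h = h_ext(c, Fraction(1, 4))  # Semion genera have h = 1/4 mod 1
    print(f"c = {c:3d}: h_ext = {h}, ell = {ell(c, h)}")
```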
3. Characters of VOAs with two simple modules

3.1. Characters and vector-valued modular forms. The work of Bantay and Gannon [BG07, Gan14] on vector-valued modular forms provides powerful tools for studying the characters of vertex operator algebras. We briefly recall the points which will be most important for us, and refer the reader to these references, especially [BG07, §2], for more detail. Let $\rho : PSL(2, \mathbb{Z}) \to GL(d, \mathbb{C})$ be an irreducible representation of the modular group, and assume that $\rho\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$ is diagonal with finite order. Let $X : \mathbb{H} \to \mathbb{C}^d$ be a holomorphic function on the upper half-plane which satisfies

$$X(\gamma \cdot \tau) = \rho(\gamma)\, X(\tau) \tag{3.1}$$

for all $\gamma \in PSL(2, \mathbb{Z})$ and $\tau \in \mathbb{H}$. Choose a diagonal matrix $\Lambda$ such that $\rho\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right) = e^{2\pi i \Lambda}$, called an exponent matrix. For any choice of exponent matrix, we may Fourier expand

$$X(\tau) = q^{\Lambda} \sum_{n} X[n]\, q^{n}, \tag{3.2}$$

and we write $\mathcal{M}(\rho)$ for the space of such X with $X[n] = 0$ for n sufficiently negative (observe that this does not depend on the choice of $\Lambda$). Given a choice of exponent $\Lambda$, we define the principal part map by $\mathcal{P}_{\Lambda}(X) = \sum_{n \leq 0} X[n]\, q^{n}$. For $\xi \in \{1, \ldots, d\}$ let $e_{\xi} \in \mathbb{C}^d$ be the corresponding standard basis vector. Given a choice of bijective exponent matrix, let $X^{(\xi)} \in \mathcal{M}(\rho)$ be the function with $\mathcal{P}_{\Lambda} X^{(\xi)} = q^{-1} e_{\xi}$. In this case, $X^{(1)}, \ldots, X^{(d)}$ form a basis for $\mathcal{M}(\rho)$ as a free $\mathbb{C}[J]$-module, where $J = q^{-1} + 196884q + \cdots$ is the J-invariant. The fundamental matrix $\Xi$ is given by the matrix with columns $X^{(1)}, \ldots, X^{(d)}$:

$$\Xi = \left(X^{(1)}\ \cdots\ X^{(d)}\right).$$

The characteristic matrix $\chi$ is given by the constant terms of $\Xi$ taken in the q-expansion (shifted by $\Lambda$ as in (3.2)). That is,

$$q^{-\Lambda}\,\Xi(\tau) = q^{-1}\,\mathbf{1} + \chi + O(q).$$

Now fix as in Section 2 a modular tensor category C of rank two, and a choice of real number c in the appropriate class mod 8. From this data we specified a representation $\rho_c$ of $PSL(2, \mathbb{Z})$ with the property that if there exists a VOA V with central charge c and Rep(V) = C, then its character vector $(\operatorname{ch}_j)$ satisfies $(\operatorname{ch}_j) \in \mathcal{M}(\rho_c)$. The key observation of [TW17] is that

$$\Lambda(c) = \begin{pmatrix} 1 - \frac{c}{24} & 0 \\ 0 & h_{\mathrm{ext}} - \frac{c}{24} \end{pmatrix}$$

is a bijective exponent for $\rho_c$, where $h_{\mathrm{ext}}$ is the real number lying in the appropriate class mod 1 which satisfies (2.5). Thus by the definition of the fundamental matrix we have:

Theorem 3.1 ([TW17, Thm. 3.1]). Let C be a modular tensor category of rank two, and let c be a real number in the appropriate class mod 8. If V is an extremal VOA with central charge c and Rep(V) = C, then its character appears as the first column of the fundamental matrix corresponding to the bijective exponent $\Lambda(c)$.

Let $\chi(c) = (\chi(c)_{ij})_{i,j=0}^{1}$ be the characteristic matrix taken with respect to $\Lambda(c)$. Thus if V is an extremal VOA with central charge c (and rank(Rep(V)) = 2) we have $\chi(c)_{00} = \dim V(1)$. We will determine the possible values of c for which there exists an extremal VOA by showing that for |c| sufficiently large, one of $\chi(c)_{00}$ or $\chi(c)_{10}$ is not a non-negative integer.

3.2. General recurrence. The key idea [Gra18] is to derive a recurrence relating the pair $(\chi(c + 24), h_{\mathrm{ext}}(c + 24))$ to $(\chi(c), h_{\mathrm{ext}}(c))$, and then study the long-term behavior of this recurrence. In fact, to handle the case c → +∞, one may derive a simple recurrence involving only the diagonal entries of χ [Gra18, Lem. 6.4]. To handle the case c → −∞ we will use all of the entries of χ, and the relation will be slightly more complicated as a result. Let $M^{-}_{2 \times 2}$ be the set of 2 × 2 complex matrices whose bottom-left entry is non-zero, and let $M^{+}_{2 \times 2}$ be the set of matrices whose top-right entry is non-zero. Define functions $f_{\pm}$, given by explicit rational expressions, on pairs consisting of such a matrix and a real number. By direct computation, one may check that these functions are invertible and $f_{\pm}^{-1} = f_{\mp}$. We will show that $f_{\pm}$ take characteristic matrices to characteristic matrices, but first we must check:

Lemma 3.2. Let χ be the characteristic matrix corresponding to a 2 × 2 bijective exponent Λ. Then the off-diagonal entries of χ are non-zero.

Lemma 3.3. Let (C, c) be an admissible genus. Then $f_{\pm}[\chi(c), h_{\mathrm{ext}}(c)] = [\chi(c \pm 24), h_{\mathrm{ext}}(c \pm 24)]$.

3.3. Recurrence for large positive c. We will show that for n sufficiently large, $\chi(c + 24n)_{00} < 0$, and moreover we will obtain an effective bound on such an n. We will do this by iterating $f_{+}$, although in fact a simpler function will suffice.

Lemma 3.4. Let $g$ denote this simpler function, acting on triples (a, d, h), and let $g^n$ denote its n-fold iterate. Then $g^n$ admits a closed-form expression in n.

Proof. This follows by a straightforward induction.

Lemma 3.5. There is an explicitly computable N, depending on $\chi(c)_{00}$, $\chi(c)_{11}$, and $h_{\mathrm{ext}}(c) > 0$, such that $\chi(c + 24n)_{00} < 0$ for all n > N.

Proof. Set $a = \chi(c)_{00}$, $d = \chi(c)_{11}$, and $h = h_{\mathrm{ext}}(c)$.
By Lemma 3.3 and Lemma 3.4, we obtain an explicit formula for χ(c+24n)_{00} in terms of a, d, h, and n. Since we assume h > 0, when n ≥ 0 we have h + 2n − 1 > 0. Thus χ(c+24n)_{00} < 0 if and only if the inequality (3.17) holds, whose right-hand side is a quadratic polynomial in n which is concave down. Thus (3.17) holds when n exceeds the largest real root of that quadratic (and it holds trivially if the quadratic has no real roots). The conclusion of the lemma now follows immediately from the quadratic formula.

The purpose of Lemma 3.5 is to reduce the question of classifying extremal VOAs to a finite one. We apply it 24 times to obtain the following.

Theorem 3.6. For every rank two modular tensor category C, there is an explicitly computable c_max such that there are no extremal VOAs in the genus (C, c) when c > c_max. [Table: the values of c_max for the eight categories, numbered as in Table 2.1.]

Proof. Let us first take C to be the Semion MTC. In this case, c ≡ 1 mod 8. We consider first the case c ≡ 1 mod 24. For c = 1, we can compute the characteristic matrix

χ(1) = [ 3 26752 ; 2 −247 ],

for example using the method of [TW17] (based on [BG07]). We can compute h_ext(1) = 1/4 from the definition of h_ext and the fact that h ≡ 1/4 mod 1. Applying Lemma 3.5 with this data, we see that χ(1+24n)_{00} < 0 when n > 0.298. . .. Thus with n_max = 0, we have χ(1+24n)_{00} < 0 when n > n_max. By Theorem 3.1, there are no extremal VOAs in the genera (C, 1+24n) when n > n_max. We can repeat the above exercise for the values c = 9 and c = 17, and three times again for each row of Table 2.1. The resulting characteristic matrices, h_ext, and n_max are given in Table A.2.

3.4. Recurrence for very negative c. We will show that for n sufficiently large, we have |χ(c−24n)_{10}| < 1. Since χ(c−24n)_{10} ≠ 0 by Lemma 3.2, this will guarantee that χ(c−24n)_{10} is not an integer. As with the case of very positive c, we will avoid finding an explicit expression for f_±^n[χ(c), h_ext(c)]. Instead, we extract the following pieces of the data which will be easier to work with. Let α(c) = χ(c)_{00} − χ(c)_{11} and β(c) = χ(c)_{10} χ(c)_{01}. The utility of studying β(c) is that bounds on β(c) translate into bounds on χ(c)_{10} (Lemma 3.7). To see how β(c) depends on c, we introduce a function k, given by an explicit rational expression and chosen so that:

Lemma 3.8. Let (C, c) be an admissible genus from Table 2.1. Then k[α(c), β(c), h_ext(c)] = [α(c−24), β(c−24), h_ext(c−24)].

Proof. This follows by direct algebraic manipulation applied to Lemma 3.3 and the formula for f_−.

It is now an algebra exercise to determine the long-term behavior of α(c−24n) and β(c−24n).

Lemma 3.9. The n-fold iterate of k is given by an explicit closed-form expression; we write α_n and β_n for its components.

Proof. The formula may be verified by a straightforward induction using the definition of k.

We carefully examine the expression obtained in Lemma 3.9 to obtain a criterion which guarantees |β(c−24n)| > 1.

Lemma 3.10. Let (C, c) be an admissible genus with h = h_ext(c) < 0, α = α(c), and β = β(c) > 1, and suppose that the inequality (3.19) holds. Then |β(c−24n)| > 1 whenever n exceeds an explicitly computable bound.

Proof. Through straightforward manipulation of the formula for β_n, it suffices to show that r(n) > 0, where r(n) = r_1(n) + r_2(n) and

r_1(n) = 2985968 n⁴ + 5971936 (1−h) n³ + · · ·.

Since h < 0 and β > 1, every term of r_1(n) is positive. Thus to show r(n) > 0 it suffices to find a term of r_1(n) which controls r_2(n). To show 2985968 n⁴ + r_2(n) > 0, it suffices to bound the negative terms of r_2(n); this will follow from the simple estimate Lemma 3.11 below with A = 2985968, B = (α − 120(1−h))², and C = 1−h, provided that A > 2B(1 + C²). This is an immediate consequence of our assumption (3.19).
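The elementary estimate just invoked is easy to sanity-check numerically. The following sketch is an illustration only: the hypothesis A > 2B(1 + C²) is our reading of the assumption of Lemma 3.11 below, and the script simply samples random values satisfying it and confirms the inequality used in the proof.

import random

# Check: if A > 2*B*(1 + C**2) and n >= 1, then A*n**4 > 2*B*(n**2 + C**2),
# equivalently (A*n**2 - 2*B)*n**2 > 2*B*C**2, as in the proof of Lemma 3.11.
random.seed(0)
for _ in range(100_000):
    B = random.uniform(0.01, 100.0)
    C = random.uniform(0.01, 10.0)
    A = 2 * B * (1 + C**2) + random.uniform(0.01, 100.0)  # force the hypothesis
    n = random.uniform(1.0, 50.0)
    assert A * n**4 > 2 * B * (n**2 + C**2)
print("estimate verified on 100000 random samples")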
We used the following simple observation in the proof of Lemma 3.10.

Lemma 3.11. Let A, B, C, and n be positive real numbers with n ≥ 1. If A > 2B(1 + C²), then An⁴ − 2B(n² + C²) > 0.

Proof. It suffices to show An⁴ > 2B(n² + C²), or equivalently (An² − 2B)n² > 2BC². Since n ≥ 1 and An² > 2B, it is enough to show An² − 2B > 2BC², and this follows immediately from our hypothesis.

We now apply Lemma 3.10 in 24 cases to obtain a lower bound on the central charge of extremal VOAs.

Theorem 3.12. For every rank two modular tensor category C, there is an explicitly computable c_min such that there are no extremal VOAs in the genus (C, c) when c < c_min. The values are given in (3.20); the numbering of categories is the same as in Table 2.1. [Table (3.20): the values of c_min for the eight categories.]

Proof. As in the proof of Theorem 3.6, we will work through the necessary computation when C = Semion and obtain a bound which holds for c ≡ 1 mod 24. Since h_ext(1) > 0, we must instead consider c = −23 in order to apply Lemma 3.10. We compute h_ext(−23) = −7/4 using (2.5), and we compute

χ(−23) = [ 713/11 57264144384/11 ; 1/26752 −3397/11 ],

and from there α(−23) = 4110/11 and β(−23) = 23546112/121. Thus by Lemma 3.10 we have |β(−23−24n)| > 1 when n > 0.13. . .. Taking n_max = 0, we have |β(−23−24n)| > 1 when n > n_max. As |χ(−23−24 n_max)_{10}| < 1, we conclude that |χ(−23−24n)_{10}| < 1 for all n > n_max, and thus by Theorem 3.1 there cannot be an extremal VOA in the genus (C, c) when c < −23 and c ≡ 1 mod 24. We repeat this argument for the other two equivalence classes of c mod 24, and the value c_min from (3.20) is the minimum of the allowed values. We apply the above procedure to each of the 8 modular categories appearing in Table 2.1. The data from each of the cases is given in Table A.3.

3.5. Main result. Combining Theorem 3.6 and Theorem 3.12, we obtain for every rank two modular tensor category C a pair of numbers c_min and c_max such that if V is an extremal VOA in the genus (C, c), then c_min ≤ c ≤ c_max. We can now compute the characteristic matrix of every remaining pair (C, c) (e.g. by Lemma 3.3), and throw away any for which the first column does not consist of positive integers.

Theorem 3.13. Let V be a strongly rational extremal VOA with two simple modules. Then it lies in one of the following genera, with character vector given by the first column of the corresponding fundamental matrix. [Table: the admissible genera of extremal rank-two VOAs.]

3.6. The next monster? There is exactly one set of potential characters for an extremal VOA with two simple modules which has not yet been realized, which corresponds to the (Semion, c = 33) row in Theorem 3.13:

q^{−33/24} ( 1 + 3q + 86004q² + · · · , q^{9/4}(565760 + 192053760q + · · ·) ).

We would have V = W ⊗ W^c ⊕ N ⊗ M₂, where N is the non-trivial simple W-module. We now describe a strategy for constructing W^c which was described to us by Ching-Hung Lam and Hiroshi Yamauchi. In order to find the c = 32 VOA W^c, it may be easier to look for its holomorphic extension Ṽ = W^c ⊕ M₃. This VOA would have character vector

ch_Ṽ = q^{−32/24} ( 1 + 0q + 139504q² + 69332992q³ + · · · ).

This is akin to Evans and Gannon's suggestion that a (not yet realized) Haagerup VOA might find a natural home inside the Moonshine VOA with c = 24 [EG11, §5.2.2]. Lam and Yamauchi propose to construct such a Ṽ by first finding another c = 32 holomorphic VOA with no weight-one states, along with an involution θ in its automorphism group such that the unique θ-twisted module has lowest conformal dimension 7/4; the associated simple current extension of the θ-invariant vectors is then Ṽ. It appears to be possible that Ṽ can be constructed by a twisted orbifold of the rank 32 Barnes-Wall lattice, taking advantage of the theory of framed VOAs [LY08], but that is unclear at this point. One difficulty in this approach is the fact that dim Ṽ(1) = 0.
The most powerful tools in the theory of existence and uniqueness of VOAs rely on a large weight-one space, so that the VOA can be controlled by its affine ('classical') part. This is exemplified by the fact that the only remaining case in the classification of c = 24 holomorphic VOAs is to establish that there is a unique realization of the Monster VOA's character. Thus we find ourselves in a position analogous to the early days of Monstrous Moonshine, with the benefit of modern context. We have the character vector of a proposed holomorphic VOA Ṽ with dim Ṽ(1) = 0, now with c = 32. We also have the suggestion that it can be built (indirectly) out of an interesting unimodular lattice. It is an important challenge to develop the theory of holomorphic VOAs with no weight-one states, and we believe that this example provides a small, but interesting, candidate to use as a test case.
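Finally, the small computations quoted in the proofs of Theorems 3.6 and 3.12 can be checked with exact rational arithmetic. The sketch below is an illustration only: it assumes our reconstruction of (2.5) as the window 0 ≤ 1 + c/2 − 6h < 6 and the entries of χ(−23) as displayed above, and under those assumptions recovers h_ext(1), h_ext(−23), h_ext(33), α(−23) and β(−23).

from fractions import Fraction as F

def h_ext(c, h_class):
    """Unique h with h = h_class (mod 1) and 0 <= 1 + c/2 - 6h < 6.

    The window is our reading of condition (2.5); see the text.
    """
    c, h = F(c), F(h_class)
    while 1 + c / 2 - 6 * h >= 6:
        h += 1
    while 1 + c / 2 - 6 * h < 0:
        h -= 1
    return h

# Semion class: h = 1/4 (mod 1).
assert h_ext(1, F(1, 4)) == F(1, 4)      # used in the proof of Theorem 3.6
assert h_ext(-23, F(1, 4)) == F(-7, 4)   # used in the proof of Theorem 3.12
assert h_ext(33, F(1, 4)) == F(9, 4)     # the (Semion, c = 33) genus of 3.6

# Characteristic matrix chi(-23) from the proof of Theorem 3.12.
chi = [[F(713, 11), F(57264144384, 11)],
       [F(1, 26752), F(-3397, 11)]]
alpha = chi[0][0] - chi[1][1]            # alpha(c) = chi_00 - chi_11
beta = chi[1][0] * chi[0][1]             # beta(c)  = chi_10 * chi_01
assert alpha == F(4110, 11)
assert beta == F(23546112, 121)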
Cutaneous leishmaniasis mimicking sarcoidosis in a Libyan patient: A case report

Leishmaniasis is a vector-borne disease caused by parasitic protozoans belonging to the genus Leishmania and transmitted by infected phlebotomine sand flies. Sarcoidosis is a multisystemic disorder of unknown cause characterized by the formation of immune granulomas in affected organs. The clinical symptoms, severity and evolution of sarcoidosis are highly heterogeneous, and the disease can be confused with other conditions that have similar clinical and pathologic presentations. In this report we present the case of a 77-year-old diabetic Libyan man with chronic erythematous indurated plaques and nodules on the face. The patient had been treated by multiple physicians with topical and systemic corticosteroids for 25 years without improvement. Microscopic examination of Giemsa-stained smears from all lesions showed numerous Leishmania amastigotes inside and outside monocytes. Leishmania tropica was identified as the causative species. The patient was treated with a combination of oral rifampicin (600 mg/day) and isoniazid (300 mg/day) and followed up for 9 months until skin-slit smears and PCR turned negative. In conclusion, CL can be clinically misdiagnosed as any granulomatous skin lesion compatible with sarcoidal-type granuloma. Molecular diagnosis of CL by Leishmania-specific PCR approaches should be performed routinely in any granulomatous skin lesion.

Introduction

Leishmaniasis is a vector-borne disease caused by parasitic protozoans belonging to the genus Leishmania and transmitted by infected phlebotomine sand flies. Its clinical manifestations are polymorphic and range from cutaneous (CL) and muco-cutaneous leishmaniasis, which are characterized by localized lesions of the skin and mucous membranes, to visceral leishmaniasis, the most severe form, which is frequently fatal in many developing countries if left untreated. 1 These manifestations (dermotropic and viscerotropic) depend on the causative Leishmania species [2][3][4] and the immunological response of the infected host. 5 CL may present with unusual clinical and histopathological forms and mimic other conditions, posing a diagnostic challenge that may lead to misdiagnosis.

Sarcoidosis is a multi-systemic disorder of unknown cause characterized by the formation of immune granulomas in affected organs. The most commonly affected organs are the lungs and lymph nodes; however, the disease can involve every organ of the body, with the skin and eyes being the most common extrapulmonary sites to develop serious clinical manifestations. [6][7][8] Estimated mortality attributable to sarcoidosis is between 0.5% and 5%, with most deaths resulting from pulmonary, cardiac or central nervous system involvement. 9,10 The clinical symptoms, severity and evolution of sarcoidosis are highly heterogeneous, and the disease can resemble other conditions with similar clinical and pathologic presentations. The majority of cases are self-limiting and may resolve spontaneously within 1-3 years; however, some patients experience a serious and prolonged course. 9 Here, we report for the first time in Libya a patient with CL mimicking sarcoidosis.

Case report

We present the case of a 77-year-old diabetic Libyan man with chronic erythematous indurated plaques and nodules on the face and ears of more than 25 years' duration.
On the basis of his clinical and histological characteristics, the patient had been misdiagnosed with sarcoidosis and was treated by multiple physicians with topical and systemic corticosteroids for 25 years without improvement. The patient was admitted to our department in March 2015. On examination there were several small, painless papular lesions with induration. The lesions showed central ulceration with erythematous borders over the forehead and cheek (Figure 1A). Skin smears and biopsies were taken under local anesthesia and sent for laboratory analysis. Microscopic examination of Giemsa-stained smears from all lesions showed numerous Leishmania amastigotes inside and outside monocytes (Figure 2). Leishmania tropica was identified as the causative species by PCR amplification of the internal transcribed spacer 1 (ITS1) followed by restriction fragment length polymorphism (RFLP) analysis with the HaeIII restriction enzyme. The patient was treated with a combination of oral rifampicin (600 mg/day) and isoniazid (300 mg/day) and followed up for 9 months until skin-slit smears and PCR turned negative (Figure 1B). The cutaneous lesions of the face disappeared and there has been no evidence of recurrence after three years (Figure 1C).

Discussion

Sarcoidal granulomas may appear with heterogeneous presentations that mimic several infectious diseases such as tuberculoid leprosy, lupus vulgaris, lupoid rosacea, granuloma annulare and cutaneous leishmaniasis. [10][11][12] This makes the differential diagnosis between these infectious diseases and sarcoidosis challenging, and sarcoidosis should always be remembered as a diagnosis of exclusion. The emergence and development of modern diagnostic techniques has led to better discrimination of such cases, and patients previously diagnosed with sarcoidosis have tested positive for different coinfections. [13][14][15][16] However, few cases of CL associated with sarcoidosis have been described previously. [17][18][19] Leishmania parasites have been isolated from a significant number of patients with single and/or multiple nodular skin lesions. Chronic forms of nodular skin lesions represent a diagnostic dilemma in clinical settings and from a histological perspective. 17,20
In unusual CL forms and coinfections, Leishmania amastigotes are either difficult to trace in these chronic skin lesions or altogether absent. In clinical practice, such forms may be misinterpreted as sarcoidosis, particularly when microscopic screening for Leishmania amastigotes in skin-slit smears from these lesions is reported negative. Misdiagnosis of CL as one of its mimics, such as sarcoidosis, leads to treatment strategies that are in complete contradiction to CL treatment. Our patient received long-term corticosteroid therapy, which significantly modulated the immune response and increased his susceptibility to parasitic infections such as CL. 21 Administration of immunosuppressive therapies such as corticosteroids has been described as a risk factor associated with CL reactivation. 21 Previous experimental studies have shown that CL is usually controlled predominantly by T-cell-mediated immunity with a preferential Th1 pattern. 22 Hence, the occurrence of CL in such immunocompromised patients may be due to an inadequate cellular immune response. Moreover, the causative agent of CL in our patient belonged to the species L. tropica. To the best of our knowledge, the phenomenon of CL mimicking sarcoidosis due to L. tropica has not been reported before in Libya. The advancement of molecular diagnostic techniques for CL in Libya has enabled accurate and rapid detection of Leishmania amastigotes where microscopic and histological diagnosis failed to identify them. Moreover, it has enabled precise identification of the causative species, better differential diagnosis from comorbidities, and consequently better treatment and recovery of patients. [23][24][25] The sensitivity of molecular diagnostic methods for CL, including kDNA and ITS1 PCR, is higher than that of other diagnostic approaches, and this difference in sensitivity becomes more significant in patients with chronic forms of nodular skin lesions. 17,22

In conclusion, CL can be clinically misdiagnosed as any granulomatous skin lesion compatible with sarcoidal-type granuloma. From a histopathological point of view, CL may be misinterpreted as sarcoidosis, especially when Leishmania amastigotes are not seen microscopically. Considering their epidemiological features, endemicity, and geographical distribution, molecular diagnosis of CL by Leishmania-specific PCR approaches should be performed routinely in any granulomatous skin lesion.
"They Were Saying That I Was a Typical Chinese Mum": Chinese Parents' Experiences of Parent-Teacher Partnerships for Their Autistic Children

Effective parent-teacher partnerships improve outcomes for autistic students. Yet, we know little about what effective partnerships look like for parents of autistic children from different backgrounds. We conducted interviews with 17 Chinese parents of autistic children attending Australian kindergartens/schools to understand their experiences. Parents appreciated the acceptance, opportunities and supports they received in Australia. They had high expectations of their children; expectations not often shared by educators. Parents were respectful of teachers' expertise and polite and undemanding in interactions. Nevertheless, parents were frustrated by inconsistent teaching quality and inadequate communication. Navigating systems was also challenging and parents faced discrimination from teachers and their community. Recommendations include fostering open home-school communication, proactively seeking parents' expertise about children and explicitly scaffolding parents' self-advocacy.

Effective partnerships between teachers and caregivers, namely those with open communication, trust, advocacy and respect (Turnbull et al., 2015), can substantially improve children and young people's success in and out of school (Sheridan et al., 2012). As autistic students may benefit from consistent approaches across home and educational settings (Azad & Mandell, 2016; Simonoff et al., 2012), such partnerships may be especially important for them (Lilley, 2019). Unfortunately, parents of autistic children often report substantial challenges with their children's education, highlighting a lack of access to autism-specific knowledge, expertise and support for their children, poor communication channels, adversarial relationships with teachers and ineffective collaboration with teachers and education settings (Lilley, 2019; McNerney et al., 2015). Furthermore, despite feeling they know their children best, parents often describe feeling not listened to and excluded from classrooms, resulting in them feeling isolated and unsupported (Lilley, 2014; Makin et al., 2017). It is likely that additional challenges exist which further impact family engagement in parent-teacher partnerships for Culturally and Linguistically Diverse (CALD) parents of autistic children, but we have limited data, especially in an Australian context.

Australian Demographics

Australia is a broadly Westernised but highly multicultural country, with 30.0% of the population (7.6 million residents) born overseas (Australian Bureau of Statistics, 2021). Behind the United Kingdom and India, China now provides the third highest number of overseas-born residents to Australia, comprising 2.5% of the 2020 population, with twice the number of Chinese-born people living in Australia in June 2020 (650,640) compared to a decade earlier (Australian Bureau of Statistics, 2021). International migration, when one moves from one country to another (Sinha, 2005), is itself a challenging experience and, for various reasons, migrants who have an autistic child face increased stressors (Kim et al., 2020; Lim et al., 2020).
As Australia is home to a large proportion of migrants from Chinese backgrounds, we need to understand how best to assist these families, especially as they may be uniquely different to other migrant groups. For example, China does not permit dual nationality, so these migrants can be less willing to forgo Chinese citizenship and risk future exclusion from their home country (Stevens, 2018). Indeed, recent data highlight low rates of naturalisation for Chinese migrants living in Australia when compared to other overseas-born groups (Pan, 2020). Consequently, Chinese migrants may be more likely to experience ongoing disruptions to their family and work lives during their extended periods in limbo between countries (Stevens, 2018). Moreover, trying to maintain relevancy and reduce marginalisation in both countries concurrently may also pose a challenge (Gao, 2006). When families have an autistic child, this temporariness may be even more impactful in relation to the accessibility and consistency of funding and/or familial supports. Since education is a universal and extended experience for all parents and children, understanding what might improve parent-teacher partnerships is an important first step in supporting migrant families of autistic children. So, here we focus on experiences of parent-teacher partnerships for migrant families of autistic children from Chinese backgrounds living in Australia.

Culture, Caregiving, Education and Autism

Exploring culture in the context of caregiving, autism and parent-teacher partnerships is also important, since culture influences people's views and experiences in each of these spheres. Migrant Chinese parents have been found to be respectful of teachers and observant of distinct role boundaries (Collignon et al., 2001; Denessen et al., 2007; Lai & Ishyama, 2004). These parents have likewise reported feeling uneasy dealing with teachers (Lai & Ishyama, 2004) and reticent to voice concerns when dissatisfied with services for their children with disabilities (Liu & Fisher, 2017). Additionally, migrant parents are often faced with language barriers, fewer social supports and unfamiliarity with education systems and teaching approaches (Haines et al., 2018; Lai & Ishyama, 2004; Wang & Casillas, 2012).

Cultural views of caregiving and autism similarly add complexity. Whilst parents are highly respectful of teachers' roles, they still feel acutely responsible for ensuring their children's progress, especially academically (Li & Yeung, 2017; Shorey et al., 2020; Wang & Casillas, 2012). Frustration with inconsistencies in teachers' experience and skills and low/inappropriate expectations of autistic students has previously been reported by migrant Chinese parents of children with disabilities (Lai & Ishyama, 2004). Chinese parents also face autism stigma (Kim et al., 2020; Tang & Bie, 2016), with the concept of 'losing face' (i.e., the reduction of interlinked familial and individual dignity and status) playing a prominent role in stigmatisation in native and migrant Chinese populations (Huang & Zhou, 2016; Liao et al., 2019). Since Australian teachers are largely white, female and monolingual (Australian Institute for Teaching and School Leadership, 2020; Evans, 2011), migrant parents also face potential discriminatory treatment from teachers whose views about parent involvement and education are likely to reflect their own Western-centric experiences and training (Bakker et al., 2007).
All of the aforementioned factors are likely to shape what effective parent-teacher partnerships look like for Chinese parents, as well as how these partnerships develop. Despite theoretical knowledge of the difficulties Chinese parents of autistic children can face when interacting with teachers and schools, there is virtually no research examining their experiences in Australia. Recent research has explicitly articulated how positive relationships between teachers and Chinese parents can be a source of parental support as well as a means of ensuring autistic children's success in schools (Zhao & Fu, 2022), so finding out how to foster positive parent-teacher relationships is important for several reasons. In this study we sought to elicit the first-hand accounts of Chinese parents living in Australia as they navigated schooling for their autistic children.

Community Involvement

Participatory or co-produced research supports collaboration across researchers, practitioners and community members (Hickey et al., 2018). This type of research aims to ensure studies are respectful, ethical and responsive to the needs, preferences and principles of the communities at the centre of the research (Collins et al., 2018). This study adopted a participatory approach, which operated at several levels. To begin, autistic scholars and advocates who were themselves also autistic parents of autistic children (GH, MH, WL) worked collaboratively with non-autistic researchers (JS, SR, PD, RL and EP) to secure the research funding for this project and design the initial study. Next, the team, and JS in particular, worked together with ED to assemble a Chinese-specific parent Advisory Group (AG), consisting of five Chinese parents of autistic children (4 mothers, 1 father, all co-authors on the study), researchers and professionals, including a Mandarin- and Cantonese-speaking interpreter. This AG met four times over the duration of the project, overseeing the recruitment of participants, as well as study design, implementation and dissemination. Their involvement safeguarded that the project was sensitive to, and relevant for, the Chinese autism community (i.e., ensuring the suitability of questions asked of parents and the appropriate messaging and dissemination of research findings). Chinese AG members were paid for their time and expertise.

Recruitment and Participants

Participating parents had to be ≥ 18 years old and self-describe as being from a Chinese background. There were no English-speaking requirements. All children of participating parents had received an independent clinical diagnosis of autism and were engaged in education (early education, primary or high school, or home-schooling). Participants were recruited through formal and informal networks (i.e., word of mouth, Chinese community groups, etc.). All recruitment and interview materials (i.e., demographics questions, parent interviews and recruitment flyers) were available in English, Simplified Chinese and Traditional Chinese. Eighteen parents were recruited and interviewed (including one mother-father couple), with one interview omitted from analysis as it was subsequently revealed that the autistic student had recently completed school.
Of the 17 parents, most were female (n = 14; 82.4%) and had post-school qualifications. Almost half (n = 7) of the parents were sole parenting. Most parents were born in China (n = 12; 70.6%), with the remaining born in Hong Kong (n = 4; 23.5%) and Malaysia (n = 1; 5.9%). Together, parents had 19 autistic children (n = 15 males, n = 4 females). At the time of interview, autistic children were on average eight years of age (range 2-17, SD = 3.91). Children largely attended mainstream kindergarten/school settings (n = 14), with two children in special schools and the remaining three children in other/dual settings (i.e., where children split their weeks between different types of educational setting). See Table 1 for parent, child and family characteristics.

Procedure

This study was conducted from January to December 2021, during the second wave of the COVID-19 pandemic in Australia. Once parents had provided informed consent, they each completed a background questionnaire (either online, over the phone or at the beginning of the interview). Each parent then took part in an in-depth Zoom interview, as pandemic-related restrictions at the time precluded the possibility of face-to-face interviews. Parents were asked about their experience of their child's kindergartens/schools, interactions/involvement with teachers and ideal parent-teacher partnerships. Parents were also asked how they felt the Chinese community understood autism (see Supplementary Table 1 for the full interview schedule). Parents were interviewed in their preferred language (Mandarin, n = 7; Cantonese, n = 6; English, n = 4) by someone from their cultural background [PL]. A separate interpreter was hired to translate Mandarin/Cantonese interviews. Interview recordings were transcribed verbatim using a transcription service. Parents were reimbursed for participating.

Data Analysis

We followed Braun and Clarke's (2006) method for reflexive thematic analysis within an essentialist framework, in which our goal was to report the meanings and experienced reality of the participants. Once all interviews had been transcribed, one senior researcher [JS] immersed themselves in the data, taking notes on striking and recurring observations and applying codes to each transcript (managed in NVivo, version 12). To begin, JS developed and applied codes in discussion with EP. Next, JS generated a draft thematic map showing potential themes and subthemes, and this map, along with all relevant quotes, was revised during multiple discussions with EP. Finally, the revised thematic map was reviewed by the broader team [SR, PD, GH, MH, WL, RL and NS] as well as members of the Chinese AG [ED, LC, ED, PL, EM, RW, JY and CY] prior to being finalised. Analysis was therefore iterative and reflexive in nature (Braun & Clarke, 2006, 2019).
Results

Theme 1: "Children are Your Heart"

Parents were so grateful that Australian teachers were "very patient with the children and respect the children" [59] and that they were "very kind and helpful" [46]. Overall, parents just wanted "the best" for their children [42]. As one parent put it: "Every family has only one or two children and those children are your heart" [39]. As part of this child-centric view, parents felt autistic children's progress was the "responsibility of the parents" [10] (subtheme 1.1). So, if children were not progressing, then it was up to parents "to actually pay more attention to their child", "to observe the child and find the problem" [64]. To support their children effectively, parents felt that they needed to take "initiative to do things… to rely on [themselves]… and do self-learning" [70]. This sense of parental accountability was reportedly common in China: "Chinese teachers… they won't attribute anything to the children themselves. They would think that it's the parents' responsibility" [73]. Parents' devotion to their children and their child's learning meant that they "spent a lot of time" [67] "just focused on [their] child" [59]. Although parents were "happy to invest a lot of time" [64] in their children, it required parents to "work very hard" [48]. As one parent emphasised: "I wouldn't discount as parents how hard we have tried" [39].

Parents went on to explain that "Chinese parents, they still want their children to do really well in academics" [39] (subtheme 1.2). High academic expectations were explicitly linked by our respondents to Chinese cultural values. Parents often mentioned how competitive schooling was in their home countries: "In China, the competition is very fierce" [27]; "In Hong Kong, there is a lot of competition" [46]. Parents therefore felt it was important to support their children to achieve their potential: "As parents, we know his abilities, so we try to push hard on every single task" [39]. Even for parents who were not as focused on academic achievement per se, accomplishment was still prized: "I am not a high-pressure parent, but we still have expectations" [23]. They reported how school learning was regularly supplemented through learning at home: "Asians go to tuition, coaching" [34]. One parent explained that he "sent [his son] to coaching" because Australia was "academically, too tolerant, too relaxed" [48]. Parents often helped their children "with study or academic performance" [46] at home.

Consonant with high expectations for their children, "Asian parents expect the teacher to do a lot for their kid" [10] (subtheme 1.3). Parents wanted their children to have a "high standard education", to attend "a very high-quality school" [39]. If parents felt their child's educational setting was not suitable, they would readily move children: "We changed kindergartens five times; he went to different kindergartens because we were not satisfied" [65]. Despite expecting much of Australian teachers, parents expressed disappointment in their work ethic when compared to teachers in China/Hong Kong. One parent explained: "In Australia, the working culture is you just finish your job, you go home when your shift ends. But in China, you stay in your office until you finish" [64]. Another parent also felt that Australian teachers think: "'Oh, time to go' but in Hong Kong they will try to stay back to finish" [10]. Parents felt that teachers could and should work harder for their children: "I can't say they're irresponsible, but I feel like they can do more" [48]. One parent reflected: "Maybe if I were in China, I would have expected more" [67]. This sentiment was not just about putting in the hours but extended to how much they felt teachers cared about their children: "They might regard teaching just as a job… they normally just follow the standard procedures and don't spend more energy or passion in finding out special needs from the kids with autism" [73].

Even though parents were the experts on their children, they maintained distinct boundaries between home and school: "I just leave this to the school, because I am not a professional" [46] (subtheme 1.4). This delineation was reportedly common in China: "You don't need to take the initiative; the teacher would take the initiative and talk to you" [59]. Parents firmly believed teachers were the "professionals" [39] with "particular qualifications to be able to work in the sector" [20], and thus expertise and experience that was distinct to their own. As one parent remarked: "Parents are just ordinary people, so they rely on the school to teach the parents how they educate their kids" [48]. Accordingly, they felt that they had no "authority on what to teach at school" [73] as schools "follow guidelines" [42] and "have the school curriculum" [64]. This was despite parents being quietly aware that they had expertise and experience that might be beneficial to share with teachers: "I knew very clearly what my child could do. If you ask me for experiences, I mean, I can give you a lot of examples" [64].

Parents were also profoundly "respectful" [39] of parent and teacher roles. They were acutely aware of not wanting to be perceived as "overbearing" [14] in their interactions with teachers. They often reported feeling "too shy to ask" [48] for things, or worried they would "ask for too much" [59], and reported "always apologising" [23] and being careful "not to interrupt" [39], and not "arguing with the teacher" [14]. Nor did they want "to be impolite" [42]. When they were unsure of the "expectations", they worried about "doing the wrong thing" [34]. Parents clearly did not want to impose on teachers' time or be a burden. One parent described how he was expecting an invitation to the school but, when none arrived, he decided not to follow up as he "did not want to push and add to [their] load" [48]. Even when parents did speak out, they made sure they only "push(ed) in a respectful way" [39].

Theme 2: Parents Lack Confidence and Trust in Teachers

Despite their respect for clearly defined roles, parents often reported frustration that getting a good teacher every year "all depends on luck" [42] and effective leadership (subtheme 2.1). The idea of good fortune in relation to good teachers was common: "I was lucky. I got one teacher… she's got 20-years' experience" [10]; "If you have a good teacher then you are lucky, and if you have a bad teacher, you are unlucky" [42]. Parents valued effective leadership but, again, this was perceived to be largely a matter of chance: "I spoke to the headmaster; I spoke to the person who was responsible for allocating teaching aids and I was lucky" [59]. Although variability across teachers' experience and skills is to be expected, parents found it frustrating that schools appeared aware of differing teacher quality: "I talked to the vice-principal, and the principal told me that, don't worry about it because next year he will give me a better teacher" [42]. Parents perceived teacher quality as having a knock-on impact on children's experience and desire to engage in education: "For the mainstream school, I found that she's reluctant. She's finding excuses of not going to this school" [27]. Another parent said: "He loved going to (previous) school but… I feel like he is less keen. If he has a choice, he would choose not to go to school" [48].

Parents were disappointed by what they perceived to be teachers' low academic expectations of their children (subtheme 2.2). This was especially so in the context of disability-specific schools, with one parent stating that her son had not "done much academics in special school because they're only doing the behaviour correction stuff" [39]. Another parent reported being told by the teacher that "you cannot ask autistic children to do much" [59]. One parent simply said: "The teacher gave up on my son" [70]. Parents also spoke of how their children were "bored… not engaged" [39] due to not being challenged academically at school. Parents reflected on whether expectations of their children would be higher if they were still in China: "If [child] were in China, I would have expected him to go to university. Although my child has some problems in reading and comprehension… in China, they could actually give him a lot of pressure and then he could study harder and then eventually he may have the chance of going to university" [59].

Parents also complained that all they "hear from school is the good news" [39] (subtheme 2.3). While they appreciated "positive feedback", they expressed the view that "Australian teachers don't want to talk about the bad things" [42]. They contrasted this with their experience of schooling in China, where teachers would "tell you your kids are doing good in school, but they then would also tell you your kids may not be doing quite well" [27]. Parents even said "it is quite common" [65] for caregivers in China to stay with their children at school all day. Yet, parents felt discouraged from attending Australian schools: "They [teachers] just tell you that the classroom is not for you; it's for the students and the teacher" [70]. One parent recounted being told "okay, go home, mum" [23].

With teachers in Australia perceived to be avoiding telling them "any negative things" [14], parents felt that they were not getting "a true reflection of how [their] kids are doing in school" [27]. Parents also described "inconsistency" across teacher feedback. One parent described how the "generic report is generally very positive" but "individualised feedback" had made it clear that their child "didn't do as well as [they] had thought" [70]. Another reported explicitly requesting that the teacher "give [her] some negative things" about her child [59]. The incomplete picture parents felt that they had of their child's education was compounded by having "zero idea" [34] about what their child learns at school and no "clue how to find out" [70]. This experience was described as distinctly different to Chinese education, which was described as providing many more opportunities to learn first-hand information about their children, either through direct communication between parents and teachers ("we have this WeChat group… so the teachers can post updates any time, and the parents can discuss issues more frequently" [20]) or through "technical things like apps" [64] or "live cameras so that the parents can monitor what's going on in the classroom" [20].

It was not just the perceived absence of interaction between parents and teachers that was challenging; parents also found that the parent-teacher "communication channel was very frustrating" [34] in general. When some parents "asked for [school] contact details" they were told that they "need to speak to the office" [27]. Parents stressed how important their own "privacy" [65] and their "child's privacy" [59] was, so using generic communication channels was problematic for several parents. One parent reflected: "You can't e-mail the teacher, you have to send an e-mail to the reception", so "with the psychologist report and everything, I have to send it to the reception. And it's exposed to everyone. Oh my God, I want to keep that private" [34].

Since parents felt they were neither given open and honest feedback about their children, nor encouraged to attend school in person, one alternative way for them to gather information was to "volunteer in the school" [59], where: "I can observe and find out how she's doing in school" [46]. As one parent described: "I actually took a very active role in taking part in the school's parent helper programme… I have the chance to see how my kid is going with the school's life" [73].

There were several reports of parents experiencing stigma, which shook their confidence in teachers even further (subtheme 2.5). When one parent tried to advocate for her son during a school enrolment interview, she described being racially stereotyped by the school coordinator: "So, at the beginning, when I dealt with him, they were saying that I was a 'typical Chinese mum' … what he said and what he did was kind of making me upset" [42]. Another mother reported being judged about her single-parent status: "They told me that they were happy to apply for some benefits, but then they told me that because I was a single parent, even if you took these benefits, the child would not have a father to enjoy. This made me quite angry… I don't know why they discriminate against us" [67].

Theme 3: "I Just Feel That I Can't Do It" [27]

Advocacy was especially difficult for these parents since they had few supports (subtheme 3.1). Many parents worked "full-time" [10] and "long hours, over ten hours of work each day" [48], often with "both the mum and dad needing to work" [64]. This meant parents frequently struggled to attend schools, especially during working hours. One parent said: "I need to work; I don't have much time" [67]. Another agreed: "I'm working full-time, I can't contribute much" [34]. While some parents were helped by family members living in Australia ("We have work, during weekdays, they [grandparents] will come to help us" [73]), others were not: "Normally if you have a child, you have your old parents, like a grandma or grandpa, who could actually look after your child, or support you" [65]. Many parents (mainly mothers but some fathers) were also solo parenting in Australia, so were especially impacted by a lack of extended family support. The pandemic decreased access to supports even further as people (including family members) could not easily enter Australia: "We came to Australia about two years ago. My husband is still in Hong Kong so it's three of us here" [70].

Parents' lack of familial support was further compounded when other supports within schools were not approved: "The psychologist cannot come to observe in the classroom. That's not allowed" [34]. As with inconsistencies in the quality of teaching staff, whether professionals were allowed into schools varied across schools ("Some schools welcome visitors especially from a speech pathologist… [my] school didn't want this to happen" [70]) and across different contexts within the same school ("For other meetings, I always brought with me the psychologist, and then for this meeting I couldn't bring with me the psychologist" [42]).

Speaking English as a non-native speaker also made advocacy difficult (subtheme 3.2). One parent stated: "When I go to the school and pick up the child, I seldom talk to the teacher, because my English is limited" [65]. Another parent mentioned that when parents "form a group… it's easier to advocate for your child", but forming a parent group was difficult for them because their "[English] ability is limited" [46]. Parents were sometimes offered interpreters: "For the meeting with the teacher, they ask whether you'll need an interpreter or not" [67]. Some found these interpreters valuable ("They arranged an interpreter, and it was quite good" [65]), while others felt meetings were too short ("maybe it is only 15 or ten minutes" [27]) to use interpreters in a meaningful way, especially because having an interpreter "doubles the time" [48].

Navigating school and funding systems was hard for parents (subtheme 3.3), with parents' limited knowledge about systems, as well as limited understanding about autism (both pre- and post-diagnosis), contributing to these difficulties. Limited understanding of the Australian education system, including knowing their own and their child's rights, meant parents could not be proactive: "Sometimes I don't know what to advocate for and we don't know the pathway" [23]. One parent mentioned that she did not know how to "enrol [her child] in a (mainstream government) school", so by the time she did enrol, the school "was full" [10]. Another parent had her child's mainstream government school enrolment rejected, "so he stayed at home for the whole entire month", since she "didn't know [she] had a right to ask for public school" [39].

Regarding autism knowledge, one parent stated that, in Chinese, "autism is a literal meaning of the phrase itself. It means isolated from the external world and not communicating with other people" [73], so that is how community members perceived autism. Parents also described different beliefs that they, their family or their community held about the causes of autism. Some felt that "autism is kind of like a genetic problem, and then if your child has autism, maybe one of your parents has some extent of autism as well" [42], while others felt that it was due to some post-natal injury ("My husband believes that because they used forceps (in labour), it might have caused brain damage to my child, that's why my son has this autism" [70]) or environmental factor(s) ("I think autism is caused by different causes: 60-70% from environmental causes and the rest, 30%, is natural causes" [64]). They also reported an apparently widely-held belief that autism "will get better naturally by itself" [46], that it is something the child will "outgrow" [34] and will "go away when [the] child grows up" [67].

Theme 4: Parents' Overwhelming Sense of Optimism

Despite all these challenges, there was a strong sense of optimism in parents' responses. They believed that Australian schools were not academically challenging enough for their children, but were nevertheless grateful that schools focussed on 'soft skills' (subtheme 4.1): on "kids' personal ability, personal development, and they encourage them to cook, to live independently… for kids with disability, it's very important for them to grasp these practical skills" [23]. Another parent echoed how Australian teaching was well rounded: "They try to develop all abilities for the children… what they teach children, the value, the ethics, is way more than academics can measure, which is better" [39]. Moreover, some parents felt that non-academic skills should be more of a focus as children move through school: "For my child, he needs to develop academically but then he's now a teenager and he has different psychological or mental needs… they need to provide more support in this respect, especially for the autistic children" [59].

Similarly, parents reported being truly grateful for the supports and opportunities afforded by Australia (subtheme 4.2). They were thankful for the financial support provided, which they felt would otherwise not have been available to them: "If I were in China, I would have gone bankrupt because I just couldn't afford to look after a child like this. I'm so lucky that I'm in Australia" [67]. Parents were also sympathetic to the fact that "teaching is very stressful" [23], so were thankful for teachers' compassion: "We want to show our appreciation for how good they have treated my child" [39]. Parents also felt that there were more options for their children in Australia as they moved into adulthood: "In Hong Kong I just cannot see our future, I cannot see my child's future, but here, in Australia, I hope that my child can find a job and I can see her future" [46]. They also spoke about a greater acceptance of difference: "The Australian community is more tolerant and welcoming to diversity" [39].

Although parents often reported feeling responsible for their own integration into Australian society ("I am actually an immigrant to Australia, so my feeling is that I should get assimilated to the Australian culture" [42]), they still felt it was important for schools to acknowledge their culture (subtheme 4.3). Parents appreciated when their culture was recognised in schools. One parent relayed how her school had asked whether it would be helpful if someone at the school could "learn some Chinese and talk to [her son]", which she felt was a "great approach" [39]. Another articulated the benefits of cultural appreciation, observing that when teachers "can understand the Chinese culture, and the child's background, they might be in a better position to provide care or support" [65]. Although not expected, parents still valued "translated e-mails or materials" [67] and "Chinese speaking teachers" [46]. They were keen for more to be done to promote this sense of cultural safety, including celebrating "cultural festivals, like the Chinese New Year or the Monkey [King] Festival… if they could arrange such activities, of course I would be very interested" [65].

Optimism also related to their children's happiness because, above all, "for autistic children, the most important thing is whether they are happy in school" [42] (subtheme 4.4). Another parent said: "I want my children to get a high score, because we are from a Chinese culture. But I don't want them to purely focus on academics" [39]. Parents reported being content "as long as the child's happy in the school, as long as [they] can learn something" [64]. Hence, when able, parents chose schools that they believed best suited their children, regardless of whether they were disability-specific or mainstream settings. One parent reflected: "I can see [her daughter] is happy to go there because in the special school, they focus on developing their skills and then they provide a lot of programmes catered to their needs. They don't need to actually focus on their study or academic performance" [27].

Stigmatisation of an autism diagnosis also meant parents "keep it a secret" [70] (subtheme 4.5). Parents reported wanting to "save face" [46] and were "afraid to see [their] friends (in China) because [they] don't want to get embarrassed" [59]. This shame was sometimes reinforced by the feeling of being ostracised by others in their community because of their child's diagnosis: "I can feel parents, once they know my son's condition, they try to stay away from us, which is very heart-breaking" [39]. In response to being asked what specific factors might influence a Chinese parent not seeking or accepting a diagnosis, one parent simply answered "pride" [34].

Discussion

This research provides first-hand accounts of experiences of education for Chinese parents of autistic children educated in Australia. Our parents were devoted to their children and felt responsible for their progress and happiness. They had high expectations of their children, especially academically, but they felt these expectations were not often shared by educators. Parents were profoundly respectful of parent/teacher roles and described themselves as polite and undemanding in interactions. Parents were frustrated by inconsistent teaching quality and inadequate communication from schools. They also often faced stigma and discrimination from both teachers and the Chinese community, and had few resources to rely upon. Nonetheless, parents were extremely grateful for the supports and opportunities afforded by Australia and valued that their children received a holistic education. And, whilst they did not expect schools to provide culturally-specific resources, they spoke about the benefits of acknowledging their culture in schools.

Conflicted Feelings Towards Education

Parents were ambivalent towards Australian teachers and schools. On the one hand, they were grateful that Australian teachers accepted their children and were caring and respectful towards them. They also valued that schools tried to equip their children with life skills, a sentiment shared by parents of autistic children in the UK (Makin et al., 2017; McNerney et al., 2015). Echoing earlier research with migrant Chinese parents of children both with (Lai & Ishyama, 2004; Liu & Fisher, 2017) and without disabilities (Collignon et al., 2001; Denessen et al., 2007), they were deferential towards teachers, respectful of role boundaries and reluctant to voice concerns too. Jegathessan (2009) similarly found that migrant Asian parents in the United States (US) were grateful for opportunities afforded by their adoptive country and did not want to be perceived as asking for more.

On the other hand, parents were disappointed by teachers. They experienced discrimination from teachers, and their efforts to enrol their autistic children in schools were often met with resistance, aligning with reports from Somali-Canadian parents (Kediye et al., 2009). Parents were also frustrated by inconsistencies in teachers' experience and skills and by their low/inappropriate expectations of autistic students. These issues have previously been voiced by myriad groups of parents of autistic children, including migrant Chinese parents living in Canada (Lai & Ishyama, 2004), British (McNerney et al., 2015) and Australian parents (Hodges et al., 2020; Lilley, 2013, 2014). What has not been so clearly articulated before is how parents felt that their children's desire to engage in education was directly impacted (positively or negatively) by variable teacher quality, in relation to teachers' knowledge, skills and experience of autism and dedication to supporting autistic children.

'Child-centredness' is commonly reported by Asian parents of autistic children (Shorey et al., 2020). We found Chinese parents felt accountable for their children's educational progress and had high academic expectations for their children (Li & Yeung, 2017; Shorey et al., 2020; Wang & Casillas, 2012). Where parents still living in China reported taking on the responsibility of educating their autistic children because of the limited availability of supports (Liu & To, 2021; Zhao & Fu, 2022), it was the perceived low academic expectations within schools that led many parents in our study to provide more home-based educational activities (i.e., tuition, coaching and homework). Perhaps the provision of more educational activities within the home was also driven by parents' unfamiliarity with education systems.

Recent reviews in this field have similarly found difficulty accessing/navigating services to be a key challenge for migrant parents of autistic children (Kim et al., 2020; Lim et al., 2020; Papoudi et al., 2020). Past research with non-migrant parents of autistic children has also reported how these parents struggled to navigate school systems too (Lilley, 2013; McNerney et al., 2015). What was noteworthy in this study was that parents' unfamiliarity with school systems directly impacted children's lawful right to access and participate in education, including the right to attend school full-time and receive reasonable adjustments (see Disability Standards for Education; Australian Government, 2005). A key difference for individual families therefore is the availability (or lack thereof) of different sources of information they can rely on for support when they struggle to navigate their autistic child's education, and the corollaries of that access to, or absence of, knowledge.

Parents' unfamiliarity and limited knowledge were compounded by inadequate, opaque communication from schools and limited opportunities to observe their children first-hand. Inadequate communication from schools is another common frustration shared by many parents of autistic children (Azad et al., 2018; Galpin et al., 2018; Lilley, 2019; Makin et al., 2017). Our Chinese parents both wanted and expected more frequent parent-teacher interactions through which they could gain important, timely information about their children. As with past research with non-CALD parents of autistic children, parents wanted to celebrate their children's successes but also be informed about challenges (Stoner et al., 2005). Yet where non-CALD parents can feel schools focus too much on the negative aspects of their autistic children (Azad et al., 2018; Lilley, 2019), interestingly, Chinese parents felt the opposite to be true here. Effective parent-teacher communication is therefore not one-size-fits-all.

Unfortunately, inadequate communication from schools, coupled with experiences of discrimination and inconsistent teaching quality, eroded parents' trust and confidence in schools. Asian migrant parents of non-autistic children living in New Zealand have similarly indicated a lack of confidence in teachers for various reasons, including mistrust of teachers and communication barriers (Guo, 2005). To safeguard that their children were managing at school, parents gathered information themselves (i.e., through volunteering in the school). This increased watchfulness has similarly been reported by non-CALD parents of autistic children when trust in professionals has been lost (Stoner et al., 2005). Yet our parents had few other resources and supports (i.e., extended family/partners, available time during school hours, etc.), so may be more likely to experience increased psychological impacts over time related to the burden of balancing childcare responsibilities and paid work with little familial/social support (Zhao & Fu, 2022).

How Can We Develop Effective Parent-Teacher Partnerships for Chinese Parents?

As with past models of parent-teacher partnerships for non-autistic children (Keyes, 2000; Turnbull et al., 2015), successful partnerships for Chinese parents included good communication between parties (especially updates about children's progress), professionals having appropriate knowledge and skills, as well as mutual respect and consideration of each other's culture and values. Non-CALD parents of autistic children have previously described valuing proactive consultation about their wants and needs, believing ongoing consultation promoted effective parent-teacher partnerships (Galpin et al., 2018). Since our Chinese parents were deferential and undemanding towards teachers, parents may benefit from teachers themselves pre-emptively seeking parents' views and expertise about their autistic children. Before schools consult with parents, it may be worthwhile considering whether current school communication channels (i.e., generic email addresses) make parents feel comfortable sharing confidential information about their children.

Self-education, especially awareness of rights in relation to school policies and procedures, has been suggested as integral for all parents of autistic children to prepare and encourage them to advocate (Boshoff et al., 2018; Lilley, 2019), and Chinese parents are no exception here. Moreover, explicit teaching of other advocacy strategies, such as leadership and communication (Test et al., 2005), may be especially important for CALD parents, who may not perceive themselves to be as self-efficacious as non-CALD parents (Galpin et al., 2018). Making the 'hidden curriculum' explicit, including the ways in which parents can share their opinions with schools in an appropriate manner, is important for the development of trusting and respectful partnerships. CALD families may also need additional time and other considerations to advocate effectively. Future research should focus on how we can effectively support self-advocacy skills in CALD parents of autistic children.

Interpreters and translated materials should be available to parents. For equity, extended meeting times are recommended when using interpreters, as well as employing interpreters who have autism-specific knowledge (Sakai et al., 2019). Where possible, schools should also use simplified English and avoid acronyms. Finally, Australian teachers have reported low levels of cultural competency training and use of multicultural aides, yet they reported that having access to professional development in this area would be useful (Syeda & Dresens, 2020). Autism-specific training, including teaching about the interplay of autism and culture, is recommended for all teachers to ensure educators have the skills to effectively educate all autistic children, in all settings, from all backgrounds. We must also further explore the best mechanisms for educating teachers about culture and autism to ensure knowledge and skills are translated into classrooms and schools.

Limitations

This study has several limitations. First, our sample included a highly educated group of parents, many of whom spoke English, so the importance of language supports may have been understated. That said, most interviews were conducted in Cantonese/Mandarin, suggesting that community languages were still preferred. Second, most children attended mainstream schooling, so we may similarly have an underrepresentation of views from parents with autistic children in specialist settings. Finally, the concept of 'losing face' (Huang & Zhou, 2016; Liao et al., 2019) may have played a role in parents being reluctant to share personal information, especially related to their challenges, so we may have overlooked some latent themes here. However, because our interviews were all conducted by someone from the parents' own cultural background, who has a strong presence in the Chinese autism community, the impact of this issue on our findings may have been reduced.

Conclusion

These findings are important in showing how Chinese cultural values shape migrant Chinese parents' expectations of education for their autistic children, and their interactions with teachers. We highlight how we might better foster parent-teacher partnerships for migrant Chinese parents and their autistic children. We hope this study contributes to the provision of targeted supports for these parents in order to strengthen partnerships with educators.

Funding

Open Access funding enabled and organized by CAUL and its Member Institutions.

Declarations

Ethical Approval Ethical approval was granted by Macquarie University Human Research Ethics Committee (Ref No: 5,202,196,412,836). Informed consent was gained from all participants.

Table 1: Characteristics of Chinese families involved in the study (n = 17 parents, n = 19 children).
2022-09-24T06:18:27.825Z
2022-09-23T00:00:00.000
{ "year": 2022, "sha1": "95f08c893a0e08e77053ddfa3cebe1181d228ec5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10803-022-05748-z.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b9f914c1b461dfc0025c7951aebe7df19025e216", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
155101318
pes2o/s2orc
v3-fos-license
Role of Mitochondrial DNA Damage in ROS-Mediated Pathogenesis of Age-Related Macular Degeneration (AMD)

Age-related macular degeneration (AMD) is a complex eye disease that affects millions of people worldwide and is the main reason for legal blindness and vision loss in the elderly in developed countries. Although the cause of AMD pathogenesis is not known, oxidative stress-related damage to the retinal pigment epithelium (RPE) is considered an early event in AMD induction. However, the precise cause of such damage and of the induction of oxidative stress, including related oxidative effects occurring in the RPE and the onset and progression of AMD, are not well understood. Many results point to mitochondria as a source of elevated levels of reactive oxygen species (ROS) in AMD. This ROS increase can be associated with aging and effects induced by other AMD risk factors and is correlated with damage to mitochondrial DNA. Therefore, mitochondrial DNA (mtDNA) damage can be an essential element of AMD pathogenesis. This is supported by many studies that show a greater susceptibility of mtDNA than nuclear DNA to DNA-damaging agents in AMD. Therefore, the mitochondrial DNA damage response (mtDDR) is important in AMD prevention and in slowing down its progression, as is ROS-targeting AMD therapy. However, we know far less about mtDNA than its nuclear counterpart. Further research should measure DNA damage in order to compare it in mitochondria and the nucleus, as current methods have serious disadvantages.

Introduction

Reactive oxygen species (ROS), including free radicals, play important roles in cellular signaling, being an important element of organismal homeostasis [1]. On the other hand, ROS are implicated in the pathogenesis of many human diseases, and, in fact, it is not easy to find a disorder without ROS in its pathogenesis. Moreover, ROS are directly or indirectly implicated in both normal (physiological) and accelerated aging [2]. Therefore, it is not surprising that ROS are reported to play an important role in the etiology of several age-related diseases [3]. There are many sources of ROS in the cell, including cytochrome P450 and other enzymes as well as the mitochondrial electron transport chain (mtETC) [4]. The latter is especially significant for several reasons: it produces ROS in its normal functioning, and the amount of ROS may increase greatly in a malfunctioning mtETC, which in turn increases the total ROS level (the mitochondrial vicious cycle) [5]. In this way, mitochondria can be implicated in the process of aging, both normal and accelerated, as both involve ROS accumulation [6]. Moreover, mitochondrial reactive oxygen species (mtROS) are involved in the regulation of several other physiological and pathological processes [7]. The involvement of mtROS in the pathogenesis of many diseases has led to the idea of targeting them in the therapy of various disorders [8][9][10][11]. It seems that age-related diseases are especially suited for this, as they are associated with mitochondria through the process of aging. However, at present, we do not know exactly many fundamental aspects of the involvement of mtROS in the pathogenesis of age-related diseases. First, the cause of production and disposition of ROS by mitochondria is not completely known. Similarly, the question of how increased ROS levels induce pathophysiological events involved in disease onset and/or progression still needs answering.
The next question is: What element in mitochondria is primarily responsible for pathogenic ROS overproduction? In this review, we address these problems in relation to age-related macular degeneration (AMD).

Age-Related Macular Degeneration-An Eye Disease with the Critical Role of ROS in Its Pathogenesis

Age-related macular degeneration is a complex, progressive eye disease which is the main reason for legal blindness and vision loss in the elderly worldwide [12]. The estimated global pooled prevalence of AMD in 2013 was about 17%; the number of individuals affected by AMD is 196 million, projected to increase to 288 million in 2040 [13,14]. The global costs of AMD are €101.1 million in the UK, €60.5 million in Italy, €91.4 million in Germany, and €51.3 million in France [15][16][17]. Therefore, the burden of AMD is an emerging element of global vision loss. Despite the prevalence and high cost of medical care, available therapeutic options for AMD are very limited. This is likely due to the complexity of the disease and incomplete knowledge of the mechanisms underlying its pathogenesis. Therefore, studies on molecular aspects of AMD are justified and needed. However, such studies encounter major problems. First, molecular studies in live human subjects are limited, and postmortem research may add some misinformation. Second, animal models of the disease are often criticized as inadequate due to a substantial difference between human and rodent retinas [18]. Third, various cellular models of AMD may display some features that cannot be observed in live retinas, in which retinal pigment epithelium (RPE) cells do not proliferate in situ due to spatial constraints and other limitations.

The main clinical symptom of AMD is the impairment of central vision, which may eventually result in complete vision loss (Figure 1). Chronologically, AMD can be categorized as early and late. Early AMD is typified by the presence of, and an increase in, deposits of extracellular debris between Bruch's membrane and the RPE. These deposits are called drusen, and their abundance increases with AMD progression [19]. Late AMD may be manifested in two forms, atrophic (dry) and neovascular (wet). The former is characterized by the development of geographic atrophy (GA) and worsening of central vision. There is no efficient treatment for dry AMD. Wet AMD causes more rapid and pronounced changes than observed in dry AMD and is characterized by choroidal neovascularization (CNV). New blood vessels often leak into the retina and cause hemorrhage, retinal detachment, and disciform scars. Currently, wet AMD is treated with repeated intravitreous injections of anti-vascular endothelial growth factor A (VEGFA) [20].

Figure 1. Presented are color pictures of fundus for normal retina and retina with changes typical for dry and wet AMD, two of its basic, clinically distinguished categories. Dry AMD is typified by the presence of drusen, yellowish objects between choroid (Ch) and Bruch's membrane (BM), and photoreceptor (PR) loss. Wet AMD is associated with abnormal angiogenesis (choroidal neovascularization), leading to bleeding resulting in lifting up the macula from its normal position. Individuals affected by AMD in its advanced stage may experience problems with central vision.

It is hypothesized and supported by many studies that the pathogenesis of AMD starts in the RPE, a single layer of cells located between the neuroretina and Bruch's membrane [21].
The RPE plays crucial functions in maintaining retinal homeostasis [22]. Age-related macular degeneration is a complex disease, and for the majority of complex traits, the mechanisms underlying their pathogenesis are not exactly known [23]. These mechanisms likely include interactions between genetic, environmental and lifestyle risk factors that may lead to aberrant processes occurring in the retina [24]. Advanced age is by definition the main AMD risk factor. The main genetic factors are associated with several loci, including the histocompatibility locus antigen (HLA) and the alternative complement pathway (CFH, CFB, CFI and C3), as well as the C2 and ARMS2/HTRA1 gene regions [25,26]. Other frequently cited risk factors are female sex, white race, smoking, a diet rich in polyunsaturated acids and blue light exposure [26]. Damage to the RPE is often observed in early stages of AMD, although in some cases it may be preceded by photoreceptor loss [27,28]. Many experiments and clinically relevant data support that RPE damage is directly or indirectly caused by oxidative stress [29][30][31][32]. It is not easy to determine the source of this stress, as most, if not all, AMD risk factors can be associated with overproduction of ROS, elevated levels of which are observed in oxidative stress. Many studies performed on human subjects, experimental animals, and cell cultures clearly show oxidative damage to RPE and choriocapillaris.
However, the precise mechanism of RPE damage, the source of elevated levels of ROS and, most importantly, the exact association between oxidative effects occurring in the RPE and the onset and progression of AMD still need explanation [33]. In this review, we present arguments that mitochondrial dysfunction underlain by damage to mitochondrial DNA (mtDNA) may be the reason for increased ROS production in the RPE associated with AMD onset and progression. This concept is not entirely new, but we present certain novel arguments and update some previous data.

Mitochondria-A Central Structure in AMD Pathogenesis

Retinal pigment epithelium cells in the central retina do not proliferate in situ due to spatial constraints, and adult stem cells have not been identified in that structure. Therefore, if some of the cells in the central retina are damaged, they can be replaced by cells from the periphery that can proliferate, as spatial constraints there are more relaxed than in the central region. However, if these peripheral cells are affected by stress-induced senescence, this mechanism fails [34]. The closer to the center damaged cells are located, the less chance there is of replacing them with proliferating cells from the periphery. This mechanism may indicate cellular senescence as a major effect associated with atrophy of the retina observed in dry AMD and may explain, at least in part, why the macula and not the peripheral retina is prone to degeneration by AMD pathogenesis factors. Dieguez et al. created a dry AMD model by superior cervical ganglionectomy in C57BL/6J mice to explain such localized susceptibility to AMD [35]. This model limits the occurrence of effects to the temporal region of the RPE/outer retina. These authors found that the temporal region was characterized by a lower melanin content, thicker basal infoldings, higher mitochondrial mass, and higher levels of antioxidant enzymes as compared with its nasal counterpart. Superior cervical ganglionectomy resulted in a lower efficacy of the antioxidant system and a lower mass of mitochondria. Damage to mitochondria was observed exclusively in the temporal region of the RPE. This model was created not to explore the role of mitochondria in AMD pathogenesis but to study its general mechanisms. However, it unequivocally indicates that mitochondria are main players in AMD pathophysiology. Moreover, damage to mitochondria in AMD may contribute to topological features of the disease, in particular its macular localization. Zhao et al. showed that postnatal inhibition of oxidative phosphorylation in mouse RPE mitochondria resulted in mechanistic target of rapamycin (mTOR)-mediated changes typical for degenerated RPE in AMD [36]. Feher et al. observed a decrease in the number and area (assessed by electron microscopy) of mitochondria and loss of cristae and matrix density in aging human RPE, but these changes were more pronounced in individuals with AMD than in subjects free of this disease [37]. These studies revealed that the extent of morphological changes in RPE observed in AMD could be reached in non-AMD subjects only after an additional 10-15 years, suggesting that AMD may be characterized by accelerated aging. A proteomic analysis performed in the Ferrington laboratory revealed changes in the expression of proteins involved in mitochondrial protein refolding and trafficking in the RPE of AMD patients as compared with non-AMD subjects [38].
In a subsequent work from that laboratory, altered expression of the α-, β- and δ-subunits of the catalytic portion of ATP synthase, subunit VIb of the cytochrome c oxidase complex, mitofilin, mtHsp70, and the mitochondrial translation factor Tu was observed [39]. A positive correlation was observed between changes in the expression of these proteins and the stage of AMD progression. Ferrington et al. used primary RPE cells obtained from AMD donors and control subjects within 24 h of death [40]. They observed that AMD RPE had lower respiration and ATP production than controls. However, treatment with a high concentration of hydrogen peroxide did not affect ATP production in AMD patients, in contrast to controls, who displayed about a one-third decrease in ATP synthesis, suggesting a resistance to oxidative stress in RPE cells from AMD donors. This was also supported by an even more pronounced decrease in maximal respiration and spare capacity in control than in AMD samples. Moreover, AMD cells were more resistant to oxidant-induced death. No difference in mtDNA content was observed, but RPE cells from AMD donors had higher levels of the transcriptional coactivator peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α). In general, a higher activity of PGC-1α is associated with better mitochondrial function [41]. The authors did not provide convincing arguments to explain this discrepancy. That work showed for the first time that AMD may be associated with an energy crisis located in mitochondria. Therefore, mitochondrial dysfunction could play a central role in AMD pathogenesis. This role is closely associated with ROS overproduction, which can be considered as both cause and consequence of mitochondrial dysfunction. Increased PGC-1α levels can result from challenging mitochondrial quality control to face oxidative stress. The resistance to oxidative stress in AMD is in apparent conflict with the causative role of this stress in AMD pathogenesis and may reflect adaptive mechanisms. Ferrington et al. concluded that the cells they used, i.e., cultures of primary RPE cells isolated from AMD subjects, were a good cellular model to study AMD pathogenesis. Prior to Ferrington's paper, Golestaneh et al. showed that the susceptibility of cultured human RPE cells obtained from AMD donors did not differ from cells obtained from donors without AMD after a 24 h incubation with hydrogen peroxide, but after a 48 h incubation, RPE cells from AMD donors were more susceptible to oxidative stress-induced cell death [42]. These authors also showed that mitochondria of RPE cells from AMD donors were dysfunctional, producing lower levels of ATP, whereas ATP production by glycolysis was higher, suggesting that ATP was essentially produced by glycolysis rather than mitochondrial activity, further supporting the hypothesis of the significance of an energy crisis allocated to mitochondria in AMD pathogenesis.

Generation and Regulation of ROS by Mitochondria

Reactive oxygen species are continuously produced in the cell during its normal metabolism and play an important role in intracellular signaling. The normal level of ROS is controlled by the cellular antioxidant system, containing antioxidant enzymes, small molecular weight antioxidants, and DNA repair proteins. However, in some intra- and extracellular conditions, the ROS level can surpass the antioxidant potential of the cells, leading to the state described as oxidative stress.
The mitochondrial electron transport chain (mtETC), along with cytochrome P450, nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX), and xanthine oxidase (XO), is among the major sources of intracellular ROS [43]. Mitochondrial ROS seem to be of particular interest in the field of intracellular ROS metabolism and signaling, as the antioxidant system in mitochondria is far less known than its counterparts in the rest of the cell. The mitochondrial electron transport chain produces ROS during its normal functioning, and they play an important role in cellular signaling [1,44]. However, the extent of ROS produced by the mtETC may increase during mtETC malfunctions. These extra ROS are mainly produced by complexes I and III of the mtETC, coupled with induced proton leak [45]. A small imbalance in mtETC functions may lead to a transient accumulation of ROS, which could damage mtDNA in genes encoding mtETC components. Expression of these damaged genes may lead to synthesis of malfunctioning mtETC proteins, further accumulation of ROS, and even more massive damage to mtDNA, leading in turn to the synthesis of faulty mtETC proteins and further ROS overproduction in repeated cycles [46]. This state is referred to as the "mitochondrial vicious cycle", which is considered an important element of normal and premature aging [5,47]. However, mtDNA damage does not always result in increased ROS production [48]. Moreover, the sites of ROS production and the location of mtDNA, which is principally attached to the matrix side of the inner mitochondrial membrane, overlap, so mtDNA must be repaired under ROS "bombardment", which affects DNA repair proteins and lowers the efficacy of DNA repair [49]. Moreover, such a situation creates an opportunity to form ROS-mediated mtDNA-protein crosslinks, which are among the most serious, if not the most serious, forms of DNA damage [50]. Therefore, the maintenance of mtDNA can be crucial to the proper functioning of mitochondria and can play an important role in the pathogenesis of mitochondria- or ROS-related diseases. Unrepaired or misrepaired damage to mtDNA may contribute to aging, so the DNA damage response in mitochondria can also be important in age-related diseases.

DNA Damage Response in mtDNA

Human mtDNA is a double-stranded, closed DNA molecule of 16,569 base pairs (bp). It is usually referred to as "circular", although the probability of mtDNA adopting the structure of a perfect circle is negligibly low. However, diagramming mtDNA as a circle is useful for presenting and analyzing its structure and function. In reality, mtDNA may adopt complex structures with supercoils and interwounds. Mitochondrial DNA, contrary to its nuclear counterpart, contains almost exclusively coding sequences. It has genes coding for 13 polypeptides, all of which are components of the mtETC, and several functional RNA species. The major non-coding region of mtDNA, the mitochondrial control region (CR), is important for mtDNA replication and transcription and has the highest mutational rate in mtDNA; this rate has not been exactly determined, which may follow from yet unrecognized source(s) of such hypervariability in the CR [51]. One of the reasons for the high mutation rate in the CR is the presence of a replication initiation site (origin) for the heavy mtDNA strand, which is denatured in each replication cycle and prone to DNA-damaging factors.
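Returning briefly to the vicious-cycle feedback described at the beginning of this section, the toy calculation below illustrates its qualitative behavior in a few lines. It is purely illustrative: every parameter value is invented and the sketch is not a fitted biological model; it only shows how the same feedback can either be contained by the antioxidant system or run away, depending on the starting level of damage.

```python
# Toy positive-feedback model of the "mitochondrial vicious cycle":
# damaged mtDNA raises ROS output, and ROS exceeding the antioxidant
# capacity adds new mtDNA damage. All parameter values are invented
# for illustration only.
def simulate(steps=10, damage=0.5, basal_ros=1.0,
             ros_per_damage=0.5, antioxidant_capacity=1.2,
             damage_per_excess_ros=0.3):
    history = []
    for _ in range(steps):
        ros = basal_ros + ros_per_damage * damage
        excess = max(0.0, ros - antioxidant_capacity)  # unbuffered ROS
        damage += damage_per_excess_ros * excess       # new mtDNA damage
        history.append((ros, damage))
    return history

# With these numbers the loop has an unstable threshold at damage = 0.4:
# below it the antioxidant system contains ROS; above it damage escalates
# in repeated cycles, mirroring the vicious-cycle description above.
for step, (ros, dmg) in enumerate(simulate(damage=0.5), start=1):
    print(f"step {step:2d}: ROS = {ros:5.2f}, mtDNA damage = {dmg:5.2f}")
```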
There are many deletions and point mutations in mtDNA, and some of them are associated with serious human disorders, such as ophthalmoplegia, migraine, dysphagia, sensorineural hearing loss, cognitive decline, and others [52,53]. Each nucleated human cell may contain many molecules of mtDNA of different sequences, which leads to the state referred to as "heteroplasmy." In this state, mutated mtDNAs coexist with their normal counterparts, and usually a pathogenic mutation must occur at a level high enough to contribute to a pathological phenotype; in several cases that level has been determined as 85% [54]. Phenotypic consequences of mtDNA damage are determined by several factors, including the number of affected mitochondria, environmental conditions, and mechanisms of mtDNA maintenance [55]. There is no reason to state that mtDNA itself, i.e., as a chemical molecule, is differently susceptible to DNA damage than its nuclear counterpart. However, the subcellular localization and organization of mtDNA are substantially different than nuclear DNA (nDNA), which, along with different DNA damage responses in mitochondria and the nucleus, determines differences in the DNA damage spectrum between mitochondrial and nuclear DNAs. These differences also influence the precision of measurements of mtDNA damage, which, in general, is lower than for nDNA. Environmental damage to mtDNA is induced by essentially the same factors as in nDNA, but they may present different mechanisms of action. The main reason for this is the different metabolism of these factors and their intermediate products in the two organelles [46]. The main difference between mtDNA and nDNA damage arises from exposure to endogenous factors. Due to the close proximity of the mtETC, mtDNA is prone to oxidative damage, which may take the form of small modifications to the nitrogen bases and the deoxyribose ring, apurinic/apyrimidinic (AP) sites, strand breaks, chemical adducts of bases, and others [56]. Hydrogen peroxide, a frequently used inducer of oxidative stress, induces mainly AP sites in human cell cultures [57]. Apurinic/apyrimidinic sites can be converted to single-strand breaks (SSBs), and together these can be the principal form of mtDNA damage [58]. Moreover, damage to the genes encoding mtETC components results in dysfunction of these components, leading to increased ROS production by the mtETC, which may induce further damage to these genes: the mitochondrial vicious cycle [5]. Mitochondrial metabolism and the composition of the mitochondrial membrane underlie the production of reactive aldehydes in mitochondria, which may contribute to the formation of mtDNA adducts [59]. Damage specific to mtDNA results mainly from the specificity of the mitochondrial systems that deal with it. Base excision repair (BER) operates quite efficiently in mitochondria, and it can remove 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxoG), a major oxidative modification of mtDNA, but if it fails, 8-oxoG can be further oxidized to produce more stable and mutagenic forms, which may interfere with DNA replication [60,61]. Other aspects of mtDNA damage following its metabolism are presented in the next section. Similar to the nucleus, damage to mtDNA may affect its replication and transcription as well as expression of mitochondrial genes. If non-repaired or misrepaired, mtDNA damage may turn into a mutation, which can be maternally inherited.
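To make the heteroplasmy threshold mentioned above concrete, the sketch below estimates a heteroplasmy level from hypothetical deep-sequencing read counts and compares it with the ~85% figure cited in the text. The function name and the numbers are ours, and in reality the phenotypic threshold varies between mutations and tissues.

```python
def heteroplasmy_level(mutant_reads: int, total_reads: int) -> float:
    """Fraction of mtDNA molecules carrying the variant at a position,
    approximated by the variant allele fraction in sequencing reads."""
    if total_reads == 0:
        raise ValueError("no coverage at this position")
    return mutant_reads / total_reads

# The review cites ~85% as a typical phenotypic threshold for some
# pathogenic mtDNA mutations (value taken from the text, not universal).
THRESHOLD = 0.85

level = heteroplasmy_level(mutant_reads=4620, total_reads=5000)
print(f"heteroplasmy: {level:.1%} -",
      "above threshold" if level >= THRESHOLD else "below threshold")
```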
DNA damage response (DDR) is an evolutionary reaction to DNA damage, which may interfere with the process of passing genetic information from one generation to the next. DDR has been recognized as less efficient in mitochondria than in the nucleus, contributing to a higher mutation rate in mtDNA than in nDNA [62,63]. DDR in mitochondria (mtDDR) is coupled with mitochondrial quality control (mtQC), a system responsible for the proper maintenance and functioning of mitochondria, including mitochondrial biogenesis and mitophagy [64]. One of the nuclear DDR pathways, apoptosis, is regulated by a controlled release of cytochrome c from mitochondria [65]. Similar to the nucleus, DNA repair is the main reaction in mtDDR (Figure 2). Initially, the mitochondrial DNA repair system was considered much poorer than its nuclear counterpart. At present, an emerging similarity between these two DNA repair systems has been recognized, which is supported by recent discoveries in that field [66]. First, mtDNA is not so "naked" as it used to be believed, because it is associated with several proteins involved in its replication, transcription, and maintenance. They are: polymerase gamma (PolG, the mitochondrial replicase); single-stranded DNA binding protein 1 (mtSSB); Twinkle mtDNA helicase; transcription factor A, mitochondrial (TFAM); and prohibitin (PHB). Other proteins are added to that list to form a structure called the nucleoid [67]. Therefore, lack of association with proteins is no longer an argument supporting a higher rate of mutations in mtDNA than in nDNA. Furthermore, that association is itself a controversial issue in mutagenesis, as DNA-damaging factors, usually a quantum of radiation or a molecule of a chemical compound, are small in size and can find their way even to DNA tightly associated with proteins. On the other hand, when highly organized DNA is damaged, its repair may be difficult, as relatively large DNA repair proteins do not have direct access to the sites of damage.

Figure 2. In general, DNA damage in mitochondrial DNA (mtDNA) can be repaired or tolerated. Highly damaged mtDNA can be degraded, and mitophagy can contribute to this process although its exact nature is unknown. BER-base excision repair; Lp and Sp-long and short patch, respectively; MMR-mismatch repair; AER-alternative excision repair; SSBR and DSBR-single- and double-strand break repair, respectively; NHEJ-non-homologous end joining; HRR-homologous recombination repair; TLS-translesion synthesis. Remodeling of the mitochondrial nucleoid has not been shown as a mtDDR pathway, but it can be assumed that it occurs if needed. The mechanism of DNA damage tolerance is hardly known in mtDNA, symbolized by a question mark.

Although mtDNA is much smaller than nDNA, the mechanism of its replication is poorly understood, and ribonucleotides can be present in mtDNA after it completes its replication [68]. Polymerase gamma has a proofreading activity and is supported by the action of PrimPol (primase and DNA-directed polymerase), which provides primers for PolG, plays a role in mtDNA damage tolerance, and is required for reinitiation of replication stalled by mtDNA damage [69]. There are two distinct features of mtDNA maintenance distinguishing it from nDNA. First, highly damaged mtDNA can be degraded, as there is no reason for the cell to stop the cell cycle and begin apoptosis, as can be observed in the nucleus when the extent of DNA damage exceeds the cell repair capacity. However, the precise mechanism of mtDNA degradation is not known; mitophagy, mitochondrial endonucleases, and PolG can be involved [70]. Moreover, not only extensive but also small, yet persistent, damage to mtDNA may induce its degradation [48]. An imbalance in the number of mtDNA copies may be associated with the pathogenesis of several mitochondrial diseases [71].
Second, nucleotide excision repair (NER), the most versatile DNA repair pathway, has not been proven to act in mitochondria, although the presence of some proteins playing a role in a form of nuclear NER has been observed in mitochondria [72,73]. Base excision repair acts in short- and long-patch modes in mitochondria [74]. This probably reflects the great number of oxidative modifications to mtDNA bases resulting from the high concentration of ROS in mitochondria. Alternative excision repair (AER) may partly compensate for the lack of removal of UV-induced DNA damage by NER in mitochondrial DNA, as shown in yeast [75]. DNA double-strand breaks (DSBs), belonging to the most serious DNA lesions, can, similar to the nucleus, be mainly repaired by homologous recombination repair (HRR) and non-homologous end joining (NHEJ), but the assortment of proteins involved in these pathways and the mechanisms of their action can be different in mitochondria compared to the nucleus [76,77]. Moreover, microhomology-mediated NHEJ, which can also be seen as a functional variant of HRR, seems to dominate in DSB repair in mitochondria. Due to the high concentration of ROS, DNA repair enzymes may be crosslinked with mtDNA, which is the case for PolG acting on AP sites oxidized at C1' [78]. The crosslink can be induced by 2-deoxyribonolactone (dL), a product of the attack of hydroxyl radical on C1' in an mtDNA nucleotide. Obviously, this is not the only possibility of forming mtDNA-protein crosslinks when attempting to repair damage to mtDNA. This situation is not very specific to mtDNA, because nDNA can also be crosslinked with DNA repair proteins. However, the mechanism of repairing such crosslinks in mitochondria is poorly understood and is expected to be less effective than in the nucleus. This problem has been recently reviewed by Caston and Demple [50]. If DNA damage in the nucleus cannot be repaired before replication or mitosis, the cell cycle is stopped to give the cell more time for repair, and the cell may activate a programmed death pathway, or the damage can be tolerated. Translesion synthesis (TLS) is a pathway enabling the cell to replicate its DNA despite damage, and it is a major mechanism of DNA damage tolerance, an important component of DDR [79]. There are TLS polymerases that specialize in bypassing DNA damage and in DNA synthesis beyond the damage: "inserters" and "extenders", respectively. So far, such DNA polymerases have not been identified in mitochondria but, somewhat surprisingly, it has been suggested that TLS-like mechanisms in mitochondria are associated with PolG, the mitochondrial replicase [80]. As stated above, PolG can be supported in its action by PrimPol [68].

mtDNA Damage and Repair in AMD

Accumulation of mtDNA damage in mitochondria may result from the several mechanisms listed in the previous sections. This process can be associated with both normal and accelerated aging as well as several other pathological conditions [81]. Accumulation of the common 4977 bp deletion in mtDNA (∆mtDNA 4977) was observed in the aging but not the fetal human RPE and neural retina [82]. Therefore, aging in the retina is linked with accumulation of mutations in mtDNA, leading to increased instability of mtDNA and dysfunctions of mitochondria (the vicious cycle) and the retina. Comparative analysis of variation in mtDNA in different tissues of AMD patients and non-AMD controls was performed in several studies. Jones et al.
found that the mitochondrial haplogroup H was protective against AMD and soft drusen development, whereas the U haplogroup was associated with pronounced general detrimental changes in the RPE [83]. Those studies were performed on blood obtained from a cohort of AMD patients and controls enrolled in the Blue Mountains Eye Study [84]. Udar et al. studied mtDNA haplogroups in the retinas of 10 AMD patients and 11 control subjects [85]. They found that the mt1626T>C and mt73A>G single-nucleotide polymorphisms (SNPs), belonging to the J and T haplogroups, occurred more frequently in retinas of AMD patients than controls. This association was confirmed in the blood of 99 AMD patients and 92 controls. Cantar et al. observed an association between the mt4917A>G polymorphism belonging to the T haplogroup and AMD occurrence [86]. This polymorphism is located in a gene encoding NADH dehydrogenase, so the authors concluded that the association they observed was characterized by increased ROS production. A similar conclusion was drawn by SanGiovanni et al., who observed an association between AMD occurrence and the mt11812A>G polymorphism in the NADH ubiquinone oxidoreductase gene [87]. The protective effect of the H haplogroup against AMD and a higher risk associated with the J haplogroup were confirmed by Kenney et al. [88]. To investigate the effects of different haplogroups, these authors created ARPE-19-based cybrids containing identical nuclei but different mtDNA variants. They observed a difference in the energetic profile between the H and J haplogroups and concluded that this might underlie the mtDNA-nDNA interaction, resulting in a change in the expression of seven mitochondrial and eight nuclear genes. The latter included genes encoding proteins of the alternative complement, inflammation, and apoptotic pathways, with the potential to play an important role in AMD pathogenesis. Transmitochondrial cell hybrids were also used by these authors to demonstrate that cybrids harboring AMD mitochondria displayed reduced viability, a decreased number of mtDNA copies, downregulation of genes involved in mtDNA metabolism and antioxidant defense, altered expression of genes implicated in apoptosis, autophagy, and ER stress, as well as more damage to mtDNA [89]. The general conclusion on the protective effect of H and the increased risk associated with J was confirmed in another study with 200 wet AMD Austrian patients [90]. However, such an association was not confirmed in a large French cohort (1224 wet AMD patients and 559 individuals with normal fundus) [91]. Ballinger et al. observed damage to mtDNA in human transformed RPE cells exposed to hydrogen peroxide [92]. That damage was not completely repaired after a 3 h repair incubation. Moreover, these authors did not observe any DNA damage in the three nuclear loci they investigated, and based on that observation, they concluded that there was preferential damage to mtDNA under hydrogen peroxide treatment. Furthermore, these authors concluded, on the basis of MTT reduction, that there was decreased redox function in RPE cells upon hydrogen peroxide treatment. However, these conclusions should not be generalized, as other works have shown that RPE cells, including the ARPE-19 cell line, are prone to H2O2-induced damage to their nuclear DNA [93]. Karunadharma et al. compared mtDNA damage typical for aging with that for AMD in human donor eyes obtained from an eye bank [94].
They noted that normal aging was associated with an increase in the common deletion region in mtDNA, but AMD was linked with elevated levels of mtDNA damage as compared with age-matched subjects without AMD. Based on the analysis of two nuclear genes, the authors noticed that mtDNA accumulated about eight times more damage than its nuclear counterpart. These authors concluded that damage to mtDNA may be an important element of AMD pathogenesis, as it may underlie the RPE dysfunction crucial for AMD. Godley et al. showed that mitochondria isolated from primary human RPE cells exposed to blue light produced hydroxyl radical, superoxide and singlet oxygen [95]. As a result, these authors observed increased mtDNA damage in RPE cells exposed to blue light as compared with cells kept in the dark. Studies with antioxidants suggested that the superoxide anion might be primarily responsible for the observed mtDNA damage. That exposure was also associated with a small loss of mitochondrial activity. Therefore, blue light, a risk factor for AMD, may contribute to its pathogenesis through the induction of mtROS and mtDNA damage. Similar to the study of Ballinger et al., these authors did not observe damage to nuclear DNA, which was assessed by qPCR. The use of a small set of low molecular weight antioxidants to draw a definite conclusion on the involvement of a particular ROS in observed effects is somewhat uncertain due to the relatively low specificity of these compounds. Moreover, these authors did not present unequivocal evidence of a causative relationship between antioxidant-induced ROS scavenging and the observed decrease in mtDNA damage. The lack of nDNA damage cannot be generalized due to the experimental technique employed and data obtained in other laboratories. These and other studies showing a greater extent of specific damage to mtDNA in retinal cells than to nDNA usually do not contain a comparative analysis of other tissues. It is known that the rate of mutations in mtDNA is about an order of magnitude higher than in nDNA. These mutations are preceded by DNA damage, which, in general, is less efficiently repaired in mitochondria than in the nucleus. Therefore, it is not surprising that the extent of DNA lesions observed in mtDNA in the retina is greater than in its nuclear counterpart. However, this may be different in other tissues and organs, and it is important to assess it in the retina in relation to other sites. This problem was addressed by Kenney et al., who showed a higher number of DNA rearrangements and deletions in mtDNA in retinas of AMD patients than non-AMD subjects [96]. Moreover, these authors observed a higher number of mtDNA changes in the neural retina than in blood for both AMD patients and controls. They reported previously that the D-loop in AMD retinas had more genetic variations than in normal subjects [85]. The D-loop contains all three promoters for transcription of mitochondrial genes and the origin of replication of the heavy strand. Therefore, these results may suggest that disturbance in the replication and transcription of mtDNA caused by its damage/variability may contribute to AMD pathogenesis. However, these studies included a relatively small number of retina samples (13), which were compared with a higher number of blood samples (133 and 138). Moreover, retina samples taken postmortem were compared with blood samples obtained from live subjects.
There is a great difference between the structure and size of mtDNA and nDNA, resulting in a lack of a reliable technique to measure DNA damage in both organelles. To evaluate damage to mtDNA, quantitative real-time polymerase chain reaction (qRT-PCR) is performed on two mtDNA fragments, one "long" (about 1000 bp) and the other "short" (up to 100 bp), and the ratio of DNA damage in these fragments is calculated [97]. Several fragments can be evaluated in this way, covering a significant portion of the entire mtDNA. This is practically impossible for nDNA, where qRT-PCR can be applied to measure DNA damage in specific regions of genes rather than to provide information on gross genomic DNA damage. Only damage that stops DNA polymerase can be detected in this way and, somewhat paradoxically, 8-oxoG and some other small oxidative modifications of DNA bases do not stop DNA synthesis. It is out of the scope of this review to discuss all problems associated with the measurement of DNA damage in the nucleus and mitochondria, but all analyses comparing the extent of DNA damage in these two organelles should be considered skeptically. Lin et al. showed that the extent of mtDNA damage in RPE cells was positively correlated with the age of eye donors [98]. However, the repair capacity was inversely correlated with donors' age. Furthermore, damage to mtDNA was preferentially located in the macular region rather than the periphery. A positive correlation was observed between the extent of mtDNA damage and AMD grading, which in turn was negatively correlated with DNA repair capacity. As stated above, several attempts have been made to address the question as to why the central retina is particularly susceptible to initial factors of AMD pathogenesis. Terluk et al. investigated the distribution of mtDNA damage in various regions of RPE and neural retina taken from AMD subjects [99]. They observed that mtDNA damage was limited to the RPE, where its distribution did not differ between the macular and peripheral regions. These results are not in line with those obtained by Lin et al. [98]. Moreover, Terluk et al. concluded that damage to mtDNA was localized in regions of the mitochondrial genome that may affect mitochondrial functions. In fact, due to the almost complete packing of mtDNA with coding sequences, it is hard to find regions of it without potential consequences for mitochondrial function. Wang et al. observed an increased concentration of 8-oxoG in aged rodent RPE and choroid and higher levels of DNA damage in mitochondria than in the nucleus [100]. These authors found decreased mRNA expression of some DNA repair enzymes in aged RPE and choroid, but this relationship was confirmed at the protein level only for 8-oxoguanine-DNA glycosylase 1 (OGG1) and mutY homolog (MYH), which are primarily involved in removing oxidative DNA damage in the nucleus. In their subsequent work, these authors confirmed preferential damage to mtDNA and a decrease in the expression of some DNA repair enzymes also in the neural retina: photoreceptors and retinal ganglion cells [101]. Almost all mitochondrial proteins, including all DNA repair proteins, are encoded by nuclear genes. Therefore, it is clear that changes in nDNA may be reflected in mtDDR. The reverse relationship could also be considered: a mutation in mtDNA leading to ROS overproduction and damage to nDNA. Is this of any relevance to AMD?
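Before turning to that question, the long-/short-amplicon qPCR approach described at the start of this section can be made concrete. The sketch below follows the usual assumptions behind such assays (polymerase-blocking lesions distributed according to Poisson statistics, with the short amplicon serving only as a copy-number control); the function name, example Ct values, and the assumed amplification efficiency of 2.0 are ours for illustration, not taken from the cited studies. The key step is the Poisson zero-class relation: the copy-number-normalized amplification ratio estimates the fraction of lesion-free long fragments, so its negative logarithm gives the mean lesion count per fragment.

```python
import math

def lesion_frequency(ct_long_treated, ct_long_control,
                     ct_short_treated, ct_short_control,
                     long_bp=1000, efficiency=2.0):
    """Estimate lesions per long fragment (and per 10 kb) from qPCR Ct values.

    The "long" amplicon reports polymerase-blocking damage; the "short"
    amplicon is assumed too small to be hit and normalizes for mtDNA copy
    number. Amplification scales as efficiency**(-Ct); assuming Poisson-
    distributed lesions, the zero-class fraction satisfies
    A_treated / A_control = exp(-lambda), so lambda = -ln(ratio).
    """
    # Relative amplification of each amplicon (treated vs control)
    rel_long = efficiency ** (ct_long_control - ct_long_treated)
    rel_short = efficiency ** (ct_short_control - ct_short_treated)
    # Copy-number-normalized amplification ratio for the long fragment
    ratio = rel_long / rel_short
    lam = -math.log(ratio)           # mean lesions per long fragment
    per_10kb = lam * 10_000 / long_bp
    return lam, per_10kb

# Example: damage shifts the long amplicon by ~1 cycle, short unchanged
lam, per_10kb = lesion_frequency(26.0, 25.0, 20.0, 20.0)
print(f"{lam:.2f} lesions per fragment, i.e. about {per_10kb:.1f} per 10 kb")
```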
The problem was addressed by Miceli and Jazwinski, who showed that ARPE-19 cells deprived of their mtDNA changed the pattern of expression of some nuclear genes in a way similar to that observed in AMD [102]. That interesting study has several drawbacks, however. First, there was no control group, which is important because ethidium bromide, which was used to eliminate mtDNA, has a strong affinity for double-stranded DNA and could intercalate into nDNA and affect the expression of nuclear genes. Second, cells without mtDNA are deprived of a main source of energy, which is not always the case in mitochondrial dysfunctions and which in turn is apparently reflected in the expression of nuclear genes. Importantly, both DNA intercalation and energy deficit should rather not result in any preference in affecting the expression of particular genes.

Conclusions and Perspectives

In this review, we showed that mitochondria might be a serious, if not the most serious, source of ROS playing a major role in AMD pathogenesis. We also discussed the main role of damage to mtDNA in the production of ROS. At least two hypotheses supporting the central role of mtROS and damage to mtDNA in AMD pathogenesis can be considered (Figure 3). One is based on the assumption that the initial factor is a ROS-inducing agent, the other that it damages mitochondria in a ROS-independent mode. Both scenarios lead to mitochondrial dysfunction that results in ROS overproduction and a vicious cycle, leading to energy deficit and disturbances in cell functions, including such basic processes as replication and transcription. If cellular defense systems, which are also affected by increased ROS concentration, are not able to counterbalance detrimental changes resulting from mitochondrial dysfunction, the cell may accumulate pathological changes and ultimately die. However, the initial portion of ROS may also induce an adaptive response to oxidative stress, and a cell may further function in a pathological state. Such a cell is likely senescent, as senescence, and not cell death, could be primarily associated with the RPE degeneration observed in AMD [34]. Therefore, the role of mitochondrial dysfunction in stress-induced senescence needs further study. PGC-1α is potentially an important element at the crossroads of all these pathways: oxidative stress, mitochondrial dysfunction, mitochondrial defense, and senescence [103]. Satish et al. showed that activation of PGC-1α in ARPE-19 cells resulted in upregulation of mitochondrial genes and enhanced mitochondrial function in RPE cells by increasing basal and maximal respiration rates [104]. Another possibility is that some cells are intrinsically resistant to this stress. Moreover, mtDNA can also be considered a central player in AMD pathogenesis, as it can be a primary target for an initial factor, independently of whether that factor directly generates ROS or not. Damaged mtDNA likely results in disturbances in mitochondrial gene expression, as mtDNA damage may occur in either a coding sequence or a regulatory sequence. As we stated above, no adult stem cells have been identified in the human retina. However, Salero et al. identified a subpopulation of human RPE cells that, after exposure to growth factors, displayed properties of adult stem cells [105]. Therefore, an alternative mechanism of regeneration of damaged RPE, and thus an alternative mechanism of AMD pathogenesis, can be considered. Consequently, mitochondrial metabolism in the RPE cell subpopulation indicated by Salero et al.
should be investigated.

Figure 3. Possible involvement of mitochondrial reactive oxygen species (mtROS) and damage to mtDNA in the pathogenesis of AMD. The initial factor, which can belong to known AMD risk factors, may induce production of reactive oxygen species (ROS) or damage mitochondria. Both possibilities may result in damage to mtDNA, which in turn may initiate the mitochondrial vicious cycle, leading to energy deficit and ultimately to cell death. Initially and finally produced ROS can be neutralized by the cellular antioxidant system, and mitochondrial dysfunction in general and mtDNA damage in particular can be ameliorated by mitochondrial quality control (mtQC), but both are repressed by ROS production. Some cells may adapt to stress conditions and survive, but they may display stress-induced premature senescence, leading to an inability to replace degenerated cells.

Study of the molecular aspects of AMD pathogenesis suffers from at least two disadvantages. The first is the lack of possibility of conducting research on target tissue in live subjects. The other is the lack of a reliable animal model of AMD, as this disease has a significantly different phenotype in rodents than in humans.
Several cellular models have been proposed, including cells taken postmortem from the retinas of AMD patients. Recently, we proposed an AMD model based on cells with a double knockout of the NRF2 (nuclear factor-erythroid 2-related factor-2) and PGC-1α genes [106]. Earlier, Zhang et al. showed that mice with repressed PGC-1α fed a high-fat diet provide a promising model to study AMD pathogenesis [107]. The RPE of these mice displays several abnormalities, including decreased mitochondrial activity and increased levels of ROS [107]. This model can be enriched by RPE cells obtained from human induced pluripotent stem cells taken from AMD patients with a genetic susceptibility to this disease, as shown by Golestaneh et al., who also noted an important role of PGC-1α in AMD pathogenesis [108]. However, Saint-Geniez et al. showed that PGC-1α regulates VEGF in the retina and is required for normal and pathological neovascularization [109]. This important work confirms a significant role of PGC-1α in AMD pathogenesis and shows that tight regulation of this gene is crucial for retinal health and function. When antioxidants are used, it should not be assumed that their distribution inside the cell is homogeneous, as mitochondria may have their own specificity in accommodating antioxidants [110]. This could explain the discrepancies obtained in some clinical trials examining the influence of antioxidant supplementation on AMD occurrence and progression [111]. There is great diversity among the results on mtDNA oxidation reported by various laboratories: the results from different groups can differ by more than 60,000-fold [55]. Many conclusions and hypotheses on the central role of mitochondrial ROS and damage to mtDNA in AMD pathogenesis are based on the mitochondrial vicious cycle mechanism. This mechanism was also implicated in the free radical theory of aging [112]. As aging is the most serious risk factor for AMD, this implication may be of great significance for AMD pathogenesis. Contemporary considerations on the nature of aging rather assume that the main sources of mtDNA mutations are errors in mtDNA replication that escaped repair mechanisms due to their inefficiency (reviewed in reference [46]). Accumulation of such mutations in aging organisms occurs by clonal expansion and not by the vicious cycle, and ROS play mainly a signaling role, mediating the response to the accumulated damage to biomolecules associated with aging. Therefore, it may be important to specify the roles of "physiological" and accelerated aging in AMD pathogenesis. One way or another, mitochondria and mtDNA are central elements of contemporary theories of aging, so they are central elements of AMD pathogenesis.

Author Contributions: J.B. and K.K. conceived the concept of this manuscript and wrote the first draft version of it, which was then equally developed by J.B., E.P., J.S., A.J. and K.K.
Association between low handgrip strength and obesity with mortality in peritoneal dialysis patients

The association between sarcopenia and obesity in peritoneal dialysis (PD) patients is more complex than that in the general population. The aim of this study was, therefore, to evaluate the association of patient survival with sarcopenia or sarcopenic components and obesity in groups of patients with PD. We retrospectively analyzed a dataset from 199 prevalent PD patients. Measurements including handgrip strength (HGS), appendicular lean mass index, and baseline characteristics were obtained during the period of study. Patients were divided into four groups according to their HGS and obesity: NH-NO (normal HGS and non-obesity, n = 60), NH-O (normal HGS and obesity, n = 31), LH-NO (low HGS and non-obesity, n = 71), and LH-O (low HGS and obesity, n = 37). The median follow-up interval was 17 months. The Kaplan-Meier curve analysis showed that the LH-O group had the poorest patient survival outcome among the four groups (P < 0.001). The NH-NO group had a better patient survival outcome compared with the LH-NO group. Univariate and multivariate Cox regression analyses showed that the LH-O group had the highest mortality rate compared with the other groups. The NH-NO group had lower mortality compared with the LH-NO group. The present study demonstrated that obesity with low HGS was associated with the greatest mortality rate among groups defined by HGS and obesity.

Sarcopenia can be prevented by the presence of obesity as a source of energy storage in these patients 8. Previous studies have evaluated the clinical impact of sarcopenic obesity in dialysis patients, but the number of studies is limited and the results have not been consistent [5][6][7]. Furthermore, there is little data regarding the association between patient survival, as a hard outcome, and sarcopenic obesity in patients with PD. Further studies are required to identify the clinical impact and survival rates of such patients exhibiting sarcopenic obesity. The aim of this study was, therefore, to evaluate the association of patient survival with sarcopenia or sarcopenic components and obesity in groups of patients with PD.

Methods

Study population. We retrospectively re-analyzed a dataset from a previous study 9. Briefly, the data of all patients with PD who attended a tertiary medical center between September 2017 and November 2020 were collected, and written informed consent was obtained from all the patients. In addition, our study enrolled PD patients after ≥ 3 months of treatment without a history of maintenance hemodialysis or kidney transplantation before PD (n = 214). Clinical or laboratory data including body composition, strength, and patient survival were recorded. Among these, 15 patients were excluded due to missing data (n = 9) or an inability to ambulate due to an amputated limb (n = 6). Therefore, 199 patients undergoing PD were finally analyzed. Measurements including handgrip strength (HGS), appendicular lean mass (ALM) index, and baseline characteristics were obtained during a routine peritoneal membrane equilibration test during the period of study. Patients were divided into four groups according to their HGS and obesity: NH-NO (normal HGS and non-obesity), NH-O (normal HGS and obesity), LH-NO (low HGS and non-obesity), and LH-O (low HGS and obesity). The end point of follow-up measurements was December 2021. All mortality events were retrieved from patient medical records.
Patients with kidney transplantation, switch to hemodialysis, recovery of renal function, or transfer to other hospitals were defined as censored data at the end of PD. This study received ethical approval from the Yeungnam University Hospital Institutional Review Board and was conducted in accordance with the principles of the World Medical Association Declaration of Helsinki (Approval no: 2020-06-002). Baseline variables. We collected baseline data for age, sex, presence of diabetes mellitus (DM), use of automated PD, duration of dialysis (months), body mass index (BMI, kg/m²), weekly Kt/V_urea, C-reactive protein (CRP) level (mg/dL), urine volume (mL/day), serum calcium (mg/dL), phosphorus (mg/dL), sodium (mmol/L), potassium (mmol/L), serum albumin (g/dL), fasting blood glucose (mg/dL), systolic blood pressure (mmHg), diastolic blood pressure (mmHg), triglycerides (mg/dL), high-density lipoprotein levels (mg/dL), normalized protein equivalent for total nitrogen appearance (nPNA, g/kg/day), alkaline phosphatase (IU/L), and intact parathyroid hormone (pg/mL). All laboratory studies were performed after an overnight fast and dialysate drainage. DM was defined based on a patient-reported history of DM and its diagnosis on medical records, or use of DM medications. The dialysate-to-serum creatinine ratio at 4 h (DP4Cr) was obtained during the peritoneal membrane equilibration test, and weekly Kt/V_urea was calculated using 24-h urine and dialysate collections, as previously described 10. Assessment of body composition, strength, and metabolic syndrome. Body composition was measured using dual-energy X-ray absorptiometry (DEXA, Hologic, Madison, WI, USA). The measurements were performed after dialysate drainage, with the patients in a supine position, wearing a light gown. Lean mass and fat mass (FM) were measured using DEXA. The ALM index (kg/m²) was defined as the sum of lean mass in the upper and lower extremities divided by height squared. The total FM was defined as whole-body FM. HGS was measured in all the patients using a digital dynamometer (Takei 5401; Takei Scientific Instruments Co., Ltd, Niigata, Japan) and was performed according to an American Society of Hand Therapists protocol 11. First, the patient maintained an empty abdomen (drained dialysate) and a seated position. Second, the patient positioned the adducted shoulder without rotation, flexed the forearm to 90°, and extended the wrist 0-30°. Each patient performed three trials with the dominant hand. HGS was defined as the highest value among the three trials. In our study, low HGS was defined as < 28 kg for men and < 18 kg for women, based on the Asian Working Group for Sarcopenia criteria 12. Low muscle mass was defined in terms of an ALM index of < 7.0 kg/m² for men and < 5.4 kg/m² for women. In our study, low muscle mass cut-off values were defined based on Asian Working Group for Sarcopenia recommendations 12. The guideline defines appendicular muscle mass measurements as the appendicular skeletal muscle mass index from DEXA, but DEXA does not directly measure skeletal muscle. Nevertheless, ALM reportedly represents appendicular skeletal muscle, and the ALM index and appendicular skeletal muscle index were used interchangeably in previous studies to derive the relevant cut-off values in the guideline 12,13. Therefore, we defined low muscle mass using ALM index values. Patients with low HGS and low ALM index were classified as having sarcopenia.
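As a compact summary of these cut-offs, the following is a minimal sketch of the classification rule just described; the function and variable names are illustrative only, while the thresholds are those quoted above from the Asian Working Group for Sarcopenia.

```python
def classify_sarcopenia(sex: str, hgs_kg: float, alm_index: float) -> dict:
    """Apply the AWGS cut-offs quoted above: low HGS < 28 kg (men) /
    < 18 kg (women); low muscle mass = ALM index < 7.0 kg/m^2 (men) /
    < 5.4 kg/m^2 (women). Sarcopenia = low HGS and low muscle mass."""
    low_hgs = hgs_kg < (28.0 if sex == "M" else 18.0)
    low_muscle = alm_index < (7.0 if sex == "M" else 5.4)
    return {"low_hgs": low_hgs,
            "low_muscle_mass": low_muscle,
            "sarcopenia": low_hgs and low_muscle}

# Example: a man with HGS 25 kg and ALM index 6.5 kg/m^2 -> all flags True
print(classify_sarcopenia("M", 25.0, 6.5))
```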
Obesity was defined, based on previous studies, as a percentage of total FM per body weight > 27% for men and > 38% for women 3,14. Metabolic syndrome was defined using a modified version of metabolic syndrome for patients with PD 15. It was diagnosed when three or more of the following five components were observed: (1) systolic blood pressure ≥ 130 mmHg and/or diastolic blood pressure ≥ 85 mmHg, or drug treatment for hypertension; (2) serum triglycerides ≥ 150 mg/dL, or drug treatment for high triglycerides; (3) high-density lipoprotein level < 40 mg/dL in men or < 50 mg/dL in women, or drug treatment for low high-density lipoprotein; (4) fasting glucose level ≥ 100 mg/dL, or drug treatment for DM; (5) BMI > 25 kg/m². Statistical analysis. Data were presented as the mean ± standard deviation for data with a normal distribution, and median (interquartile range) for data with a non-normal distribution. Continuous variables with a non-normal distribution were compared using the Kruskal-Wallis test, and those with a normal distribution were compared using a one-way analysis of variance. The Bonferroni or Tukey post-hoc test was used in subgroup comparisons. Survival estimates were calculated using Kaplan-Meier and Cox regression analyses. In Kaplan-Meier analyses, P-values for the comparison of survival curves were determined using the log-rank test. Multivariate Cox regression analyses were adjusted for age and serum albumin based on the statistical significance obtained from the univariate analysis. The proportional hazard assumption was satisfied for all the variables. In addition, we performed competing risk analyses to decrease effects from censored data. For the competing risk analyses, we defined censored cases as competing risks and performed the Fine and Gray competing risk model. A P-value < 0.05 was considered statistically significant. Results Participants' clinical characteristics. The numbers of NH-NO, NH-O, LH-NO, and LH-O patients were 60 (30.2%), 31 (15.6%), 71 (35.7%), and 37 (18.6%), respectively (Table 1). Groups with low HGS were older than those with normal HGS, and the LH-O group had the oldest participants among the four groups. Groups with obesity had a greater proportion of males than those without obesity. Groups with low HGS had greater proportions of DM than those with normal HGS. Groups with obesity had greater BMI than those without obesity. Groups with obesity had greater CRP levels than the NH-NO group. Serum albumin levels were greater in groups with normal HGS than in those with low HGS. Diastolic blood pressure was lower in the LH-O group than in the NH-NO and NH-O groups. The triglyceride level was highest in the NH-O group. There were no significant differences in the remaining variables among the four groups (Table 1). The median follow-up interval was 17 months. The 18-month survival rate in patients with normal or low HGS was 97.8 and 79.7%, respectively (P < 0.001). That in patients with or without obesity was 79.8 and 92.8%, respectively (P = 0.030). Patient survival was poorer in patients with low HGS or with obesity compared to those with normal HGS or without obesity. These results reveal that the combination of HGS and obesity is useful in discriminating patient survival in patients with low HGS (Fig. S1). The 18-month survival rate was 98.3% in the NH-NO group, 96.8% in the NH-O group, 87.3% in the LH-NO group, and 65.2% in the LH-O group.
The Kaplan-Meier curve analysis showed that the LH-O group had the poorest patient survival outcome among the four groups (Fig. 2, P < 0.001). The NH-NO group had a better patient survival outcome compared with the LH-NO group. Statistical significance among groups defined according to the ALM index, or to sarcopenia and obesity, was lower than that among groups defined according to HGS and obesity (Fig. 3). A univariate Cox regression analysis showed that the LH-O group had the highest mortality rate compared with the other groups (Table 2). The NH-NO group had lower mortality compared with the LH-NO group. Multivariate Cox regression analysis showed the same trends as the univariate analysis. Subgroup analyses according to age, sex, and the presence of DM showed that the LH-O group had the highest mortality rate at the end-point of follow-up in all the subgroups except that of older patients (Table 3). Furthermore, results from the competing risk analysis were similar to those from the Cox regression analyses performed using the total cohort (Table S1). Association between metabolic syndrome components or obesity and mortality. Among patients with metabolic syndrome, 56 (46.7%) had obesity based on FM and 70 (58.3%) had obesity based on BMI. The proportions of obesity based on the two definitions differed. We analyzed patient survival according to high FM (total FM per weight > 27% for men and > 38% for women) based on DEXA measurements and high BMI (> 25 kg/m²) based on anthropometric variables. There were 100 (50.3%) patients with a low FM and a low BMI. Kaplan-Meier curves showed that patients with a low FM and high BMI had better survival than those with a high FM (Fig. S2). Furthermore, we analyzed patient survival according to the presence of metabolic syndrome components. There was no significant intergroup difference in patient survival based on triglycerides, high-density lipoprotein, glucose, or BMI (Fig. S3). For the blood pressure component, patients with high blood pressure had better survival than those with low blood pressure. Discussion Our study included 199 prevalent PD patients and evaluated their body composition, HGS, and mortality. We divided the patients into various groups based on sarcopenia, sarcopenic components, or obesity. Our study showed that the LH-O group had the poorest patient survival rate among the four groups. Groups defined according to the ALM index, or to sarcopenia and obesity, were poorer in predicting prognosis compared to those defined using HGS and obesity. Univariate and multivariate Cox regression analyses showed that the LH-O group had the highest mortality rate compared to the other groups (Table 2). Results from the subgroup analyses according to age, sex, and the presence of DM were similar to those from the Cox regression analyses. Our results revealed that low HGS was associated with patient mortality, regardless of the presence of obesity, but the influence of obesity differed between patients with low and normal HGS. Obesity was not associated with patient survival in patients with normal HGS. Obesity is a well-known risk factor for cardiovascular disease and mortality, but reverse epidemiology is also a well-known phenomenon, in which obesity is paradoxically associated with a favorable outcome in patients with chronic diseases 8.
A time discrepancy between obesity and obesity-related hazard outcomes exists, and obesity has a favorable effect with respect to protein-energy wasting in patients with chronic diseases over short-term follow-up 16. Considering the relatively short-term follow-up of our study and the normal HGS in the NH-NO and NH-O groups, one cannot be certain of either the favorable or the hazardous effects of obesity. If patients in the NH-NO or NH-O groups do not develop low HGS during further follow-up, one may deduce that obese patients have a poor survival outcome. On the other hand, if patients without obesity develop low HGS later, one may deduce that obese patients have a better survival outcome. In our study, the association between obesity and metabolic syndrome was evaluated to identify the hazardous effect of obesity. We analyzed metabolic syndrome, and the number of its components, according to the four groups, and patients with obesity had a greater association with metabolic syndrome compared to those without obesity. There are inconsistent findings regarding the effect of obesity on patient survival in dialysis patients 16,17. These inconsistencies have been explained by factors such as inaccuracies in fat mass measurements and differing effects according to dialysis vintage. In our study, obesity was evaluated using DEXA, the gold-standard method for predicting fat mass, and prevalent PD patients were enrolled. In our study, obesity was inversely associated with patient survival in patients with low HGS. Although our study did not show an association between metabolic syndrome and mortality, the presence of obesity would exert a hazardous cardio-metabolic effect in patients with low HGS, which may influence patient survival. Obesity is associated with long-term survival, and it is expected that its cardio-metabolic effects would not be great during short-term follow-up. However, obesity can act as an additive effect in patients with sarcopenia, an important comorbidity associated with mortality, and patients with sarcopenia might be more vulnerable to metabolic complications than those without sarcopenia. This may lead to early hazardous effects of obesity in these patients. Although our study did not evaluate biologic markers or follow-up data, previous studies have shown that the hazardous effects of obesity develop through various underlying mechanisms, such as aggravation of sarcopenia or chronic inflammation 18. There are different definitions for the diagnosis of sarcopenic obesity, and no definite consensus on optimal cut-off values or measurements for predicting muscle mass or obesity has been reached [3][4][5][6][7]15,19,20. Some studies using different definitions have evaluated the clinical impact or prevalence of sarcopenic obesity in patients undergoing dialysis. Melhotra et al. evaluated 122 patients undergoing hemodialysis and defined sarcopenic obesity using the ALM index and percentage of fat mass from DEXA 5. They did not identify an association between sarcopenic obesity and mortality in these patients. Honda et al. evaluated 328 patients undergoing dialysis, assessing body composition using DEXA 6. They showed an inverse association between fat mass and mortality. However, there was no association observed between lean body mass and mortality in their study 7. Use of the ALM index from DEXA could be influenced by volume status. Volume overloading may lead to an overestimation of ALM and an underestimation of sarcopenia.
This would attenuate the association between the clinical outcome of sarcopenia and sarcopenic obesity, defined using the ALM index from DEXA 21. Our results also showed a weaker association between the low muscle mass group defined using the ALM index and mortality, compared with groups defined using HGS. These results are in line with the results from Melhotra's study 5. A previous study defined sarcopenic obesity based on low HGS and obesity and showed a positive association between sarcopenic obesity and cardiovascular risk factors in patients with PD 7. However, the body composition measurements were evaluated using bioimpedance, and that study did not have data for survival rates as hard outcomes. Obesity is a well-known risk factor for mortality in various populations, but metabolic syndrome, which includes obesity as a component, was not associated with mortality. In our study, the lack of an association between metabolic syndrome including BMI and mortality may be explained by two issues. First, patients defined as having obesity based on BMI may be associated with favorable outcomes through a high muscle mass. In our study, a high BMI was the obesity component of metabolic syndrome. However, some patients with a high BMI would have high muscle mass and favorable survival. In contrast, some patients with a low BMI would have a low muscle mass and poor survival. In our study, patients with a low FM and high BMI had better survival than those with a high FM and a high or low BMI, which suggests that a definition of obesity using body composition measurements may be better than one using BMI in populations at high risk of protein-energy wasting, such as PD patients. Second, the reverse epidemiology of nutritional status and blood pressure may be associated with a protective effect of some components of metabolic syndrome among PD patients. The reverse epidemiology phenomenon has shown that patients with high cholesterol or blood pressure levels have better survival than those with low cholesterol or blood pressure levels 22. Our results also showed better survival among patients with high versus low blood pressure. In addition, there were no significant differences in patient survival according to triglyceride or high-density lipoprotein levels. Although further evaluation of the association between metabolic syndrome and mortality is beyond the scope of this study, the non-association between metabolic syndrome and mortality in PD patients may reflect a mixture of hazardous and protective effects. Our study has some advantages over previous studies. Patients with PD are persistently prone to volume overload compared with hemodialysis patients, who have a definite dry weight 23. Therefore, the DEXA-based ALM index is inherently biased toward overestimation, and HGS, as a volume-independent measurement, may be more accurate. Our study also did not show significant differences in the ALM index between the NH-NO and the LH-NO or LH-O groups, despite differences in muscle strength, and this may be associated with volume effects. We compared patient survival rates according to different definitions using the ALM index, HGS, or sarcopenia. HGS was the best predictor of mortality. In addition, our study evaluated the body composition measurements of patients with PD using DEXA, a gold-standard method of fat mass estimation 24. A definition of obesity using DEXA may be superior to definitions based on BMI, bioimpedance, or waist circumference in patients with PD.
The BMI does not differentiate between fat mass and muscle mass. The bioimpedance method is not the gold standard for predicting fat mass in patients with PD. Waist circumference can also be influenced by peritoneal dialysate. Finally, we evaluated patient survival as a hard outcome. This study has several limitations. The first limitation is the study design, which was single-center and retrospective in nature. Second, our study was limited by a small sample size and differences in baseline characteristics. The ALM index and HGS in the NH-O group were greater than those in the LH-NO or LH-O groups, which may be attributable to the male predominance in the NH-O group. However, we suggest that disproportions in sex and DM would not influence the results of the survival analyses, owing to the use of sex-specific cut-off values for classification. We included subgroup analyses to overcome this limitation, and the LH-O group had the highest mortality rate at the end-point of follow-up in all the subgroups except that of older patients. In addition, the numbers of patients in the groups were disproportionate, and those in the NH-O and LH-O groups were especially small. These factors limited the full adjustment of covariates, and we performed limited multivariate Cox regression analyses without adjustment for all confounding factors. Therefore, to select confounding factors for adjustment in the multivariate analyses, we excluded variables without statistical significance in the univariate Cox regression analyses. Further studies using more covariates and matched propensity scores are required to overcome this limitation. Third, our study used ALM index and HGS measurements at a single time-point, without follow-up data. Finally, our study did not include some important variables, such as physical performance. The present study demonstrated that obesity with a low HGS was associated with the greatest mortality rate among groups defined by HGS and obesity. Therefore, clinicians may evaluate patients by routinely assessing HGS, a volume-independent method, to predict patient survival, and by including additional discriminating factors related to obesity in patients with PD. Data availability All data generated or analyzed during this study are included in this published article and its Supplementary Information files.
Accelerating the Computation of UCB and Related Indices for Reinforcement Learning

In this paper we derive an efficient method for computing the indices associated with an asymptotically optimal upper confidence bound algorithm (MDP-UCB) of Burnetas and Katehakis (1997) that only requires solving a system of two non-linear equations with two unknowns, irrespective of the cardinality of the state space of the Markovian decision process (MDP). In addition, we develop a similar acceleration for computing the indices for the MDP-Deterministic Minimum Empirical Divergence (MDP-DMED) algorithm developed in Cowan et al. (2019), based on ideas from Honda and Takemura (2011), which involves solving a single equation of one variable. We provide experimental results demonstrating the computational time savings and regret performance of these algorithms. In these comparisons we also consider the Optimistic Linear Programming (OLP) algorithm (Tewari and Bartlett, 2008) and a method based on posterior sampling (MDP-PS).

Introduction

The practical use of the asymptotically optimal UCB algorithm (MDP-UCB) of Burnetas and Katehakis (1997) has been hindered (Tewari and Bartlett, 2008; Auer and Ortner, 2007) by the computational burden of the upper confidence bound indices, c.f. Eq. (2), which involve the solution of a non-linear constrained optimization problem of dimension equal to the cardinality of the state space of the Markovian decision process (MDP) under consideration. In this paper we derive an efficient computational method that only requires solving a system of two non-linear equations with two unknowns, irrespective of the cardinality of the state space of the MDP. In addition, we develop a similar acceleration for computing the indices for the MDP-Deterministic Minimum Empirical Divergence (MDP-DMED) algorithm developed in Cowan et al. (2019), which involves solving a single equation of one variable. In Section 4 we present these computationally efficient formulations and provide experimental results demonstrating the computational time savings. In addition to the papers upon which the algorithms here are explicitly based, there are many other approaches for adaptively learning MDPs while minimizing expected regret. Jaksch et al. (2010) propose an algorithm, UCRL2, a variant of the UCRL algorithm of Auer and Ortner (2007), that achieves logarithmic regret asymptotically, as well as uniformly over time. UCRL2 defines a set of plausible MDPs and chooses a near-optimal policy for an optimistic version of the MDP through so-called "extended value iteration". This approach, while similarly optimistic in flavor, is sufficiently different from the algorithms presented here that we will not be comparing them directly. The algorithms in this paper act upon the estimated transition probabilities of actions for only our current state, for a fixed estimated MDP. Specifically, MDP-UCB and OLP inflate the right hand side of the optimality equations by perturbing the estimated transition probabilities for actions in the current state. MDP-DMED estimates the rates at which actions should be taken by exploring nearby plausible transition probabilities for actions in the current state. Finally, MDP-PS obtains posterior sampled estimates, again only for the transition probabilities of actions in the current state. Recently, Efroni et al.
(2019) show that model-based algorithms (as are all the algorithms discussed here) that use 1-step planning can achieve the same regret performance as algorithms that perform full planning. This allows for a significant decrease in the computational complexity of the algorithms. In particular, they propose UCRL2-GP, which uses a greedy policy instead of solving the MDP, as in UCRL2, at the beginning of each episode. They find that this policy matches UCRL2 in terms of regret (up to constant and logarithmic factors), while benefiting from decreased computational complexity. The setting under consideration, however, is a finite-horizon MDP, and the regret bounds are stated in PAC terms (Dann et al., 2017) and minimax-optimal terms (Osband and Van Roy, 2016). Further analysis is required to transfer these results to the setting of this paper, namely an infinite-horizon MDP with bounds on the asymptotic growth rate of the expected regret. A fruitful direction of study would be to examine more closely the relationship between UCRL2-GP, UCRL2, and the algorithms presented here, paying particular attention to the varying dependencies on the dimensionality of the state space. Osband and Van Roy (2017) analyze and compare the expected regret and computational complexity of PS-type algorithms (PSRL therein) versus UCB-type (OFU therein) algorithms, in the setting of finite-horizon MDPs. The PSRL algorithm presented there is similar to MDP-PS here. However, their optimistic inflation or stochastic optimism is done across the MDP as a whole, either over plausible MDPs in the case of OFU, or for a fixed MDP in the PSRL case. By contrast, in this paper we present non-episodic versions where the inflations are done only for the actions of our current state, for a fixed estimated MDP. They also argue therein that any OFU approach which matches PSRL in regret performance will likely result in a computationally intractable optimization problem. Through that lens, the main result of this paper, a computationally tractable version of the optimization problem, shows that a provably asymptotically optimal UCB approach can in fact compete with a PS approach both in terms of regret performance and computational complexity. A more thorough analysis is required in order to determine what parts of our analysis here, with an undiscounted infinite-horizon MDP, can carry over to the finite-horizon MDP setting of Osband and Van Roy (2017) and Osband and Van Roy (2016). As this is a fast growing area of research, there is a lot of recent work. A good resource for reinforcement learning problems and their potential solution methods is Bertsekas (2019). For a more bandit-focused approach, Lattimore and Szepesvári (2018) has a nice overview of the current state of the art. Most directly relevant to this paper are Chapters 8, 10, and 38 therein. Cesa-Bianchi and Lugosi (2006) discuss online learning while minimizing regret for predicting individual sequences of various forms, with Chapter 6 (bandit related problems) therein being most relevant here. For other related early work we refer to Mandl (1974), Borkar and Varaiya (1982), Agrawal et al. (1988a), and Agrawal et al. (1988b).

Paper Structure

The paper is organized as follows. In Section 2 we formulate the problem under consideration first as a completely known MDP and then as an MDP with unknown transition laws. In Section 3 we present four simple algorithms for adaptively optimizing the average reward in an unknown irreducible MDP.
The first is the asymptotically optimal UCB algorithm (MDP-UCB) of Burnetas and Katehakis (1997), which uses estimates for the MDP and chooses actions by maximizing an inflation of the estimated right hand side of the average reward optimality equations. The second (MDP-DMED) is inspired by the DMED method for the multi-armed bandit problem developed in Honda and Takemura (2010, 2011), and estimates the optimal rates at which actions should be taken and attempts to take actions at those rates. The third is the Optimistic Linear Programming (OLP) algorithm (Tewari and Bartlett, 2008), which is based on MDP-UCB but, instead of using the KL divergence to inflate the optimality equations, uses the L_1 norm. The fourth (MDP-PS) is based on ideas of greedy posterior sampling that go back to Thompson (1933), and is similar to PSRL in Osband and Van Roy (2017). The main contribution of this paper is in Section 4, where we present the efficient formulations and demonstrate the computational time savings. Various computational challenges and simplifications are discussed, with the goal of making these algorithms practical for broader use. In Section 5 we compare the regret performance of these algorithms in numerical examples and discuss the relative advantages of each. While no proofs of optimality are presented, results of numerical experiments demonstrating the efficacy of these algorithms are given. Proofs of optimality for these algorithms will be discussed in future works.

Formulation

Reinforcement learning problems are commonly expressed in terms of a controllable, probabilistic, dynamic system, where the dynamics must be learned over time. The classical model for this is that of a discrete time, finite state and action Markovian decision process (MDP). See, for example, Derman (1970) and Auer and Ortner (2007). In particular, learning is necessary when the underlying dynamics (the transition laws) are unknown, and must be learned by observing the effects of actions and transitions of the system over time. A finite MDP is specified by a quadruple (S, A, R, P), where S is a finite state space, A = [A(x)]_{x∈S} is the action space, with A(x) being the (finite) set of admissible actions (or controls) in state x, R = [r_{x,a}]_{x∈S, a∈A(x)} is the expected reward structure, and P = [p^a_{x,y}]_{x,y∈S, a∈A(x)} is the transition law. Here r_{x,a} and p^a_{x,y} are, respectively, the one-step expected reward and the transition probability from state x to state y under action a. For extensions regarding state and action spaces and continuous time we refer to Feinberg et al. (2016) and references therein. When all elements of (S, A, R, P) are known, the model is said to be an MDP with complete information (CI-MDP). In this case, optimal policies can be obtained via the appropriate version of Bellman's equations, given the prevailing optimization criterion, state, action, time conditions and regularity assumptions; c.f. Feinberg et al. (2016), Robbins (1952). When some of the elements of (S, A, R, P) are unknown, the model is said to be an MDP with incomplete or partial information (PI-MDP). This is the primary model of interest for reinforcement learning, when some aspect of the dynamics must be learned through interaction with the system. For the body of the paper, we consider the following partial information model: the transition probability vector p^a_x = [p^a_{x,y}]_{y∈S} is taken to be an element of the parameter space Θ, that is, the space of all |S|-dimensional probability vectors.
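For concreteness, the following is a minimal sketch of how the quadruple (S, A, R, P) can be laid out in code; it assumes, purely for simplicity, that every state admits the same number of actions, and the array names are ours, not the paper's.

```python
import numpy as np

# A finite MDP (S, A, R, P) with |S| = 3 states and 2 actions per state,
# stored as dense arrays (illustrative shapes only):
#   R[x, a]    = expected one-step reward r_{x,a}
#   P[x, a, y] = transition probability p^a_{x,y}
n_states, n_actions = 3, 2
R = np.zeros((n_states, n_actions))
P = np.full((n_states, n_actions, n_states), 1.0 / n_states)

# Each P[x, a] must lie in the parameter space Theta, i.e., be a
# probability vector over S:
assert np.allclose(P.sum(axis=2), 1.0)
```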
The assertion of this parameter space deserves some unpacking. It is at first simply a theoretical convenience: it ensures that for any control policy, the resulting Markov chain is irreducible. It also represents a complete lack of prior knowledge about the transition dynamics of the MDP. Knowing that certain state-state transitions are impossible requires prior model-specific knowledge (such as knowing the rules of chess). Learning based purely on finite observed data could never conclude that a given transition probability is zero. Thus, we assert a uniform Bayesian prior on the transition probabilities, and therefore the prior likelihood associated with any zero transition probability is 0. In this way, asserting this parameter space at the outset represents a fairly agnostic initial view of the underlying learning problem. A possible future direction of study is to examine how to efficiently incorporate prior knowledge, for instance by modifying the specified parameter space, into the learning process without compromising on the learning rate. Killian et al. (2017) and Doshi-Velez and Konidaris (2016) discuss hidden parameterized transition models, for example, which leverage additional prior knowledge about the transition probability space. In the body of this paper, we take this unknown transition law to be the only source of incomplete information about the underlying MDP; the reward structure R = [r_{x,a}]_{x∈S, a∈A(x)} is taken to be known (at least in expectation), and constant. Most of the discussed algorithms will generalize to the situation where the distribution of rewards must also be learned, but we reserve this for future work. Under this model, we define a sequence of state-valued random variables X_1, X_2, X_3, ... representing the sequence of states of the MDP (taking X_1 = x_1 as a given initial state), and action-valued random variables A_1, A_2, ..., with action A_t being taken by the controller at time t when the MDP is in state X_t. It is convenient to define a control policy π as a (potentially random) history-dependent sequence of actions such that π(t) = π(X_1, A_1, ..., X_{t−1}, A_{t−1}, X_t) = A_t ∈ A(X_t). We may then define the value of a policy as the total expected reward over a given horizon of action:

V^π(T) = E^π[ Σ_{t=1}^{T} r_{X_t, A_t} ].

Let Π be the set of all feasible MDP policies π. We are interested in policies that maximize the expected reward from the MDP. In particular, policies that are capable of maximizing the expected reward irrespective of the initial uncertainty that exists about the underlying MDP dynamics (i.e., for all possible P under consideration). It is convenient then to define V(T) = sup_{π∈Π} V^π(T). We may then define the "regret" as the expected loss due to ignorance of the underlying dynamics,

R^π(T) = V(T) − V^π(T).

Note, V, V^π, R^π all have an implicit dependence on P, through the dynamics of the states and the effects of the actions. We are interested in uniformly fast (Burnetas and Katehakis, 1997) policies, i.e., policies π that achieve R^π(T) = O(ln T) for all feasible transition laws P. In this case, despite the controller's initial lack of knowledge about the underlying dynamics, she can be assured that her expected loss due to ignorance grows not only sub-linearly over time, but slower than any power of T. It is shown in Burnetas and Katehakis (1997) that any uniformly fast policy has a strict lower bound of logarithmic asymptotic growth of regret, with the unknown transition law P and reward structure R only influencing the order coefficient, not the growth rate.
Policies that achieve this lower bound are called asymptotically optimal, c.f. Burnetas and Katehakis (1997). As final notation, it is convenient to define the specific data available at any point in time, under a given (understood) policy π: let T_x(t), T^a_x(t), T^a_{x,y}(t) be, respectively, the number of visits to state x, the uses of action a in state x, and the transitions from x to y under action a, that are observed in the first t rounds. In the next subsection, we consider the case of the controller having complete information (the best possible case) and use this to motivate notation and machinery for the remainder of the paper. The body of the paper is devoted to presenting and discussing four computationally simple algorithms that are either provably asymptotically optimal, or at least appear to be. While no proofs of optimality are presented, results of numerical experiments demonstrating the efficacy of these algorithms are given. Proofs of optimality for these algorithms will be discussed in future works.

The Optimal Policy Under Complete Information

Classical results (Burnetas and Katehakis, 1997) show that there is a stationary, deterministic policy π (each action depends only on the current state) that realizes the maximal long term average expected value. That is, a simple Markovian policy π* that realizes

φ*(A, P) = sup_{π∈Π} lim_{T→∞} V^π(T)/T.

We may characterize this optimal policy in terms of the solution for φ = φ*(A, P) and v = v(A, P) of the following system of optimality equations:

v_x + φ = max_{a∈A(x)} { r_{x,a} + Σ_{y∈S} p^a_{x,y} v_y }, for all x ∈ S. (1)

Given the solution φ and vector v to the above equations, the asymptotically optimal policy π* can be characterized as: whenever in state x ∈ S, take any action a which realizes the maximum in Eq. (1). We denote the set of such asymptotically optimal actions as O(x, P). In general, a*(x, P) should be taken to denote an action a* ∈ O(x, P). Note, realizing this policy necessarily requires knowledge of P and R, in order to solve the system of optimality equations. The solution φ above represents the maximal long term average expected reward of an optimal policy. The vector v, or more precisely, v_x for any x ∈ S, represents in some sense the immediate value of being in state x relative to the long term average expected reward. The value v_x essentially encapsulates the future opportunities for value available due to being in state x.

Optimal Policies Under Unknown Transition Laws

The results of the previous section show that V(T), the value of the optimal policy, goes approximately like V(T) ≈ φ*T. We begin by characterizing the regret of any arbitrary policy π, comparing its value relative to this baseline. It will be convenient in what is to follow to define the following notation:

L(x, a, q, v) = r_{x,a} + Σ_{y∈S} q_y v_y.

The function L represents the value of a given action in a given state, for a given transition vector: both the immediate reward, and the expected future value of whatever state the MDP transitions into. The value of an asymptotically optimal action for any state x is thus given by L*(x, A, P) = L(x, a*(x, P), p^{a*(x,P)}_x, v(A, P)). It can be shown that the "expected loss" due to an asymptotically sub-optimal action, taking action a ∉ O(x, P) when the MDP is in state x, is in the limit given by

∆(x, a, A, P) = L*(x, A, P) − L(x, a, p^a_x, v(A, P)).
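As an illustration of Eq. (1), here is a minimal sketch that solves the optimality equations by relative value iteration. The paper itself points to linear programming formulations (see Section 4), so this solver, and the aperiodicity it implicitly assumes, are our own choices rather than the authors' method.

```python
import numpy as np

def relative_value_iteration(R, P, tol=1e-9, max_iter=100_000):
    """Solve v_x + phi = max_a { r_{x,a} + sum_y p^a_{x,y} v_y } by relative
    value iteration, pinning v at a reference state (state 0) to fix the
    additive constant. Assumes the optimal chain is aperiodic; otherwise a
    damped update would be needed. R: (|S|,|A|), P: (|S|,|A|,|S|)."""
    v = np.zeros(R.shape[0])
    phi = 0.0
    for _ in range(max_iter):
        tv = (R + P @ v).max(axis=1)   # (Tv)_x = max_a { r_{x,a} + sum_y p^a_{x,y} v_y }
        phi = tv[0]                    # gain estimate from the reference state
        v_new = tv - phi               # keep v bounded: v_new[0] = 0
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = (R + P @ v).argmax(axis=1)
    return phi, v, policy
```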
In the general (partial or complete information) case, it is shown in Burnetas and Katehakis (1997) that the regret of a given policy π ∈ Π can be expressed asymptotically as

R^π(T) = Σ_{x∈S} Σ_{a∉O(x,P)} ∆(x, a, A, P) E^π[T^a_x(T)] + o(T).

Note, the above formula justifies the description of ∆(x, a, A, P) as the "average loss due to sub-optimal activation of a in state x". Additionally, from the above it is clear that in the case of complete information, when P is known and therefore the asymptotically optimal actions are computable, the total regret at any time T is bounded by a constant. Any expected loss at time T is due only to finite horizon effects. In general, for the unknown transition laws case, we have the following bound due to Burnetas and Katehakis (1997), for any uniformly fast policy π,

lim inf_{T→∞} R^π(T)/ln T ≥ Σ_{x∈S} Σ_{a∉O(x,P)} ∆(x, a, A, P)/K_{x,a}(P),

where K_{x,a}(P) represents the minimal Kullback-Leibler divergence between p^a_x and any q ∈ Θ such that substituting q for p^a_x in P renders a the unique optimal action for x. Recall, the Kullback-Leibler divergence is given by I(p, q) = Σ_{x∈S} p_x ln(p_x/q_x). This is equivalent to stating that any sub-optimal action must be sampled at least at a minimum rate; in particular, for a ∉ O(x, P),

lim inf_{T→∞} E^π[T^a_x(T)]/ln T ≥ 1/K_{x,a}(P).

This can be interpreted in the following way: for a sub-optimal action, the "closer" the transition law is to an alternative transition law that would make it the best action, the more data we need to distinguish between the truth and this plausible alternative hypothesis, and therefore the more times we need to sample the action to distinguish the truth. Anything less than this "base rate", we risk convincing ourselves of a plausible, sub-optimal hypothesis and therefore incurring high regret when we act on that belief. Policies that achieve this lower bound, for all P, are referred to as asymptotically optimal. Achieving this bound, or at least the desired logarithmic growth, requires careful exploration of actions. In the next section, we present four algorithms to accomplish this.

Algorithms for Optimal Exploration

Common RL algorithms solve the exploration/exploitation dilemma in the following way: most of the time, select an action (based on the current data) that seems best, otherwise select some other action. This alternative action selection is commonly done uniformly at random. As long as this is done infrequently, but not too infrequently, the optimal actions and policy will be discovered, potentially at the cost of high regret. Minimizing regret requires careful consideration of which alternative actions are worth taking at any given point in time. The following algorithms are methods for performing this selection; essentially, instead of blindly selecting from the available actions to explore, each algorithm evaluates the currently available data to determine which action is most worth exploring. Each accomplishes this through an exploration of the space of plausible transition hypotheses. The benefit of this is that through careful exploration, optimal (minimal) regret can be achieved. The cost however, is additional computation. The set of alternative transition laws is large and high dimensional, and can be difficult to work with. In Section 4 we show several simplifications, however, that make this exploration practical.
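Before turning to the algorithms, a small sketch may make the lower bound above concrete: given per-(state, action) values of ∆ and K, it computes the asymptotic regret slope and the implied minimum sampling counts. The numbers are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

# Illustrative (made-up) loss and divergence values for the sub-optimal
# actions of a toy MDP:
delta = {("x1", "a2"): 0.30, ("x2", "a1"): 0.10}   # Delta(x, a, A, P)
K     = {("x1", "a2"): 0.05, ("x2", "a1"): 0.02}   # K_{x,a}(P)

# Asymptotic slope of the lower bound: R(T) >= slope * ln(T) + o(ln T)
slope = sum(delta[sa] / K[sa] for sa in delta)

# Minimum expected sampling counts of each sub-optimal action by time T
T = 10_000
min_counts = {sa: np.log(T) / K[sa] for sa in K}
print(slope, min_counts)
```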
A UCB-Type Algorithm for MDPs Under Uncertain Transitions

Classical upper confidence bound (UCB) decision algorithms (for instance, as in multi-armed bandit problems, c.f. Lai and Robbins (1985), Burnetas and Katehakis (1996), Cowan et al. (2017)) approach the problem of exploration in the following way: in each round, given the current estimated transition law, we consider "inflated" estimates of the values of each action, by finding the best (value-maximizing) plausible hypothesis within some confidence interval of the current estimated transition law. The more data that is available for an action, the more confidence there is in the current estimate, and the tighter the confidence interval becomes; the tighter the confidence interval becomes, the less exploration is necessary for that action. The algorithm we present here is a version of the MDP-UCB algorithm presented in Burnetas and Katehakis (1996). At any time t ≥ 1, let x_t be the current (given) state of the MDP. We construct the following estimators:

• Transition Probability Estimators: for each state y and action a ∈ A(x_t), construct

p̂^a_{x_t,y}(t) = (T^a_{x_t,y}(t) + 1) / (T^a_{x_t}(t) + |S|).

Note the biasing terms (the 1 in the numerator, |S| in the denominator). Including these biases the estimated transition probabilities away from 0, so that our estimates p̂^a_{x_t} will be in Θ. Additionally, these guarantee that the above is in fact the maximum likelihood estimate for the transition probability, given the observed data and uniform priors.

• "Good" Action Sets: construct the subset Â_t of the available actions A(x_t), consisting of those actions that have been sampled sufficiently often. The set Â_t represents the actions available from state x_t that have been sampled frequently enough that the estimates of the associated transition probabilities should be "good". In the limit, we expect that sub-optimal actions will be taken only logarithmically, and hence for sufficiently large t, Â_t will contain only actions that are truly optimal. If no actions have been taken sufficiently many times, we take Â_t = A(x_t) to prevent it from being empty.

• Value Estimates: having constructed these estimators, we compute φ̂_t = φ(Â_t, P̂_t) and v̂_t = v(Â_t, P̂_t) as the solution to the optimality equations in Eq. (1), essentially treating the estimated probabilities as correct and computing the optimal values and policy for the resulting estimated MDP.

At this point, we implement the following decision rule: for each action a ∈ A(x_t), we compute the following index over the set of possible transition laws:

u_a(t) = sup_{q∈Θ} { L(x_t, a, q, v̂_t) : I(p̂^a_{x_t}(t), q) ≤ ln(t)/T^a_{x_t}(t) }, (2)

where I(p, q) = Σ_y p_y ln(p_y/q_y) is the Kullback-Leibler divergence, and take action

π(t) = arg max_{a∈A(x_t)} u_a(t).

This is a natural extension of several classical KL-divergence based UCB algorithms for the multi-armed bandit problem, c.f. Lai and Robbins (1985), Burnetas and Katehakis (1996), Cowan et al. (2017), taking the view of the L function as the 'value' of taking a given action in a given state, estimated with the current data. In Burnetas and Katehakis (1996), a modified version of the above algorithm is in fact shown to be asymptotically optimal. The modification is largely for analytical benefit, however; the pure index algorithm as above shows excellent performance, c.f. Figure 3. Further discussion of the performance of this algorithm is given in Section 5. An important and legitimate concern for the practical usage of the MDP-UCB algorithm that has been noted in Tewari and Bartlett (2008), among others, is actually calculating the index in Eq. (2). This and other issues are discussed in more depth in Section 4, where a computationally efficient formulation is presented. Additionally, in Section 5, we highlight beneficial behavior of this algorithm that makes it worth pursuing.
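As a small implementation note, the biased estimator above is just Laplace smoothing of the observed counts (equivalently, the posterior mean under a uniform Dirichlet prior). A minimal sketch, with a function name of our own choosing:

```python
import numpy as np

def smoothed_transition_estimate(counts):
    """Biased estimate (T^a_{x,y} + 1) / (T^a_x + |S|) from the observed
    transition counts T^a_{x,y} for a fixed state x and action a, so the
    estimate stays strictly inside Theta."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 1.0) / (counts.sum() + counts.size)

# Example: 3 states, observed transitions [5, 0, 2] from (x, a)
print(smoothed_transition_estimate([5, 0, 2]))   # -> [0.6, 0.1, 0.3]
```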
A Deterministic Minimum Empirical Divergence Type Algorithm for MDPs Under Uncertain Transitions

In the classical DMED algorithm for multi-armed bandit problems (Honda and Takemura, 2010), rather than considering (inflated) values for each action to determine which should be taken, DMED attempts to estimate how often each action ought to be taken. Recall the interpretation of Burnetas and Katehakis (1996) given previously, that for any uniformly fast policy π, for any sub-optimal action a ∉ O(x, P) we have

lim inf_{T→∞} E^π[T^a_x(T)]/ln T ≥ 1/K_{x,a}(P),

where K_{x,a}(P) measures (via the Kullback-Leibler divergence) how much the transition law for action a would need to be changed to make action a optimal. DMED proceeds by the following reasoning. If we estimate that the sub-optimal action a is close to being optimal (low K_{x,a}), make sure we take it often enough to differentiate between them (ensure T^a_x is high). If, on the other hand, we estimate that the sub-optimal action a is far from being optimal (high K_{x,a}), we don't need to take it as often (ensure T^a_x is low). As with the MDP-UCB and OLP algorithms, this requires an exploration of the possible transition laws "near" the current estimated transition law. In general, computing the function K_{x,a}(P) is not easy. We consider the following substitute, then:

K̃_{x,a}(P) = inf_{q∈Θ} { I(p^a_x, q) : L(x, a, q, v(A, P)) ≥ L*(x, A, P) }.

This is akin to exploratory policy iteration. That is, determining, based on the current value estimates, how much modification would produce an improving action. The function K measures how far the transition vector associated with x and a must be perturbed (under the KL-divergence) to make a the optimal action for x. The function K̃ measures how far the transition vector associated with x and a must be perturbed (under the KL-divergence) to make the value of a, as measured by the L-function, no less than the value of an optimal action a*. As will be shown in Section 4, K̃ may be computed fairly simply, in terms of the root of a single non-linear equation. In this way, we have the following approximate MDP-DMED algorithm (see Honda and Takemura (2010) and Honda and Takemura (2011) for the multi-armed bandit version of this algorithm). At any time t ≥ 1, let x_t be the current state, and construct the estimators as in the MDP-UCB algorithm in Section 3.1, P̂_t, Â_t, and utilize these to compute the estimated optimal values, φ̂_t = φ(Â_t, P̂_t) and v̂_t = v(Â_t, P̂_t). Let â*_t = arg max_{a∈A(x_t)} L(x_t, a, p̂^a_{x_t}, v̂_t) be the estimated "best" action to take at time t. For each a ≠ â*_t, compute the discrepancies

D_t(a) = ln(t)/K̃_{x_t,a}(P̂_t) − T^a_{x_t}(t).

If max_{a≠â*_t} D_t(a) ≤ 0, take π(t) = â*_t; otherwise, take π(t) = arg max_{a≠â*_t} D_t(a). Following this algorithm, we perpetually reduce the discrepancy between the number of times the estimated sub-optimal actions have been taken and the estimated rate at which those actions should be taken. The exchange from K to K̃ sacrifices some performance in the pursuit of computational simplicity; however, it also seems clear from computational experiments that MDP-DMED as above is not only computationally tractable, but also produces reasonable performance in terms of achieving small regret, c.f. Figure 3. Further discussion of the performance of this algorithm is given in Section 5.
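A minimal sketch of the resulting selection rule, assuming the divergences K̃ have already been computed (e.g., via the λ-Formulation of Section 4); the dictionary-based interface and names are our own, and K̃ = ∞ is handled so that D_t(a) = −T^a_{x_t}(t), matching Theorem 3 below.

```python
import numpy as np

def dmed_select_action(t, actions, best_action, counts, K_tilde):
    """DMED-style selection: D_t(a) = ln(t)/K_tilde[a] - counts[a]; take the
    estimated-best action unless some alternative has positive discrepancy.
    counts[a]: times a was taken in the current state; K_tilde[a]: the
    divergence K~ for action a (np.inf allowed)."""
    disc = {a: np.log(t) / K_tilde[a] - counts[a]
            for a in actions if a != best_action}
    if not disc or max(disc.values()) <= 0:
        return best_action
    return max(disc, key=disc.get)

# Example with two alternatives to the estimated-best action "a1"
print(dmed_select_action(t=1000, actions=["a1", "a2", "a3"],
                         best_action="a1",
                         counts={"a2": 10, "a3": 200},
                         K_tilde={"a2": 0.05, "a3": np.inf}))
```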
Optimistic Linear Programming, Another UCB-Type Algorithm for MDPs Under Uncertain Transitions

As we have previously noted, Tewari and Bartlett (2008) raise some legitimate computational concerns. They propose an alternative algorithm, which they term "optimistic linear programming" (OLP), that is closely related to the MDP-UCB algorithm presented here. The main difference between OLP and MDP-UCB is that OLP does not use the KL divergence to determine the confidence interval. Instead, OLP uses the L_1 distance, which allows the resulting index to be computed by solving linear programs. This reduces the computational complexity at the cost of performance. As we will show in Section 4, the MDP-UCB optimization problem can be simplified drastically, rendering the use of OLP, at least with respect to the computational issues, unnecessary. The algorithm we present here is a version of the OLP algorithm presented in Tewari and Bartlett (2008). At any time t ≥ 1, let x_t be the current state, and construct the estimators as in the MDP-UCB algorithm in Section 3.1, P̂_t, Â_t, and utilize these to compute the estimated optimal values, φ̂_t = φ(Â_t, P̂_t) and v̂_t = v(Â_t, P̂_t). At this point, we implement the following decision rule: for each action a ∈ A(x_t), we compute the following index, again maximizing value within some distance of the current estimates:

u_a(t) = sup_{q∈Θ} { L(x_t, a, q, v̂_t) : ‖q − p̂^a_{x_t}(t)‖_1 ≤ δ_a(t) },

for an appropriate confidence radius δ_a(t), and take action π(t) = arg max_{a∈A(x_t)} u_a(t).

A Thompson-Type Algorithm for MDPs Under Uncertain Transitions

In MDP-UCB, MDP-DMED, and OLP, above, we realized the notion of "exploration" in terms of considering alternative hypotheses that were "close" to the current estimates within Θ, interpreting closeness in terms of "plausibility". In this section, we consider an alternative form of exploration through random sampling over Θ, based on the currently available data. Given a uniform prior over Θ, the posterior for p^a_x is given by a Dirichlet distribution with the observed occurrences. Posterior Sampling (MDP-PS) proceeds in the following way: At any time t ≥ 1, let x_t be the current state, and construct the estimators as in the MDP-UCB algorithm in Section 3.1, P̂_t, Â_t, and utilize these to compute the estimated optimal values, φ̂_t = φ(Â_t, P̂_t) and v̂_t = v(Â_t, P̂_t). In addition, generate the following random vectors: For each action a ∈ A(x_t), let T^a_{x_t}(t) = [T^a_{x_t,y}(t)]_{y∈S} be the vector of observed transition counts from state x_t to y under action a. Generate the random vector Q^a(t) according to

Q^a(t) ~ Dirichlet( [T^a_{x_t,y}(t) + 1]_{y∈S} ).

The Q^a(t) are distributed according to the joint posterior distribution of p^a_{x_t} with a uniform prior. At this point, define the following values as posterior sampled estimates of the potential value L of each action:

W_a(t) = r_{x_t,a} + Σ_y Q^a_y(t) v̂_y,

and take action π(t) = arg max_{a∈A(x_t)} W_a(t). In this way, we probabilistically explore likely hypotheses within Θ, and act according to the action with the best hypothesized value.
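A minimal sketch of one MDP-PS decision, assuming the estimated values v̂_t are already available; the Dirichlet posterior follows from the uniform prior as described above, while the interface names and example numbers are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ps_select_action(rewards, trans_counts, v_hat):
    """MDP-PS step for the current state: sample each action's transition
    vector from its Dirichlet posterior (uniform prior -> parameters
    counts + 1) and act greedily on the sampled values W_a."""
    w = {}
    for a, counts in trans_counts.items():
        q = rng.dirichlet(np.asarray(counts, dtype=float) + 1.0)
        w[a] = rewards[a] + q @ v_hat   # W_a(t) = r_{x,a} + sum_y Q^a_y v_y
    return max(w, key=w.get)

# Example with two actions in a 3-state MDP (illustrative numbers only)
v_hat = np.array([0.0, 1.0, 2.0])
print(ps_select_action({"a1": 0.5, "a2": 0.2},
                       {"a1": [4, 1, 0], "a2": [0, 2, 3]}, v_hat))
```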
Recalling the notation $I(p, q) = \sum_x p_x \ln(p_x/q_x)$, MDP-UCB has to repeatedly solve the following optimization problem: The index of the MDP-UCB algorithm may be efficiently expressed in terms of the C function above, which we will refer to as the q-Formulation. This represents an |S|-dimensional non-linear constrained optimization problem which is not, in general, easy to solve. For mathematical completeness, as well as for practical implementation, we first analyze some trivial cases. Let $\mu_p = \sum_x p_x v_x$ and $V = \max_x v_x$; then Theorem 1 The value of $C(p, v, \delta)$ can be easily found in the following cases: • If δ < 0, then the optimization problem $C(p, v, \delta)$ is infeasible and we say $C(p, v, \delta) = -\infty$. Proof of this theorem is provided in Appendix A.1. For other cases, we can reduce this to solving a 2-dimensional system of non-linear equations, with unknowns $\mu^*_q$ and λ, as follows. Theorem 2 For any δ > 0 and v such that $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$, Proof of this theorem is provided in Appendix A.2. Solving these systems, which we will refer to as the $(\mu^*_q, \lambda)$-Formulation, provides dramatic speed increases for the implementation of the algorithm (Figure 1). We also note that the $(\mu^*_q, \lambda)$-Formulation scales manageably with the dimension of the state space, as opposed to the q-Formulation. Additionally, the structure of the equations admits several nice solution methods since, for a given $\mu_q$, the second equation has a unique solution for λ in the indicated range, and given that solution, the summation in the first equation is increasing to infinity as a function of $\mu_q$. MDP-DMED Next we examine the MDP-DMED algorithm from Section 3.2. Again recalling the notation $I(p, q) = \sum_x p_x \ln(p_x/q_x)$, MDP-DMED has to repeatedly solve the following optimization problems: The rate function $\hat{K}$ of the MDP-DMED algorithm may be efficiently expressed in terms of the D function above, which we will refer to as the q-Formulation. This represents an |S|-dimensional non-linear constrained optimization problem which is not, in general, easy to solve. As before, we consider some trivial cases first. Let $\mu_p = \sum_x p_x v_x$ and $V = \max_x v_x$; then Theorem 3 The value of $D(p, v, \rho)$, and by extension $D_t(a)$, can be easily found in the following cases: • If ρ > V, then the optimization problem $D(p, v, \rho)$ is infeasible and we say $D(p, v, \rho) = \infty$ and $D_t(a) = -T_{x_t,a}(t)$. • If $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and ρ = V, then the optimization problem $D(p, v, \rho)$ diverges to infinity and we say $D(p, v, \rho) = \infty$ and $D_t(a) = -T_{x_t,a}(t)$. Proof of this theorem is provided in Appendix A.3. For other cases, this optimization problem reduces to solving a single non-linear equation in one unknown, λ, as follows: Theorem 4 For any v such that $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and $\mu_p < \rho < V$, Proof of this theorem is provided in Appendix A.4. As with the MDP-UCB case, solving this system, which we will refer to as the λ-Formulation, provides dramatic speed increases for the implementation of the algorithm (Figure 1). We also note that the λ-Formulation scales manageably with the dimension of the state space, as opposed to the q-Formulation. Additionally, the λ-Formulation structurally lends itself well to solution. Over the indicated range, the summation is positive and constant in the limit as λ → 0, and monotonically decreasing, diverging to negative infinity as $\lambda \to 1/(V - \rho)$. Hence the solution is unique, and can easily be found via bisection.
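Since the λ-Formulation reduces to a single monotone equation on the interval $(0, 1/(V - \rho))$, a bisection solver is immediate. Assembling the pieces of Appendix A.4, the equation being solved is presumably the stationarity condition $\sum_{x \in S} p_x(\rho - v_x)/(1 + \lambda(\rho - v_x)) = 0$, with $D(p, v, \rho) = \sum_{x \in S} p_x \ln(1 + \lambda(\rho - v_x))$ at the root. A minimal C++ sketch under that assumption (iteration count and bracketing are arbitrary choices):

#include <vector>
#include <algorithm>

// g(lambda) = sum_x p_x (rho - v_x) / (1 + lambda (rho - v_x)).
// Per the text: g(0+) = rho - mu_p > 0, and g decreases monotonically,
// diverging to -infinity as lambda -> 1/(V - rho), so the root is unique.
double g(double lam, const std::vector<double>& p,
         const std::vector<double>& v, double rho) {
    double s = 0.0;
    for (std::size_t x = 0; x < p.size(); ++x)
        s += p[x] * (rho - v[x]) / (1.0 + lam * (rho - v[x]));
    return s;
}

double solve_lambda(const std::vector<double>& p,
                    const std::vector<double>& v, double rho) {
    double V = v[0];
    for (double vx : v) V = std::max(V, vx);
    double lo = 0.0, hi = 1.0 / (V - rho);   // bracket: the open interval
    for (int it = 0; it < 200; ++it) {       // plain bisection, never at endpoints
        double mid = 0.5 * (lo + hi);
        if (g(mid, p, v, rho) > 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}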
OLP Next we examine the OLP algorithm from Section 3.3. OLP has to repeatedly solve the following optimization problem: The index of the OLP algorithm may be efficiently expressed in terms of the B function above. $B(p, v, \delta)$ is equivalent to the following linear program: This represents an |S|-dimensional linear program, which can generally be computed quite efficiently. However, as the dimension of the state space increases, we incur a greater computational burden (Figure 1). MDP-PS The most attractive advantage of MDP-PS is the reduced computational cost relative to the other three proposed algorithms (Figure 2). Notice there is no extra optimization problem that needs to be solved. In the MDP-UCB algorithm, at every time t, we had to iteratively solve $|A(x_t)|$ instances of $C(p, v, \delta)$; for OLP, $|A(x_t)|$ instances of $B(p, v, \delta)$; and for MDP-DMED, $|A(x_t)|$ instances of $D(p, v, \rho)$. Under MDP-PS, the computational burden stems from sampling from the Dirichlet distribution for each action (again, $|A(x_t)|$ steps), but this is a well studied problem with many efficiently implemented solutions (see, for example, McKay (2003)). Specific properties of the MDP-PS algorithm may still make these other algorithms worth pursuing, however, as seen in Section 5. Computation Time Comparison To demonstrate the computational time savings achieved by these simplifications, we randomly generated the parameters for 15 different action indices and timed how long each algorithm took to solve them. We repeated this for 4 different values of |S|, the dimension of the state space: 10, 100, 1,000, and 10,000. In Figure 1, we plot the mean computation time as the dimension of the state space increases. In order to keep the comparisons as equitable as possible, the optimization problems for all the algorithms (with the exception of MDP-PS) were solved to within 4 digits of accuracy using TensorFlow for Python (Abadi et al., 2016). MDP-PS used SciPy's random Dirichlet generator. They were all run on a MacBook Pro with a 3.1 GHz i7 processor and 16 GB of DDR3 RAM. OLP also suffers from increasing computation time as the dimension of the state space increases. OLP performs the worst in terms of computational time, which is likely due to the fact that we are not using a specialized fast LP solver but rather TensorFlow. In Figure 2 we can see the relative performances of the top three algorithms. Figure 2: Computation time as |S| increases for the top three performers. The absolute time is not as important as the relative time. There are numerous ways to achieve significantly faster absolute times, but our focus here is to demonstrate the relative speed increase gained by using our simplifications. In addition, one can get raw computational time savings by developing a devoted optimizer for problems of this type, but if we restrict to using a generic black box optimizer, the method we employed seems a reasonable reflection of what one would do. Comparison of Performance In this section we discuss the results of our simulation test of these algorithms on a small example problem. There is nothing particularly special about the values for this example, and we observe similar results under other values. Our example had 3 states ($x_1$, $x_2$, and $x_3$) with 2 available actions ($a_1$ and $a_2$) in each state. Below we show the transition probabilities, as well as the reward returned under each action. If these transition probabilities were known, the optimal policy for this MDP would be $\pi^*(x_1) = a_1$, $\pi^*(x_2) = a_2$, and $\pi^*(x_3) = a_1$.
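A regret-tracking loop for this kind of experiment is straightforward to sketch. In the sketch below the transition and reward containers are placeholders (the paper's table of values did not survive extraction), and regret is accumulated against the known optimal average reward, denoted phi_star; this is an assumed bookkeeping convention, not the paper's exact code.

#include <vector>
#include <random>

// One simulation run: follow an algorithm for T steps in a small MDP and
// record regret_t = t * phi_star - (reward collected so far).
struct MDP {
    std::vector<std::vector<std::vector<double>>> P; // P[a][x][y]: transition prob.
    std::vector<std::vector<double>> r;              // r[a][x]: reward
};

std::vector<double> run_once(const MDP& m, double phi_star, int T,
                             int (*choose)(int state, int t),  // algorithm under test
                             std::mt19937& rng) {
    std::vector<double> regret(T);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int x = 0;
    double collected = 0.0;
    for (int t = 0; t < T; ++t) {
        int a = choose(x, t);
        collected += m.r[a][x];
        regret[t] = (t + 1) * phi_star - collected;
        double roll = u(rng), acc = 0.0;              // sample the next state
        int y = 0;
        for (; y + 1 < static_cast<int>(m.P[a][x].size()); ++y) {
            acc += m.P[a][x][y];
            if (roll < acc) break;
        }
        x = y;
    }
    return regret;  // averaged over repeated runs to produce curves as in Figure 3
}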
We simulated each algorithm 100 times over a time horizon of 10,000, and for each time step we computed the mean regret as well as the variance; the results are shown in Figure 3. We can see that all algorithms seem to have logarithmic growth of regret. There are a few interesting differences that the plot highlights, at least for these specific parameter values: MDP-DMED has not only the highest finite time regret, but also a large variance that seems to increase over time. This seems primarily due to the "epoch" based nature of the algorithm, which results in exponentially long periods when the algorithm may get trapped taking sub-optimal actions, incurring large regret until the true optimal actions are discovered. The benefit of this epoch structure is that once the optimal actions are discovered, they are taken for exponentially long periods, to the exclusion of sub-optimal actions. As expected, see Tewari and Bartlett (2008), OLP has a higher finite time regret when compared to MDP-UCB, but still achieves logarithmic growth. MDP-PS seems to perform best, exhibiting the lowest finite time regret as well as the tightest variance. This seems largely in agreement with the performance of PS-type algorithms in other bandit problems as well, in which they are frequently asymptotically optimal, cf. Cowan et al. (2017) and references therein. Algorithm Robustness-Inaccurate Priors How do these algorithms respond to potentially "unlucky" or non-representative streaks of data? How do bad initial estimates affect their performance? Can these algorithms be fooled, and what are the resulting costs before they recover? This is a practically important question, in terms of data security and risk assessment, but also an important element of evaluating a learning algorithm: how does the learning agent respond to non-ideal conditions? To test these algorithms, we "rigged" or biased the first 60 actions and transitions, such that under the estimated transition probabilities the optimal policy would be to activate the sub-optimal action in each state. In more detail, let $T^a_{x,y}$ be the number of times we transitioned from state x to state y under action a. Then we rigged $T^a$ so that it started like so:

       x_1  x_2  x_3
 x_1    1    1    8

Under the resulting (bad) estimated transition probabilities, we have that the (estimated) optimal policy is $\hat{\pi}^*(x_1) = a_2$, $\hat{\pi}^*(x_2) = a_1$, and $\hat{\pi}^*(x_3) = a_1$, which in fact chooses the sub-optimal action in each state. The subsequent performances of the MDP algorithms are plotted in Figure 4. All algorithms still appear to have logarithmic growth in regret, suggesting they can all 'recover' from the initial bad estimates. It is striking, though, the extent to which the average regrets for MDP-DMED and MDP-PS are affected, increasing dramatically as a result, with MDP-PS demonstrating an increase in variance as well. However, the MDP-UCB algorithm seems relatively stable: its average regret has barely increased, and it maintains a small variance. Empirically, this phenomenon appears common for the MDP-UCB algorithm under other extreme conditions. The underlying cause, and a rigorous examination of these intuitions, will be explored in a future work. Conclusion and Future Work In this paper we have presented four algorithms adapted from classical multi-armed bandit algorithms that either are provably asymptotically optimal or at least give that appearance in practice. Figure 4: Robustness test. MDP-UCB seems to be largely unaffected by the inaccurate priors.
The simplifications for MDP-UCB and MDP-DMED presented here have been shown to dramatically reduce the computational burden for these algorithms, rendering them more useful in practice. As a result, the provably worse performing OLP no longer has any advantage over them. MDP-DMED under the λ-Formulation is fast and possibly optimal, but has a high variance for regret that increases over time. While MDP-PS is very fast and appears to be optimal, it is highly sensitive to incorrect priors or extreme sampling errors. MDP-UCB is provably optimal, has stable performance under various extreme conditions, and can be computed quickly using the $(\mu^*_q, \lambda)$-Formulation. There are various interesting directions in which to continue this work; we mention a few potential avenues here. The idea of "exploring the hypothesis space" is something that extends immediately to the case of unknown rewards. Each of the algorithms presented here generalizes immediately to such situations, though the computational simplifications would need to be modified significantly. It would also be of theoretical interest to find sufficient conditions on the estimators used to ensure asymptotically optimal performance. This could potentially allow these algorithms to be modified to use other state value estimators (for example, Q-learning, Watkins (1989)) while maintaining their theoretical guarantees. From a practical computational point of view, we could consider systems where we cannot easily iterate over all possible states, and how these algorithms can be modified to address this. These ideas will be explored in future works. Acknowledgments We acknowledge support for this work from the National Science Foundation, NSF grant CMMI-1662629. Appendix A. Proof of Theorems of Section 4 A.1 Proof of Theorem 1 First we restate Theorem 1: The value of $C(p, v, \delta)$ can be easily found in the following cases: • If δ < 0, then the optimization problem $C(p, v, \delta)$ is infeasible and we say $C(p, v, \delta) = -\infty$. Proof Recall that $I(p, q)$ is the KL divergence from p to q. We then have by Gibbs' inequality that $I(p, q) \geq 0$, with equality if and only if p = q. Thus, if δ < 0, then the optimization problem is infeasible. If δ = 0, then it has the trivial solution $q^* = p$. We therefore take δ > 0. Now, if $v_{x_1} = v_{x_2}$ for all $x_1, x_2 \in S$, then any feasible probability vector q is also optimal, with $C(p, v, \delta) = v_x = \mu_p$. A.2 Proof of Theorem 2 In this section we will prove Theorem 2, which we restate here. Let $\mu_p = \sum_x p_x v_x$ and $V = \max_x v_x$. Then for any v such that $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and δ > 0, Before giving the formal proof, it may be helpful to understand the overall conception of the proof. The main idea is the use of Lagrange multiplier techniques, which greatly reduce the dimensionality of the problem to be solved. We are able to exchange the problem of finding the optimal probability vector $q^*$ for a problem in which we need only find two moments of the optimal $q^*$, a dramatic dimension reduction. In the MDP-UCB case, it suffices to find the unknown optimal mean of the optimal distribution $q^*$, $\mu^*_q$, and a value $\lambda = \sigma^2_{q^*}/(\mu_p - \mu^*_q)$ which depends on the optimal, unknown variance. Proof Recall the definition of $C(p, v, \delta)$. Since $\{q : q \in \Theta, I(p, q) \leq \delta\}$ is a closed, compact set, the supremum will be realized by a maximum, and we may express the problem of computing $C(p, v, \delta)$ in the following form: Let $\mu^*_q = \sum_{x \in S} q^*_x v_x$ be the optimal value of the objective function, $\mu_p = \sum_{x \in S} p_x v_x$, and $V = \max_x v_x$.
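For reference, the constrained form referred to here can be reconstructed from the stated feasible set and objective; it presumably reads

$$C(p, v, \delta) \;=\; \max_{q} \sum_{x \in S} q_x v_x \quad \text{subject to} \quad I(p, q) \le \delta, \qquad \sum_{x \in S} q_x = 1, \qquad q_x > 0 \;\; \forall x \in S,$$

with the KL constraint tightened to equality by Lemma 5 in the argument that follows. This is a reconstruction of an elided display, not the verbatim original.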
First we will argue that $\mu_p \leq \mu^*_q < V$. To see the first inequality, observe that q = p satisfies the constraints and is therefore feasible; hence the objective function at q = p is less than or equal to the optimum: $\mu_p \leq \mu^*_q$. To see the second, note that $\mu^*_q$ will be an expected value over the $\{v_x\}$, and hence less than or equal to the maximum, V. Because the probabilities in $q^*$ are strictly positive, the expected value $\mu^*_q$ must actually be strictly less than the maximum: $\mu^*_q < V$. Utilizing Lemma 5 in Appendix B, for any feasible q such that the KL divergence constraint is not achieved with equality, a different feasible $q'$ exists with an improved value of the objective function. Hence we can rewrite the optimization problem with the KL constraint achieved with equality, subject to $\sum_{x \in S} q_x = 1$. We now turn to the main task, reducing the dimension of the optimization problem. Using Lagrange multipliers, we have the following auxiliary function. Note that when using the Lagrange multipliers, we can safely ignore the positivity inequality constraints in Eq. (8), because they are strict inequalities, thus inactive, and removing them will not change the local optimum. Taking partial derivatives and setting them to zero results in the following system of equations for the optimal solution. We are looking for a solution $q^*$ to this system, and any such solution will be a global maximum. To see this, observe that our optimization problem is a convex optimization problem. This can be seen more easily when put in its original form, as in Eq. (3). We are maximizing a linear (and thus concave) function, the inequality constraint is convex, and the equality constraints are affine. Thus, any stationary point will be a local maximum and any local maximum will be a global maximum (Boyd and Vandenberghe, 2004). Multiplying Eq. (9) through by $q^*_x$ gives Eq. (10); summing Eq. (10) over x gives Eq. (11). We now introduce a quantity $\sigma^2_{q^*}$, the variance under the transition law $q^*$, explicitly defined in Eq. (12). Looking at Eq. (10) again, but this time multiplying through by $v_x$ for each $x \in S$ and summing over x, yields Eq. (13). Equations (11) and (13) form a system of equations with two unknowns, µ and λ. Solving this system yields expressions for µ and λ. Substituting them into the first equation in the original system, Eq. (9), and recalling the relationship between λ and µ from Eq. (11), we get an expression for $q^*_x$ for each x. We can now rewrite the optimization problem in terms of these variables. The positivity constraint in Eq. (8), recalling that $p_x > 0$ for all $x \in S$, yields one condition; the normalization constraint in Eq. (7) yields another; and the KL divergence constraint in Eq. (6) yields a third. Observe that $\mu_p$ must be strictly less than $\mu^*_q$. To see this, take q = p; then q is feasible and the left hand side of Eq. (5) is 0, which is less than δ. Lemma 5 implies there exists some feasible $q'$ with a strictly greater objective function, i.e., $\mu_p = \mu_q < \mu_{q'} \leq \mu^*_q$. We also know that λ < 0 because $\sigma^2_{q^*} > 0$ by definition in Eq. (12). Thus we can rewrite the optimization problem accordingly. Having established that λ is strictly less than zero, we can simplify the last constraint, Eq. (15), as follows. Thus we have $\max_{\mu_q, \lambda} \mu_q$, subject to just two equations with two unknowns.
Recalling that any feasible solution will be a global maximum, by our discussion of the convexity of the optimization problem, we have the desired result, where the only unknowns are $\mu^*_q$ and λ, and they satisfy the constraints above. A.3 Proof of Theorem 3 First we restate Theorem 3: The value of $D(p, v, \rho)$, and by extension $D_t(a)$, can be easily found in the following cases: • If ρ > V, then the optimization problem $D(p, v, \rho)$ is infeasible and we say $D(p, v, \rho) = \infty$ and $D_t(a) = -T_{x_t,a}(t)$. • If $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and ρ = V, then the optimization problem $D(p, v, \rho)$ diverges to infinity and we say $D(p, v, \rho) = \infty$ and $D_t(a) = -T_{x_t,a}(t)$. Proof For $\rho > V = \max_x v_x$, the optimization problem is infeasible because there is no feasible q that will have an average more than V (i.e., $\sum_x q_x v_x \leq V$). In that case we take $D(p, v, \rho) = \infty$ and the corresponding DMED discrepancy index $D_t(a) = -T_{x_t,a}(t)$. For any $\rho \leq \mu_p$, i.e., less than or equal to the expected value under the current estimates, $D(p, v, \rho) = 0$ by simply taking $q^* = p$, and we take the corresponding DMED discrepancy index $D_t(a) = \infty$. If $v_{x_1} = v_{x_2}$ for all $x_1, x_2 \in S$, then $\mu_p = v_x = V$ and, depending on the value of ρ, one of the previous two situations applies. If $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and ρ = V, we have the following. Any feasible q such that $\sum_x q_x v_x = V$ must have $q_x = 0$ for some $x \in S$ such that $v_x < V$, in which case q falls outside of Θ, and it is in fact not feasible. We therefore take $D(p, v, \rho) = \infty$ and the corresponding DMED discrepancy index $D_t(a) = -T_{x_t,a}(t)$. A.4 Proof of Theorem 4 In this section we will prove Theorem 4, which we restate here. Let $V = \max_x v_x$. Then, for any v such that $v_{x_1} \neq v_{x_2}$ for some $x_1, x_2 \in S$ and for $\sum_{x \in S} p_x v_x < \rho < V$, Before giving the formal proof, it may be helpful to understand the overall conception of the proof. The main idea is the use of Lagrange multiplier techniques, which greatly reduce the dimensionality of the problem to be solved. We are able to exchange the problem of finding the optimal probability vector $q^*$ for a problem in which we need only find two moments of the optimal $q^*$, a dramatic dimension reduction. In the MDP-DMED case we are able to simplify even further, because the optimal unknown mean $\mu^*_q$ is given as ρ, and it suffices to find $\lambda = (\mu^*_q - \mu_p)/\sigma^2_{q^*}$, which is a function of the unknown optimal variance. The proof follows along similar lines as the one for MDP-UCB in Appendix A.2. Proof Recall the definition of $D(p, v, \rho)$ in Eq. (16). We want to show that the infimum in Eq. (16) is realized by a minimum. Let 0 < ε < 1 and $x^* = \arg\max_x v_x$. Consider the probability vector $q^\varepsilon$ defined as $q^\varepsilon_{x^*} = 1 - \varepsilon$ and $q^\varepsilon_x = \varepsilon/|S|$ for $x \neq x^*$. For the appropriate choice of ε, we will have $\sum_x q^\varepsilon_x v_x = \rho < V$ with finite-valued $I(p, q^\varepsilon)$. Thus, $D(p, v, \rho) \leq I(p, q^\varepsilon)$ and we can restrict to only considering $q \in \Theta$ such that $I(p, q) \leq I(p, q^\varepsilon)$. This feasible set is closed and compact, and hence the infimum is realized by a minimum over this set. Since $I(p, q^\varepsilon)$ diverges to infinity as ε → 0, this minimum must occur in the interior of the constrained feasible region. Hence the infimum without the additional constraint on feasibility will also be realized by a minimum within the interior of the set $\{q \in \Theta, \sum_x q_x v_x \geq \rho\}$. Thus, we can rewrite the problem of computing $D(p, v, \rho)$ in the following form, with the constraint $\sum_{x \in S} q_x = 1$. Here we can use Lemma 6 in Appendix B to observe that for any feasible q where the constraint in Eq.
(17) is strict, we can construct a feasible $q'$ with a strictly smaller objective function (the KL divergence with respect to p). As such, the optimum must occur when this constraint is satisfied with equality, and the optimization problem can be rewritten accordingly. We now turn to the main task, reducing the dimension of the optimization problem. Using Lagrange multipliers, we have the following auxiliary function. Note that when using the Lagrange multipliers, we can safely ignore the positivity constraints in Eq. (20), because they are strict inequalities, thus inactive, and thus have a Lagrange multiplier of zero. Taking partial derivatives and setting them to zero results in the following system of equations for the optimal solution $q^*$: the stationarity conditions, together with $\sum_{x \in S} q^*_x v_x = \rho$ and $\sum_{x \in S} q^*_x = 1$. We are looking for a solution $q^*$ to this system, and any such solution will be a global minimum. To see this, observe that our optimization problem is a convex optimization problem. We are minimizing a convex function, with affine equality constraints. Thus, any stationary point will be a local minimum, and any local minimum will be a global minimum (Boyd and Vandenberghe, 2004). Consider the first equation: multiply through by $q^*_x$ to get $-p_x = \lambda v_x q^*_x + \mu q^*_x$. Summing this over x and simplifying accordingly, we get $-1 = \lambda\rho + \mu$. Solving these for λ and µ, substituting them into the first equation in the original system, Eq. (21), and noting that Eq. (23) implies $\mu = -1 - \lambda\rho$, we get, for each x, the expression in Eq. (24). In order to reduce the original problem to a 1-dimensional problem, we now express each of the constraints in terms of our new variables using Eq. (24). The positivity constraint in Eq. (20), recalling that $p_x > 0$ for all $x \in S$, yields one condition; the normalization constraint in Eq. (19) yields $\sum_x \frac{p_x}{1 + (\rho - v_x)\lambda} = 1$; and the mean constraint in Eq. (18) yields Eq. (28). Thus we can write the optimization problem as $\min_\lambda \sum_x p_x \ln(1 + \lambda(\rho - v_x))$, subject to these constraints. Recall that any feasible solution will be a global minimum, by our discussion of the convexity of the optimization problem. To find a feasible solution, notice that the derivative of the objective function with respect to λ is simply the first constraint, Eq. (29). Therefore any stationary point of the objective function will satisfy the constraint, be feasible, and thus be a global minimum. Hence, we may replace the original optimization problem with the problem of solving this equation, subject to $0 < \lambda < \frac{1}{V - \rho}$. Thus we have the desired result, where the only unknown is λ, and it satisfies the constraints above. Appendix B. KL Divergence Optimization Lemmas The purpose of this section is to state and prove a number of lemmas associated with convex optimization problems involving KL-divergence terms. They are relevant, but tangential, to most of the content of the paper. In this section, we take p ∈ Θ to be a distribution over S, with v the vector of intermediate state values. It is convenient to define $\mu_p = \sum_x p_x v_x$ and $V = \max_x v_x$. The vector q is taken to be another distribution over S, with possibly zero-valued elements. The KL divergence between p and q is given by $I(p, q) = \sum_x p_x \ln(p_x/q_x)$. Lemma 5 Let q ∈ Θ be such that $I(p, q) < \delta < \infty$, and suppose $v_{x_1} > v_{x_2}$ for some $x_1, x_2 \in S$. Then there is a valid probability distribution $q'$ such that $I(p, q') \leq \delta$ and $\mu_{q'} > \mu_q$. Proof Consider constructing an alternative $q' \in \Theta$ in the following way. Define $q'_{x_1} = q_{x_1} + \Delta$, $q'_{x_2} = q_{x_2} - \Delta$, and $q'_x = q_x$ for $x \neq x_1, x_2$.
Note that for $0 \leq \Delta < \min(q_{x_1}, q_{x_2})$, $q'$ will be a valid probability distribution vector over S, and for Δ > 0 we have $\mu_{q'} > \mu_q$. Consider constructing an alternative distribution $q' \in \Theta$ in the following way. For $0 \leq \Delta < q_{x_1}$, define $q'$ by $q'_{x_1} = q_{x_1} - \Delta$, $q'_{x_2} = q_{x_2} + \Delta$, and $q'_x = q_x$ for $x \neq x_1, x_2$. As before, for Δ in this range, $q' \in \Theta$ represents a valid probability distribution on S. As in the proof of Lemma 5, the mean changes with Δ; taking Δ sufficiently small (so that the mean does not drop below ρ), $q'$ remains feasible. It remains to show that $I(p, q') \leq I(p, q)$. Similar to the proof of Lemma 5, we have that $I(p, q') = I(p, q) + p_{x_1} \ln\frac{q_{x_1}}{q_{x_1} - \Delta} + p_{x_2} \ln\frac{q_{x_2}}{q_{x_2} + \Delta}$. Hence we see that $I(p, q') = I(p, q)$ when Δ = 0. Looking at the derivative of $I(p, q')$ with respect to Δ at Δ = 0, we see that it equals $\frac{p_{x_1}}{q_{x_1}} - \frac{p_{x_2}}{q_{x_2}} < 0$, where the last step follows since $p_{x_1}/q_{x_1} < 1$ and $p_{x_2}/q_{x_2} > 1$, as discussed initially. Hence, while the KL divergences are equal for Δ = 0, $I(p, q')$ is decreasing within some small neighborhood, and the KL divergence between p and $q'$ is reduced.
2019-09-28T21:56:11.000Z
2019-09-28T00:00:00.000
{ "year": 2019, "sha1": "5525aa2a8b3bc8cd328667242d4a94c71c5daa45", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5525aa2a8b3bc8cd328667242d4a94c71c5daa45", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
7695368
pes2o/s2orc
v3-fos-license
On Mixed Abstraction, Languages, and Simulation Approach to Refinement with SystemC AMS Executable specifications and simulations are cornerstones of system design flows. Complex mixed-signal embedded systems can be specified with SystemC AMS, which supports abstraction and extensible models of computation. The language contains semantics for module connections and synchronization required in analog and digital interaction. Through the synchronization layer, user defined models of computation, solvers, and simulators can be unified in the SystemC AMS simulator for achieving low-level abstraction and model refinement. These improvements assist in amplifying model aspects and their contribution to the overall system behavior. This work presents cosimulating refined models with the timed data flow paradigm of SystemC AMS. The methodology uses C-based interaction between simulators. An RTL model of the data encryption standard is demonstrated as an example. The methodology is flexible and can be applied in early design decision tradeoff, architecture experimentation, and particularly for model refinement and critical behavior analysis. Introduction The exploration of design space with model refinement by reusing legacy models requires developing an executable specification before the product specification is frozen. Analyzing alternate system configurations and rearranging model blocks depend on putting together quick virtual prototypes of the system. Immediate try-out of various models in the executable specification is possible with high simulation speed and relative ease of model plug-in. Decomposing models into detailed implementation options exposes problem areas, helps resolve conflicting requirements, and provides greater visibility in design by weighing design feasibility against goals such as critical analysis (corner case coverage, worst case analysis, failure analysis, and fault tolerance), optimization (physical area, power consumption, or speed), and performance evaluation (speed, latency, throughput, bandwidth, response time, link utilization, BER, and QoS). Our work presents an approach to replace abstract model blocks described in SystemC AMS with fine-grain models written in a hardware description language. We cosimulate the two domains of Models of Computation (MoC). The methodology is purely simulation based and does not rely on controlled stepwise refinement or formalism. The goal is to directly and effortlessly insert specific models in the system and quickly generate architectures that can be traded off on merits of specification, correctness, performance, and accuracy to yield an implementable platform. This type of refinement needs a model-rich library of possible implementation choices. SystemC AMS is an executable specification-based analog mixed-signal simulator that supports varying abstraction levels. We present, using scalable cosimulation, a test case of an abstract SystemC AMS description in which a parity check block is refined as a synthesizable VHDL model of the Data Encryption Standard (DES). The description is an Amplitude Shift Keying (ASK) transceiver which originates from the ANDRES project [1], whose goal is to reduce the design time and cost of highly integrated embedded systems.
This work covers certain novel aspects. Until now, previous works have targeted refinement in a pure digital environment; in contrast, we do direct digital refinement in heterogeneous/mixed-signal systems. In the simulation domain, the older works use the VPI C interface [2,3], but to our knowledge there is no published work based on the VHPI interface. SystemC AMS spares the technical community the commercial licensing of costly system design tools (Section 3). SystemC AMS is also much faster than the conventional system design tools. Furthermore, the cosimulation interfaces in commercial tools are proprietary and inadequate in most academic tools, whereas SystemC AMS tackles these issues by being open source, C/C++ based, and through its synchronization layer. Related Work and Methodologies Methodologies for refinement, as it relates to design space and architecture exploration, use several methods which can be categorized as formal and informal. For example, for more controlled and formalized refinement, graph-based formalism (graphs of problem, specification, architecture, and network) is the cornerstone of SysteMoC [4]. These graphs are used for binding and mapping in feasible implementation, optimization, and resource allocation during refinement. A similar symbolic representation is in [5], which reduces the exploration issue to a linear programming optimization problem using a pseudo-boolean solver. Orthogonality-driven [6] refinement (behavior, timing, and interface) has also been proposed. In an earlier technique [7], the refinement task is split into types: control, data, and architecture. Dataflow graphs and trace transformation of events were utilized in [8]. Among the less formal methods, a holistic way of interconnecting modules [9] uses UML SysML. Interface-based [10] partitioning of networked architecture is suited for NoC applications. Automatic IP selection was proposed by [11]. A stepwise exploration flow (steps of analysis, building, exploration, composition, and estimation) is discussed in [12]. A framework using hardware emulation [13] has also been suggested. Simulation-based methods are widespread [6,14,15]. Many works take MPEG [4,13,16] and JPEG [8,17,18] as test cases. In compliance with the SystemC AMS philosophy of abstraction, our work uses simple cosimulation-based refinement embedded in a Timed Data Flow (TDF) graph while not requiring complex mechanics of interfacing; the technique is therefore informal. Related Tools Architecture development, design space exploration, and model refinement are all generally tool-driven processes. These concepts are closely related, and thus there is significant thrust from the technical community to research them. The effort can be broadly categorized into industrial and academic realms, each pushing with their own tools and neither providing a complete solution because of the sheer size of the problem.
In industry, architecture design, refinement, and partitioning are supported with automation by Electronic System-Level (ESL) tools, which predominantly target the digital domain of the heterogeneity because it makes the largest contribution to the design space. The ESL tools are: CoWare Platform Architecture, ARM RealView, Bluespec Development Workstation from Bluespec, Arteris NoCExplorer, Mirabilis VisualSim, Tensilica XPRES Compiler, AutoESL AutoPilot, Binachip-FPGA tool, Mitrion Platform, and Synfora PICO Express FPGA, among others. Most of these tools support hardware/software codesign using either virtual platforms or FPGA prototypes. The problem of refinement is usually labeled as high-level synthesis, which basically means generating a synthesizable circuit description from an algorithmic or behavioral description, that is, a translation process for converting an untimed SystemC-based description into an equivalent HDL description. The translation also converts cycle-accurate models of datapaths, control units, memory banks, busses, and interfaces to bit-accurate models. The HDL description can then be input to other refinement engines and applied with architectural design constraints to produce even lower levels of abstraction, for example, RTL or a gate-level netlist mapped to a target technology. High-level synthesis is supported by Cadence C-to-Silicon Compiler, Mentor Graphics CatapultC and Seamless, Synopsys CoCentric, SystemCrafter SC from SystemCrafter, and Cynthesizer from Forte Design Systems. These tools are a big jump forward in bridging the gap between macroarchitecture and the underlying mapped microarchitecture, since SystemC is suited for design exploration. The only drawback is that the HDL description is not generic or clean enough to readily target any vendor's FPGA fabric or ASIC standard cells. High-level synthesis tools and ESL tools are complemented by custom signal processing tools, for example, MATLAB/Simulink, Cadence Signal Processing Worksystem, or Virtuoso. The motivation has been prevailing wireless-based products. Instruction Set Simulators are used for running virtual firmware on the processor core. Industrial designs are centered around IP cores of Multiprocessor System on Chip (MPSoC) and on-chip bus communication. The bus connects processors, power management units, timing units, memory blocks, microcontrollers, DSPs, external busses, for example, USB and Ethernet, and supporting peripherals. The processors' architectures are generally RISC based, for example, ARM and PowerPC, and the bus architectures are AMBA or NoC. SystemC interface classes and Transaction Level Modeling (TLM) are a major modeling paradigm for bus topologies.
The academic research also employs SystemC-based tools. Academic circles use established tools such as Metropolis, Ptolemy, and its variants, for example, Kepler, for signal processing applications, mathematically intensive models, and MoC calculations. LESTER-UBS aims at reconfigurable architectures, while the MILAN framework is integrated with MATLAB, HiPerE, and DESERT (formal methods). The SESAME simulator uses Kahn Process Networks (KPNs) for concurrency and the Y-Chart for abstraction level and domain recognition. High-level systems can be described by the LISA language, which is an architectural description language. SystemCoDesigner takes specification graphs as inputs and generates platform-based virtual prototypes covering specification, automatic exploration, and implementation. The academic groups try to advance research while commercial tool manufacturers try to earn revenues on their R&D work; however, the marriage of academic and commercial tools is a rarity and the gulf is large. Some exceptions are that the UC Berkeley Ptolemy dataflow modeling paradigm was incorporated in the Agilent ADS simulator, Ptolemy from Agile Design has found wide acceptance in many large companies, Bluespec tools are written in Haskell, and SystemC was influenced by SpecC. A major drawback in the tools is the lack of analog modeling support below system level. Modeling with SystemC AMS 4.1. The Right Level of Abstraction. The motivation behind the analog and mixed-signal extension to SystemC has been the modeling of applications that are dominated by signal processing models [20], better understanding of hw/sw interaction, and developing concepts of AMS systems at the architectural level [21]. With this in mind, SystemC and SystemC AMS opted for C/C++ based classes and methods for their kernels, only augmented with hardware data types and concurrency. SystemC is now well settled in the EDA community because of the immense advantage it brings over conventional HDL simulators, whose inadequacy grows as hardware faithfully follows Moore's law and is more and more interlocked with software. The current SystemC AMS prototype offers three MoCs, as shown in Figure 1: electrical linear networks and linear signal flow (transfer function, pole-zero, or state space representation of input/output behavior). Both modeling paradigms embed in the SystemC sc_method() class. The third is the synchronous Timed Data Flow (TDF) MoC with its indigenous processing() method, which is a solver for computing the continuous time behavior of the model as defined by the user. Both linear MoCs solve linear implicit differential equations at the appropriate time. Simple nonlinear static behavior can be approximated with TDF by selecting a rational sampling rate. External solvers [30] can be interfaced with SystemC AMS, and user defined MoCs [31] can be formally bound. Analog TDF models can interact with TLM-based DE models [32]. These capabilities are due to the open architecture of SystemC AMS. Figure 2 shows a typical SystemC AMS cluster, which is a set of bound modules/nodes solved by the same MoC, such as TDF.
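To make the TDF paradigm concrete, the following is a minimal sketch of a TDF module against the standardized SystemC AMS 1.x API; the module name, timestep, rate, and gain value are illustrative assumptions, not taken from the ANDRES design.

// A minimal TDF module sketch; names and numeric values are placeholders.
#include <systemc-ams>

SCA_TDF_MODULE(gain_stage) {
    sca_tdf::sca_in<double>  in;   // TDF input port
    sca_tdf::sca_out<double> out;  // TDF output port

    void set_attributes() {
        set_timestep(1.0, sc_core::SC_US); // constant analog step width
        in.set_rate(1);                    // one token consumed per activation
        out.set_rate(1);                   // one token produced per activation
    }

    void processing() {                    // called once per static-schedule firing
        out.write(2.0 * in.read());        // simple static behavior (gain of 2)
    }

    SCA_CTOR(gain_stage) : in("in"), out("out") {}
};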
Timed Data Flow in SystemC AMS. SystemC AMS is a class library and analog mixed-signal extension built as an add-on to the SystemC digital simulator. SystemC AMS supports high-level model abstraction with the Timed Data Flow (TDF) MoC, which is perfectly suited for loose simulator coupling. TDF is an analog solver that abstracts away cycle-accurate timing information from the parent SystemC, a Discrete Event (DE) simulator. TDF tokens are equally time spaced, with sampling rates determined by the system clock [33]; that is, a constant analog stepping width (number of clock cycles) is mapped to the SystemC sc_time() class. In the current prototype this solver does not yet support adaptive or variable stepping, as commercial analog simulators do to trade accuracy for speed. At system level, an analog simulator is too slow because of the integration time problem. These solvers would compute a nonlinear system of equations for the model or netlist at each integration point. The simulation would be even slower for large circuits. Therefore, for system-level AMS designs it is best to use a DE-driven simulator (SystemC) with an integration-based simulator (SystemC AMS). This combination enables software codesign as well, which is not possible with SystemVerilog. The constant size stepping gives further simulation efficiency because the computational overhead of determining a new step size is absent. TDF scheduling is static (ABCCCCCCCCC in Figure 2) for determining the execution order of MoCs in the data flow network. This order is set at the cluster (connected modules with a common MoC) elaboration phase before simulation begins. A static schedule or firing vector is calculated once and is applied repeatedly in every firing. Synchronization Layer. The synchronization of analog TDF models to DE models, as well as of any user written special solvers or externally hooked foreign simulators, is the duty of the AMS synchronization layer [34], which registers all DE and TDF MoCs and their data (signals, variables). The synchronization layer shown in Figure 1 determines the execution order of the solvers and simulators in fixed time steps and synchronizes DE and TDF blocks in these steps. The analog and digital models are individually solved, and synchronization of the analog and digital domains is resolved by a relaxation method [35] with little overhead since interactions are discrete events. ASK Transceiver-A Case Study. The ASK-based systems use On Off Keying (OOK) modulation for conserving power during off cycles. This scheme is simple and inexpensive. These systems are implemented in remote keyless entry, tire pressure monitor, and antilock brake systems of automobiles. The ASK transceiver shown in Figure 3 is a nonconservative high-level analog mixed-signal model. Its subsystems are modeled in the TDF and DE domains. The analog TDF blocks are shown with shadows. The waveform envelopes and silences are detected as bits. The task is to replace the abstract cryptographic unit with a legitimate and realizable model. The selected models are VHDL RTL descriptions of the DES and Triple DES (3DES) algorithms. The DES algorithm, released by the National Institute of Standards and Technology (NIST), encrypts the message space M into the code space C using a unique key K, where the message, code, and key words are all 64 bits. Let e and d be the encoding and decoding functions; then the permutation cipher is one of the $2^{64}!$ permutations on M.
The encoding and decoding functions are $e_K$ and $d_K$, where the key space has size $2^{64}$. K is selected as one out of $2^{56}$ permutations that are most random among the $2^{64}$ total available. The remaining 8 bits may be used as parity bits for error detection. The key is changed frequently to reduce risk [36]. In theory the key can be determined by $2^{56}$, or more than 70 quadrillion ($70 \times 10^{15}$), operations. The process can be made more robust with 3DES, which requires $2^{112}$ operations to crack the key, although noncomputational schemes of hacking, for example, power analysis [37], have been known. Encryption and decryption processes use the same key and algorithm; however, the subkeys are generated differently. Any legacy or refined model must be golden. The DES/3DES RTL model is first verified stand-alone in loopback mode with several NIST test vectors. The model is further verified with a logic equivalence checker that compares the RTL synthesis and physical synthesis netlists. The mode of DES operation is Electronic Codebook (ECB). Simulation Interfaces. In the crypto unit module, we replace the model description below the port and attribute definitions with a cosimulation interface. The interface transforms the tokens and attributes associated with the TDF input ports into strings. The formatted data is then sent to an open socket at a remote computer that houses the VHDL tools. The external simulator applies the received inputs to the refined VHDL model (DES) of the crypto unit. When the VHDL simulation is finished, the outputs are received in the crypto unit block, which drives them to its output ports. The cosimulation interface is a black box in the module which only provides an interface to the remote simulator, where the refined model will be evaluated. The SystemC AMS and VHDL simulators are configured in a client-server topology on two computers. Distributed cosimulations are often a necessity because tools consisting of dedicated solvers perform best on specific operating systems and different machine architectures. The various layers of applications that play a role in cosimulation are depicted in Figure 4, which shows the interface boundaries in the distributed cosimulation topology. SystemC AMS acts as a master simulator running at the client computer, which connects to the VHDL simulator at the server computer. The executable specification, that is, a SystemC AMS model, may contain analog, digital, and software models. On the server side, a daemon runs which makes use of UNIX processes (parent-child) and system calls (fork()/exec()) to invoke the Cadence tools. Each child process, after calling its particular Cadence tool, transfers control to another process to call the next tool in the suite. The last child process calls the simulator and, after completing the simulation, returns control to the parent process. A shared C library accesses the VHDL objects. The distributed simulation runs in full automation, requiring no user intervention. Two binaries are obtained. The main SystemC AMS description is compiled with the client cosimulating wrapper discussed in subsection V.D.
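The daemon's tool invocation chain reduces to standard POSIX process control. The following is a minimal sketch using fork()/exec(); the tool names, arguments, and install path are hypothetical placeholders, not the actual Cadence command lines used in the paper.

/* Sketch of the server-side daemon: run each tool in a child process,
 * waiting for it to finish before invoking the next one in the suite. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

static void run_tool(const char *path, char *const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {                 /* child: replace image with the tool */
        execv(path, argv);
        perror("execv");            /* reached only if execv fails */
        _exit(EXIT_FAILURE);
    }
    int status = 0;
    waitpid(pid, &status, 0);       /* parent: wait before the next tool */
}

int main(void) {
    /* hypothetical compile -> elaborate -> simulate chain */
    char *compile[]   = { (char *)"ncvhdl", (char *)"des.vhd", NULL };
    char *elaborate[] = { (char *)"ncelab", (char *)"worklib.tb", NULL };
    char *simulate[]  = { (char *)"ncsim",  (char *)"worklib.tb",
                          (char *)"+loadvhpi", NULL };
    run_tool("/tools/bin/ncvhdl", compile);   /* placeholder install path */
    run_tool("/tools/bin/ncelab", elaborate);
    run_tool("/tools/bin/ncsim",  simulate);
    return 0;
}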
Another wrapper runs at the server as a simple program. Both wrappers communicate via TCP/IP sockets. To compute the MoC, the client wrapper passes data tokens and necessary information to the server wrapper, which then sets up the simulator after scrutinizing the received information. Setting up the simulator suite is a major task since a series of tools has to be called in the right order and with a list of command options and arguments. This job can be done in two ways: the wrapper invoking a setup script written in the simulator's native scripting language, or embedding the setup calls in the wrapper as if they would be made from the simulator command line interface. Besides the data token, the information related to MoC evaluation, such as any other signals required in the computation that do not originate from the SystemC AMS description, for example, clock and reset, is also passed. This is because the model to be evaluated on a mature and commercial simulator is finer than the system-level model and thus needs more input signals than simply the input data. Furthermore, the refined models should never be modified for the sake of cosimulation. A description of the crypto model with the cosimulation C wrapper is shown in Algorithm 1. Synchronization. The TDF formalism in SystemC AMS is realized using the blocking read and write of the sc_fifo() primitive channel of SystemC, in which reading from an empty FIFO or writing to a full FIFO suspends the calling process until data are available to read or the FIFO has space for new data. In multirate TDF the FIFO depth is related to the rate attribute of the TDF. The rate value therefore must be realistic, restricted both by the solution of the homogeneous equations of the TDF cluster for determining a valid firing schedule and by machine resources. Thus no tokens are lost in cosimulation.
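The client-side wrapper reduces to ordinary BSD socket code. The following is a minimal sketch of the token exchange; the server address, port, and fixed 64-character framing are illustrative assumptions, not values from the paper, and error handling is omitted for brevity.

/* Sketch of the client wrapper: send a 64-bit word as a '0'/'1' C-string
 * and block until the refined model's output word arrives. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

std::string cosimulate_token(const std::string &bits_in) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                        /* assumed port */
    inet_pton(AF_INET, "192.168.0.2", &srv.sin_addr);  /* assumed server */
    connect(fd, (sockaddr *)&srv, sizeof(srv));

    send(fd, bits_in.c_str(), bits_in.size(), 0);      /* token out */

    char buf[65] = {0};
    recv(fd, buf, 64, 0);                              /* result back */
    close(fd);
    return std::string(buf);                           /* drive TDF output port */
}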
The clocks of the two kernels run completely independently of each other. Their timing states are not passed to each other. In fact, the timing at SystemC AMS is coarse, since the tokens, however large, are sample-accurate or cycle-approximate (the values are guaranteed at the end of the cycle, ignoring timing information during the cycle), sampled at the system clock, whereas on the VHDL side every bit is clocked (cycle precise), making the sample bit-accurate in addition to cycle-accurate. Such a timing contrast is ideal for system-level simulation, where speed is necessary, and for refinement, where problem boundaries can be pinpointed. Once the inputs are available, the cosimulation interface executes. However, the computation of outputs takes some time at the cosimulator, and new tokens remain pending for the next simulation iteration. This latency, by virtue of the nature of TDF dataflow, takes care of the time warp problems in simulations: rolling back [38], lazy re-evaluation [39], cancellation [40], runahead (e.g., the Calaveras algorithm by Synopsys), and the slower lock-step algorithm of selecting the smallest step for forced synchronization. The execution order of each TDF block is preset and the graph is cyclically directed; the cosimulator follows the order determined by the SystemC AMS static scheduler at elaboration [33]. The reaction time creates a temporary deadlock in the TDF cluster because the consuming actor must wait for the producing actor to fire. The deadlock state is cleared as soon as the tokens arrive from the refined model. This deadlock is different than the deadlock caused by cyclic dependency of the balance equations in the TDF cluster system, which is precluded by inserting a finite delay in the cycle [41]. The process is completely asynchronous for the VHDL cosimulator; the only locking is at the master SystemC AMS simulator, which momentarily suspends itself for the cosimulated actor node while it may otherwise be busy solving nodes that have data. The static scheduling vector is applied for every simulation iteration. MoC Computation. The external simulator is handled by a daemon program which communicates with the SystemC AMS master. The daemon validates all incoming tokens and data (length and bit integrity of C-strings) required for the refined MoC computation. This information is transformed into semantics for the VHDL language. For example, the port rate in multirate TDF would be mapped to the number of clock cycles needed to write the output tokens for the corresponding input tokens to maintain the balance in the dataflow graph. Using OS utilities, the daemon runs several child processes, each executing a specific tool of the simulator tool flow, as shown in Figure 5. Since the model is computed in a static fashion, that is, the input token values do not change throughout the simulation, these values can be passed to the VHDL elaborator as C-strings, which will apply them to the model before simulation. If there is more than one input token that needs to be fed periodically to the model, then they need to be input using the C interface by updating the input ports as discussed in Section 6.
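A minimal sketch of such a C-interface access routine is shown below, using the standard VHPI entry points (vhpi_handle_by_name, vhpi_register_cb, vhpi_get_value); the instance path "tb.dut.dout" and the bootstrap function name are hypothetical placeholders, and the sketch is an illustration of the mechanism rather than the paper's library.

/* Register a value-change callback on an output port and read its value
 * as a '0'/'1' C-string whenever it transitions. */
#include <vhpi_user.h>

static void on_value_change(const vhpiCbDataT *cb_data) {
    char buf[65];
    vhpiValueT val;
    val.format = vhpiBinStrVal;          /* read the port as a binary C-string */
    val.bufSize = sizeof(buf);
    val.value.str = buf;
    vhpi_get_value(cb_data->obj, &val);
    /* here the library would time-stamp buf and forward it over the socket */
    vhpi_printf("dout = %s", buf);
}

static void on_start_of_sim(const vhpiCbDataT *cb_data) {
    vhpiHandleT port = vhpi_handle_by_name("tb.dut.dout", NULL); /* placeholder path */
    vhpiCbDataT cb = {0};
    cb.reason = vhpiCbValueChange;       /* fire on every transition */
    cb.cb_rtn = on_value_change;
    cb.obj = port;
    vhpi_register_cb(&cb, 0);
}

static void my_startup(void) {           /* hypothetical bootstrap name */
    vhpiCbDataT cb = {0};
    cb.reason = vhpiCbStartOfSimulation;
    cb.cb_rtn = on_start_of_sim;
    vhpi_register_cb(&cb, 0);
}

/* standard VHPI startup table, loaded by the simulator at startup */
void (*vhpi_startup_routines[])(void) = { my_startup, NULL };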
At a higher level of abstraction, such as the one used for ANDRES, a clock signal is usually not needed for modeling, since much of the behavior can be modeled reasonably without a clock, and a clock makes simulation slow. However, since abstraction granularity is introduced in cosimulation using HDL, a clock becomes inevitable to drive the HDL part of the simulation. A testbench clocks the VHDL model and converts the values of the SystemC AMS semantics of port rate, delay, and so forth, to meaningful VHDL objects if required by the VHDL model; for example, the rate attribute can be used for a counter to pulse an enable signal. The testbench converts C-strings of inputs into equivalent VHDL standard logic types. The testbench is an interface to bridge the abstraction in semantics between the SystemC AMS and VHDL models. Next, the 3DES model is simulated. The model requires no change to the interface between the two simulators. However, the VHDL testbench is modified to provide two additional encryption keys so that all three cipher operations in the cascaded stages have their unique keys. An Initialization Vector (IV) is also required in the testbench to start the process because the 3DES computation depends on the previous computation. The IV is symmetric for encryption and decryption. C Level Access Upon receiving messages, the daemon reacts and executes the simulator at the server as follows. The execv() calls successively invoke the VHDL compiler, elaborator, and simulator. The simulator loads the C library for VHDL module access with the +loadvhpi argument (see Algorithm 2). The data tokens are exchanged between the daemon and the VHDL simulator by sharing the simulator's C interface, IEEE Std 1076c-2007 (VHPI). This task is handled by creating an application for access of VHDL objects (instance, hierarchical components, ports, signals, data). The application is a shared library dynamically linked to the simulator executable and loaded by the particular child process of the daemon. The library contains callback functions for initialization, navigating the VHDL design hierarchy, assigning ASCII IDs to the ports, registering simulation time, adding/removing value change callbacks, and storing output data and their time stamps. The library also tracks the simulation time and makes access to the VHDL outputs as they change values (VCD dump). Although recording values on transitions is an efficient way, our methodology is flexible enough to read off values at any specified times or stops in addition to value changes. The library sends the read data to the SystemC AMS simulator connected through the socket. All ports are monitored, but only the ports of interest are accessed and their outputs saved. The simulation start and stop times needed for the computation are preprogrammed and passed to the simulator by the child process. The designer should know approximately when the outputs are stable, valid, or meaningful to control the simulation. All data needed for MoC computation in the AMS description at the client is converted to C-strings to be accepted by the socket. Therefore the input data and signals (SystemC or SystemC AMS hardware types) are formatted using overloading, static type casting, and other built-in C/C++ constructs. At the server, the C interface reads VHDL ports as C-strings, which are converted back to SystemC/SystemC AMS types in the main AMS simulation. Thus native C-based translation between SystemC/SystemC AMS and VHDL rules out special signals [1] and converter channels [19]. The C-based VHPI and Verilog VPI interfaces might find greater
acceptability in mixed-language cosimulations with the advent of SystemC/SystemC AMS (see Algorithm 3). Simulation and Experimental Results As soon as a token arrives in the crypto unit of the SystemC AMS description, the cosimulation interface is invoked to simulate the DES model. The cosimulation interface on the SystemC AMS side handles the necessary communication and formatting, for example, zero padding to form a 64-bit word for encryption, stripping off the leading zeros of the 64-bit decrypted word, and making sure that all incoming string characters are 1s or 0s and that the sizes of the received and transmitted data conform to the expected buffer lengths. Both DES and 3DES RTL descriptions are cosimulated. The simulation runs in complete automation. The methodology is implemented using the Cadence Incisive Unified Simulator, a mixed-language simulator. The combined simulation is checked in its entirety for long runs and compared to the SystemC AMS stand-alone simulation. The XOR parity-based crypto unit is benchmarked against the refined DES/3DES models for channel effects by plotting BER versus SNR (Figure 6). The errors usually occur in bursts due to intersymbol interference; that is, several adjacent bits are corrupted in the 64-bit words. Figure 7 illustrates scatter plots of the recovered signal with the crypto unit as a simple 9-bit XOR (no cosimulation). Figure 8 compares constellation diagrams of the filtered output for both the DES and 3DES cases. The nonlinearity effect is visible in both amplitude variation and phase jitter. The radial constellations in Figures 7 and 8 indicate noncoherent frequency interference since the transceiver model transmitted data asynchronously. Figure 9 illustrates phase imbalance in the recovered symbols. The nonlinear response of the channel causes distortion, which suggests several improvement options: for example, modeling the channel more realistically, using an appropriate encoding scheme in addition to the cryptography, or introducing multiple levels of amplitudes, for example, M-ASK modulation. On the demodulator side, the important task is to detect the bit by sampling the wave at the end of the symbol interval and comparing with the threshold under noise. Here the designer can experiment with a suitable value of the amplitude threshold for correctly detecting all 64 bits of the DES words. Similarly, for improved detection, coherent ASK demodulation can be employed, taking into account carrier phase information. The user would introduce a PLL in the demodulator, based on the constant sampling rate, to detect a signal synchronized to the transmitter carrier frequency. SystemC AMS provides the essential high-level modeling capabilities for such tasks. In summary, the system performance under secure DES keys can be tuned in envelope detection, receiver amplitude, selection of the carrier frequency, and automatic gain control for a given data rate or SNR. Furthermore, intrusion attack or false authentication behavior can be modeled in the transceiver example for validating DES cryptography. The simulations enable studying ASK/OOK modulation, which is susceptible to threat monitoring and radio frequency interference. Merits of other encryption methods, for example, AES, Blowfish, or Rijndael, can similarly be evaluated.
Understanding these behaviors is vital for hw/sw codesigners who need to accommodate tolerances in their designs. The problem, however, with all these modeling levels is the huge amount of data produced in sampling a continuous time wave, which must be accumulated and segmented to be applied to the cosimulation interface for reasonable simulation speed and efficiency. These analog models will be simulated next in the framework, and the methodology will be modified accordingly to handle the data (setting thresholds or writing to files). The presence of floating-point data types further exacerbates the problem of embedding a refined analog model into a high-level system description. Pure mixed-signal models would also demand a different solver, that is, a tool change, for example, Cadence AMS Designer or Synopsys Saber, and therefore modification to the simulation interface and data access mechanism as well. The SPICE netlists would have to be wrapped in HDL/HDL-AMS descriptions to enable C level access with VHPI/VPI. The analog refinement process would require careful and qualitative evaluation because refined analog models are conservative: they change the overall dynamics, defined by ordinary differential or differential algebraic equations, of the system in which they embed. These simulations will be inherently slow and would deal with the typical convergence problems. SystemC AMS to Cadence analog cosimulation issues have been highlighted in [30]. Conclusion Complex SoC designs are now being specified by executable specification in addition to conventional written requirements. SystemC AMS is a suitable simulator for this task. However, the SystemC AMS abstraction level is inadequate for subsystem-level, for example, refined models. This limitation in the SystemC AMS kernel can be overcome by cosimulating refined models in specialty simulators which are invoked and controlled at the module entry point of the SystemC AMS model. A mixed analog and discrete simulation framework has been presented that capitalizes on the simplicity and open architecture of SystemC AMS as the master simulator and its synchronization layer, which eliminates the overhead of a common backbone and the custom synchronizing schemes typical in simulator coupling. The framework has been demonstrated by replacing a simple parity-based cryptographic unit with a refined (RTL) model of the DES encryption standard, cosimulated with the Cadence Incisive Unified Simulator. To evaluate robustness, the framework is applied for two instances of refinement in a single system-level model. The simulation results confirm that the framework can handle complex HDL models because, in the absence of noise, the cosimulated HDL models of encryption and decryption match the pure SystemC AMS simulation of the abstract parity checker. The framework has C-based interfaces and can be used with any EDA vendor's tool compliant with the standard C interface, for example, Synopsys and Mentor Graphics. There is capacity in the framework to simulate analog models of HDL-AMS and SPICE wrapped around a C interface. The simulation interfaces are abstract and can be written by designers who are not application programmers. The refined models are inserted directly in the executable specification and then analyzed. For SoC designers this is an enormous advantage because implementation-specific details and performance can be viewed at the system definition. The methodology aims at meriting system concepts, exploring architectures, and bridging the implementation gap. Figure 3: ASK transceiver model and simulation.
Figure 9: Phase difference at the output of the LPF for the 3DES case (LO frequency = fc).

Table 1: Degree of user modification in the framework. Table 1 describes the ease of using the cosimulation framework. The C access library ought to be generic enough to be imported into a variety of designs, for example by passing the model instance and simulation times. Execution times depend on the setup (languages, tools called, DLLs loaded, speed of the testbench, etc.), CPU load, and network speed, among others. We illustrate execution times for various models in Figure 10 (client: AMD Athlon 2.10 GHz running Ubuntu; server: Intel Xeon 3.39 GHz running Debian GNU/Linux). These nonabsolute times reflect full-scale automation, including compilation, elaboration, and simulation.
2014-10-01T00:00:00.000Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "001e6d7b9b4ef396d9b1e2cb166e9740f09f39b1", "oa_license": "CCBY", "oa_url": "https://uwe-repository.worktribe.com/preview/979828/489365.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "2c7eb36b6ffbd9965a77ff336f64e063ea7ac245", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
208392220
pes2o/s2orc
v3-fos-license
Umbilical squamous papilloma: A case report A 21-year-old Caucasian male presented with a 2-year history of an asymptomatic mass in the umbilicus. The patient stated that the lesion had increased in size gradually; he was therefore complaining of cosmetic disfigurement. However, he had not received any treatment previously. The past medical history and family history were both unremarkable. The patient denied any trauma to the umbilical region. Sir, A 21-year-old Caucasian male presented with a 2-year history of an asymptomatic mass in the umbilicus. The patient stated that the lesion had increased in size gradually and was therefore complaining of cosmetic disfigurement. However, he had not received any treatment previously. The past medical history and family history were both unremarkable. The patient denied any trauma to the umbilical region. The physical examination of the patient revealed a skin-colored, pedunculated plaque with a verrucous surface in the umbilicus, measuring 1.5 x 1 cm in size (Fig. 1). The lesion was removed surgically under local anesthesia to reach a definitive diagnosis. Histopathological evaluation of the specimen revealed multiple finger-like projections supported by central fibrovascular cores and covered by a stratified squamous epithelium. The longest diameter of the polypoid lesion was 1.5 cm, whereas the base of the lesion measured 0.4 x 0.3 cm. No evidence of malignancy or koilocytic changes in the epithelium was observed. The diagnosis of squamous papilloma was made based on the clinical and histopathological features (Fig. 2). Squamous papilloma is a benign neoplastic proliferation with a finger-like morphology that usually affects the skin, cervix, breast duct, respiratory tract, and gastrointestinal tract. Human papillomavirus (HPV) infection plays a role in the etiology of squamous papilloma [1]. It has been suggested that the keratin 15-containing stem cell population of the hair follicle might contribute to the development of squamous papilloma (Li et al.). Terada reported a 46-year-old female patient with a squamous papilloma measuring 13 mm in size on the scalp, arising from an epidermal cyst. The squamous papilloma showed intracystic growth with fibrovascular cores. The tumor was benign in character, and no atypia was observed. Immunohistochemical evaluation showed no association between the squamous papilloma and HPV. Terada reported that squamous papillomas can arise in epidermal cysts; therefore, squamous papilloma should be differentiated from trichilemmal tumor and proliferating trichilemmal cyst [3]. Squamous papilloma of the umbilicus is a rare condition. Vijayabhaskar et al. reported a 47-year-old female patient with microinvasive squamous cell carcinoma of the cervix and a concurrent benign squamous papilloma in the umbilicus [4]. Nathan reported the coexistence of a squamous papilloma arising from the umbilicus and genital warts and suggested a possible role of autoinoculation from the genital region to the umbilicus during bathing [5]. Verrucous carcinoma is a differentiated type of squamous cell carcinoma that presents as a slow-growing warty papule. Differentiating squamous papilloma from verrucous carcinoma is crucial, as verrucous carcinoma can lead to local invasion and metastasis. The risk of misdiagnosis increases in laryngeal lesions and in cases where the biopsy specimen is small [6]. In addition, Kim et al.
reported a 20-year-old female patient with condylomata lata in the umbilicus and perineum and mucous patches on the lips. Even though it is rare, syphilis should be kept in mind in the differential diagnosis of verrucous nodules in the umbilicus [7]. In conclusion, the patient presented here had a lesion diagnosed as squamous papilloma in the umbilicus, which is an unusual localization for squamous papilloma. Possible initiating factors, such as HPV infection and chronic irritation, and rare diseases, such as verrucous carcinoma and syphilis, in the differential diagnosis of squamous papilloma are highlighted through this case report. Consent The examination of the patient was conducted according to the Declaration of Helsinki principles.
2019-10-03T09:06:04.744Z
2019-10-03T00:00:00.000
{ "year": 2019, "sha1": "2bae97116de579b8c3b8c68b687b9334e0844c6f", "oa_license": "CCBY", "oa_url": "http://www.odermatol.com/odermatology/20194/30.Umbilical-YukselME.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a918dbae8d1429eaa47b78bb648219191e61ac55", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253291945
pes2o/s2orc
v3-fos-license
The Knowledge, Attitude and Practice of Fixed Prosthodontics: A Survey Among Dental Practitioners in Eastern Nepal Introduction: The aim of this study was to evaluate the knowledge, attitude, and fixed prosthodontics practice guidelines among dental practitioners of the eastern part of Nepal. Materials and Methods: A descriptive cross-sectional study was done among dental practitioners of the eastern part of Nepal practicing in private clinics and dental schools. A total of 250 dentists were selected randomly from private and public sectors and dental schools. A survey was conducted through a printed and online standard questionnaire with 18 open-ended as well as multiple-choice questions delivered to dental practitioners. All data were collected and coded, and the statistical analysis was done using the SPSS statistical software package. Descriptive statistics were used for data analysis. Result: The study showed that 167 (66.8%) respondents were males while 83 (33.2%) were females. 80 (32%) of the dentists had been practicing crown and bridge work for 1-3 years, 88 (35.2%) for 4-10 years, 38 (15.2%) for 11-15 years, and 44 (17.6%) for more than 16 years. Most of the respondents, 175 (70%), worked in private clinics. 90 (36%) of the participants fabricated study models before commencing fixed prosthodontic treatment. 190 (76%) of the participants always used radiographs for abutment tooth evaluation. Vitality tests for restored abutments were always done by 115 (46%) respondents. The majority of respondents, 200 (80%), used high-speed handpieces, and 130 (52%) used diamond burs during tooth preparation. While preparing teeth for dental prostheses, addition-cured silicone was used by most of the practitioners, 110 (44%), for making the final impression, with putty-and-wash techniques used by 183 (73.2%). 165 (66%) participants used wax for bite registration, 100 (40%) of respondents always used retraction cord, and 108 (43.2%) practitioners never gave provisional crowns and bridges. Both written prescriptions and verbal communication were used between dentist and laboratory by 175 (70%) respondents. Conclusion: The dental practitioners of the eastern part of Nepal displayed an acceptable level of knowledge and awareness regarding fixed prosthodontics practice. However, to further enhance efficiency and performance, an effort should be made to update their knowledge by conducting CDE on recent advances in dentistry and dental practice. Introduction Teeth play a remarkable role in the maintenance of a healthy personality and self-image of an individual. 1 Tooth loss is a psychologically disturbing experience and is considered a significant event in the life of a person, often requiring psychological, biomechanical, and social readjustment. 2,3 Fixed prosthetic treatment restores the form, function, and aesthetics of the damaged or lost dentition and has been the most preferred modality of therapy. 2,3 It provides exceptional satisfaction for both the patient and the dental practitioner and transforms an unhealthy, unattractive dentition with poor function into a comfortable, healthy occlusion capable of providing years of service.
4 Following diagnosis and treatment planning, an FPD should be fabricated with meticulous preparation of the abutment teeth, appropriate soft tissue management, precise impression recording of the prepared and unprepared surfaces of the abutment, adequate temporization, critical evaluation of fit at the metal trial, and proper occlusion during cementation. 3,5,6 This study was conducted to evaluate the knowledge, attitude, and fixed prosthodontics practice guidelines among dental practitioners of the eastern part of Nepal. Materials and Methods This descriptive cross-sectional study was done among dental practitioners in the eastern region of Nepal. Dentists were selected randomly from private and public sectors through simple random sampling. The study was approved by the Ethical Committee of Nobel Medical College and Teaching Hospital, Biratnagar, Nepal. A total of 250 dentists participated in this study. The survey was conducted through a printed and online standard questionnaire with 18 open-ended as well as multiple-choice questions delivered to dental practitioners. The questionnaire was prepared in the English language. The questionnaire comprised questions to assess the knowledge, attitude, and practice of fixed prosthodontics among dental practitioners in the eastern part of Nepal practicing in private clinics and dental schools, adapted from Kannan et al. 5 All the respondents were informed about the aims and objectives of the study. After obtaining their consent to participate, the questionnaires were distributed. Adequate time was provided to fill in the questionnaire. The responses of the practitioners were recorded, analyzed for flaws, checked for completeness, and taken up for assessment. The questionnaire consisted of two parts. The first part recorded gender, level of education, nationality, place of work, and number of years of practicing experience. The second part evaluated the knowledge of standard guidelines to be followed by the practitioner in prosthodontics practice, such as pre-treatment vitality tests, radiographic evaluation, type of tray used, type of impression, impression material, and quality of communication with the dental laboratory technician. After the data were collected and coded, the statistical analysis was done using the SPSS statistical software package. Descriptive statistics were used for analysis. 190 (76%) of the respondents always used radiographs for abutment tooth evaluation, 30 (12%) used them often, 20 (8%) used them rarely, and 10 (4%) never used any radiograph before starting treatment [Table 2]. Vitality tests for the restored abutments were always done by 115 (46%) respondents, 85 (34%) used them often, and 22 (8%) never used them on a regular basis [Table 2]. The majority of the respondents [200 (80%)] used high-speed handpieces, and 50 (20%) used both high- and low-speed handpieces during tooth preparation. More than 50% of the dentists used diamond burs during preparation, while 48% used both carbide and diamond burs [Table 2]. Both written prescriptions and verbal communication were used between the dentist and the laboratory technician by 175 (70%) respondents, while 50 (20%) provided only written instructions [Table 2]. Discussion The present survey showed that 90 (36%) of the participants fabricated study models routinely before starting treatment, and 190 (76%) of the participants always used radiographs for abutment tooth evaluation.
Vitality tests for restored abutments were always done by 115 (46%) respondents. The aim of the study of Moldi et al. (2013) was to integrate impression techniques for fixed partial dentures that have evolved over the years and to determine the techniques and materials used by practitioners; they found that 29% of practitioners did not take diagnostic impressions and proceeded with tooth preparation after the clinical intraoral examination. 7 Mohamed et al. (2010) noted unacceptable practice in crown and bridge work: the majority of the surveyed practitioners rarely used study casts (38.1%) or radiographs (35.6%) for the abutment tooth, and sixty-eight (46%) of the surveyed dental practitioners never used vitality tests for the abutment tooth. 8 The results of the present study revealed that addition silicone impression material was used most, 110 (44%), for making the final impression, followed by condensation-cured silicone, 80 (32%), while 16 (40%) preferred to make the final impression using alginate. This is in contrast to the results of a questionnaire undertaken in Maharashtra state (2016), where 44% of participants used irreversible hydrocolloid, 26% used condensation silicone, 23% used addition silicone, 5% used polyether, and 2% used polysulfide impression material. 9 A similar study conducted in Khartoum showed that alginate, 101 (68.2%), was the most commonly used type of impression material by the surveyed dental practitioners, while condensation silicone, 36 (24.3%), and addition silicone, 11 (7.4%), were also selected. 8 In another study conducted in India (2013), it was found that 55.46% used irreversible hydrocolloid and 44.54% used elastomeric impression materials to make the final impression. 7 Regarding the impression technique used for the final impression, putty-wash techniques were used most by dentists who used elastomeric impression material, 183 (73.2%), in the present study. Amruta et al. found that the impression technique practiced most commonly was the single-mix technique (48%); 28% used putty reline without spacer, 20% used putty reline with spacer, and 3% used the multiple-mix technique. 9 Another study found that the elastomeric impression technique practiced most commonly was putty reline with/without spacer (77.2%). 7 A similar study done in Khartoum state showed that the putty-and-wash impression technique was the most recommended technique, selected by 38 dental practitioners (80%). 8 Regarding the use of a retraction cord before taking the final impression, Gadhavi et al. evaluated the use of various gingival displacement techniques prior to impression making in fixed partial dentures by the prosthodontists in Vadodara; their results showed that 62% preferred the use of a gingival displacement technique for successful clinical practice, while 38% did not follow the procedure, believing it does not make a major difference in clinical practice. 10 Moldi et al. also found that 72.8% of practitioners used gingival retraction cord. 7 On the other hand, in the Khartoum survey only 9.4% used retraction cord, while 53.7% of the surveyed dental practitioners never adopted the use of retraction cords, 8 whereas in the present study 100 (40%) of respondents always used retraction cord and 15 (3.6%) never used retraction cord. Regarding the use of inter-occlusal records,
Maru et al. aimed to gather information on the selection, usage, materials, and methods employed in inter-occlusal records and on their communication with the dental laboratory for restorative procedures practiced by dentists; their results showed that a significant number of dental practitioners (79%) used inter-occlusal recording materials for the fabrication of crown and bridge work, and the most commonly used inter-occlusal recording material was wax (54.6%). 11 Wax was also the most popular registration material in the Khartoum survey, being selected by 100 dental practitioners (94.3%), followed by silicone, 5 (4.7%), and silicone putty, 1 (0.9%). 8 In the present study, the majority of respondents, 210 (84%), always took a bite registration for multiple-teeth replacement, and wax was the most used material for bite registration, 165 (66%). Prevention of cross-infection in dental practice in general, and in the dental laboratory specifically, should now be routine practice. In Khartoum state, 73% of the surveyed dental practitioners never disinfected the impression before sending it to the dental laboratory, and the authors recommended providing continuing dental education programmes for all dental practitioners, especially in the practice of crown and bridge work. 8 In the present study, 178 (71.2%) of respondents disinfected the final impression chemically before pouring it and sending it to the laboratory. Many studies have demonstrated concerns about the quality of dentist-technician communication. Poor communication between dental practitioners and dental technicians for fixed prosthodontics has been cited; the study conducted in Khartoum showed that both verbal and written prescriptions (54%) were selected as the communication method between dental practitioners and technicians. 9 A survey conducted in Riyadh by Tulbah et al. evaluated the quality of communication between dentists and dental technicians; their results showed that this communication can sometimes be inadequate and that governmental laboratories have a lower level of communication. 12 Another study, conducted in Qassim by Sedky in 2014, found a lack of communication between prosthodontists and their dental technicians, reporting a significant nonconformity of views between dental technicians and prosthodontists. 13 In contrast, the present study showed that dentists in eastern Nepal, 175 (70.1%), communicated well with the laboratories by giving both written and verbal instructions. Conclusion The dental practitioners practicing in the eastern part of Nepal displayed an acceptable level of knowledge and awareness of fixed prosthodontics practice. However, to further enhance proficiency, efforts should be made to encourage practitioners to stay aware of advances in fixed prosthodontics practice through state-of-the-art continuing education programmes.
2022-11-04T18:09:56.685Z
2021-12-31T00:00:00.000
{ "year": 2021, "sha1": "d8c32e12ec82d84dae39beff75d47a97756d5371", "oa_license": "CCBY", "oa_url": "https://www.nepjol.info/index.php/jnprossoc/article/download/48364/36615", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "748f872a1b864b1505df9d3d71510a858d39f21a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
234340197
pes2o/s2orc
v3-fos-license
Gravitational wave signatures of black hole quasi-normal mode instability Black hole (BH) spectroscopy has emerged as a powerful approach to extract spacetime information from observed gravitational wave (GW) signals. Yet, quasinormal mode (QNM) spectral instability under high wave-number perturbations has recently been shown to be a common classical general relativistic phenomenon [1]. This requires assessing its impact on the BH QNM spectrum, in particular on BH QNM overtone frequencies. We conclude: i) perturbed BH QNM overtones are indeed potentially observable in the GW waveform, providing information on small-scale environmental BH physics, and ii) their detection poses a challenging data analysis problem of singular interest for LISA astrophysics. We adopt a two-fold approach, combining theoretical results from scattering theory with a fine-tuned data analysis of a highly accurate numerical GW ringdown signal. The former introduces a set of effective parameters (partially relying on a BH Weyl law) to characterise QNM instability physics. The latter provides a proof of principle demonstrating that the QNM spectral instability is indeed accessible in the time-domain GW waveform, though certainly requiring large signal-to-noise ratios. Particular attention is devoted to discussing the patterns of isospectrality loss under QNM instability, since the disentanglement between axial and polar GW parities may already occur within the near-future detection range. Introduction: Are all black-hole vibrational modes observable in gravitational-wave astronomy? What astrophysical/fundamental physics information do they really convey? Small environmental perturbations are not expected to radically disrupt the underlying BH spacetime, given the confidence in BH dynamical stability. Yet, instabilities seem intrinsic to the theory at the spectral level [1,20,21]. Fluctuations may alter significantly the QNM spectrum itself, with stronger effects on the high overtones [1]. Since recent GW events hint towards the detectability of more than one mode [12-14,22], addressing our opening questions is paramount for the correct interpretation of future GW observations. BH perturbation theory is described via a non-conservative system, with energy leaking inside the BH and propagating out to the wave zone. The equations are described by non-self-adjoint operators, in a common framework across classical and quantum systems [23]. The notion of pseudospectra, recently introduced into gravity [1], allows one to identify spectral instabilities in non-conservative systems [23-26]. Like a topographical map, the pseudospectrum contour level with value $\epsilon$ delimits the region in the complex plane where QNMs can migrate when the system undergoes perturbations of this order. The spectrum is stable if such $\epsilon$-contour lines lie within a distance $\epsilon$ from the original spectrum. Spectral instability follows when the contour lines extend into a large region of the complex plane. Remarkably, the latter arises in BH physics [1,20,21,27] (see fig. 1). QNM instability: To trigger the instabilities, ref. [1] introduced an ad hoc modification $\epsilon\,\delta V_k$ ($\epsilon \ll 1$) into the potential governing the dynamics of GWs on a spherically symmetric BH spacetime. When having a sinusoidal profile in the radial direction, $\delta V_k$ mimics a Fourier mode from a realistic potential, and it captures the contribution of small- and large-scale perturbations via a wave-number $k$. Fig. 1 reproduces the overtone instability for $k \gg 1$ [1].
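As a concrete illustration of this kind of perturbed potential, the following C++ sketch evaluates the standard Regge-Wheeler potential for gravitational (s = 2) perturbations plus an ad hoc sinusoidal term $\epsilon \sin(kr)$. Taking the sinusoid in the areal radial coordinate, and the values M = 1 and l = 2, are assumptions of this sketch, not specifications from the paper.

```cpp
#include <cmath>
#include <iostream>

// Regge-Wheeler potential for gravitational (s = 2) perturbations.
double reggeWheeler(double r, double M, int l) {
    const double f = 1.0 - 2.0 * M / r;
    return f * (l * (l + 1) / (r * r) - 6.0 * M / (r * r * r));
}

// Ad hoc sinusoidal perturbation eps*sin(k*r), mimicking a single Fourier
// mode of an environmental fluctuation (radial coordinate choice assumed).
double perturbed(double r, double M, int l, double eps, double k) {
    return reggeWheeler(r, M, l) + eps * std::sin(k * r);
}

int main() {
    // eps = 1e-3 and k = 10 as quoted in the text; M = 1, l = 2 assumed.
    for (double r = 2.5; r <= 10.0; r += 2.5)
        std::cout << "r = " << r << "  V = " << perturbed(r, 1.0, 2, 1e-3, 10.0) << '\n';
}
```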
Fig. 1 (caption): One observes a branch sharing the tendency set by the pseudospectra [1] (blue circles) and new internal modes (blue squares), with a decay scale similar to some QNMs in the branch but a lower oscillatory frequency. Here, $\epsilon = 10^{-3}$ and $k = 10$.
Fig. 2 (caption): Left panel: regimes of axial (red)-polar (blue) isospectrality loss ($k = 10$ and $\epsilon = 10^{-15}, 10^{-18}$). In the stable regime (top inset), the lower QNMs are not affected by the instability; thus, axial and polar QNMs differ at order $\epsilon$. The "w-mode" regime (right inset) strongly separates axial and polar QNMs into an alternating pattern. In the "Nollert-Price" regime (left inset), axial and polar QNMs again differ only at order $\epsilon$, despite the wider branch opening. The transition between the regimes occurs close to internal modes.
We stress the appearance of: (i) branches opening similarly to the pseudospectra lines (blue circles), dubbed "Nollert-Price" branches [28,29] by ref. [1]; (ii) modes (blue squares) inside the region fixed by the "Nollert-Price" branches, named here "internal modes". Specific values for the perturbed QNMs depend on the particular model for the environmental effects or on modifications of the gravity theory. Yet, the opening pattern observed in fig. 1 is rather generic, which raises the need for a research program aiming at understanding the GW observational implications of such QNM instabilities. The challenge lies on several fronts. On the theoretical side, apart from modelling the specificities of the local environmental astrophysics, or extending gravity beyond General Relativity (e.g. [30,31]), GW astronomy shall profit from formal results in the theory of scattering resonances [32-42]. The perturbed QNM patterns reflect features agnostic to the model under consideration. From the data analysis side, one may need enhanced detection pipelines so that the features displayed in fig. 1 are not overlooked, if present. Effective parameters: We initiate this research line on the theoretical side by adapting to gravity further results from the theory of scattering resonances. Consistently with scattering theory [32-38], our numerical analysis demonstrates the logarithmic asymptotics of the pseudospectra contour lines (cf. also [1,27]). Moreover, "Nollert-Price" QNM branches open up in the complex plane in qualitatively similar patterns [1]. Which information lies in the asymptotics? Currently, the question is not an observational one, as it would require an ideal instrument able to detect QNM overtones $\omega_n$ with $n \gg 1$. It is rather of structural nature: asymptotics do offer a guideline to identify the relevant patterns in the phenomenon [43]. If the dynamics of the physical scenario is dictated by potentials with discontinuities at some $p$th derivative (i.e., of class $C^p$), then the spectral asymptotics must reach exactly the logarithmic boundaries of the pseudospectra. The real $\omega_n^R$ and imaginary $\omega_n^I$ parts follow "Regge QNM branches" [44,45] for $n \gg 1$, schematically
$\omega_n^R \sim \frac{\pi}{L_R}\, n\,, \qquad \omega_n^I \sim \frac{1}{L_R}\big(\gamma \ln \omega_n^R - \ln S\big)\,, \qquad (1)$
consistent with the effective parameters defined below. Reverberations within chambers with a length scale $L_R$ are the mechanism behind the opening of the spectra into such log-branches [44-46]. These are modulated by 'regularity' ($\gamma$) and 'strength' ($S$) parameters. This behaviour is found, for instance, in BH-like potentials [28,29,47,48], and in the w-modes of (a class of) neutron stars [46,49,50]. Detecting QNMs obeying eq. (1) would be a strong indication of an underlying low-regularity ($C^p$) potential.
This feature of the $n \gg 1$ QNM pattern suggests the introduction of a set of effective parameters: a reverberation length scale $L_R := \pi/|\Delta\omega_n^R|$; a 'small-scale' structure parameter $\gamma := L_R\,\Delta\omega_n^I/\Delta\ln\omega_n^R$; and a perturbation strength $\ln S_n := \gamma \ln(\omega_n^R) - L_R\,\omega_n^I$ [51] (a minimal numerical sketch of these parameters is given after the discussion of the regimes below). Rigorous results for the spectral distribution (within the $\epsilon$-pseudospectra region) for smooth potentials are less sharp, but QNMs must always lie above the logarithmic curves. We conjecture that the QNMs reach the log-curves in the large-$k$ wave-number limit. Supporting this statement, we introduce $G_n = \omega_n^R/|\omega_n|$ to measure the opening. This is a different representation of the so-called quality factor $Q_n$ (e.g. [52]; cf. [53] in BH QNM physics). The Schwarzschild QNM asymptotics [54] gives $G := \lim_{n\to\infty} G_n = 0$, whereas eq. (1) yields $G = 1$. Fig. 2's upper-left panel shows the monotonic increase of $G \in [0, 1]$ (for several $\epsilon$'s) with $k$. The tendency $G \to 1$ as $k \to \infty$ is a strong indication that the pseudospectra's log-boundaries are attained in the large wave-number limit. The Weyl law remains valid for perturbed BH potentials, and $L_W$ is always robustly defined (fig. 2's lower-left panel). The changes in $L_W$ are not related to the branch opening; indeed, we observe $|\Delta\omega_n|$ constant along the branches. Rather, the apparent 'phase transition' with 'order parameter' $L_W/L_W^{\rm Sch}$ shifting from 1 to $O(3)$ results from an increase of internal QNMs. Hence, the measure $G$ and the Weyl length scale $L_W$ are complementary to each other, as they assess precisely the two novel aspects of the QNM instability [66]: $G$ accounts for the "Nollert-Price" branches opening in the complex plane, whereas $L_W$ measures the presence of internal modes. Isospectrality loss: Another outcome of the QNM overtone instability is the distinction between axial and polar GW parities. While both QNM spectra coincide for the Schwarzschild BH, parity disentanglement is a natural consequence when the system is slightly perturbed [1]. We observe the existence of three regimes of isospectrality loss (fig. 2): (i) Stable region: A relatively low wave-number $k$ and a small perturbation amplitude $\epsilon$ do not trigger the instability in the first few QNM overtones. The perturbed polar/axial QNMs stay at a distance of order $\epsilon$ from their original values. The isospectrality loss is then of the same order as the perturbation, i.e. $|\omega_n^{\rm axial} - \omega_n^{\rm polar}| \sim C_n(k)\,\epsilon$. The function $C_n(k) \sim C_n (k + k_n)^{\alpha_n}$ has model-dependent constants $C_n$, $k_n$, and $\alpha_n$. As $\epsilon$ or $k$ increases, the stable behaviour is observed by fewer and fewer overtones, eventually reducing only to the fundamental mode. Near-future observations shall measure both parities of the fundamental QNM, which may discriminate this effect from other physical mechanisms behind isospectrality loss (e.g. [16,67]). (ii) Alternating axial/polar "w-modes": Moving to higher overtones, the parities drastically separate when the QNM instability first occurs. QNMs of different parity place themselves in an alternating pattern along the branch, as neutron star "w-modes" do [4,49]. Isospectrality loss is most accessible here, with BHs acting as compact-star mimickers. We observe $\omega_n^R \sim \ln(\omega_n^I)$, $\omega_n^I \sim n$ (cf. the contrast with eq. (1)). As $\epsilon$ or $k$ increases, this regime descends in the complex plane towards the first overtones, eventually overcoming the previous stable region. (iii) Nollert-Price regime: In this third regime, the QNMs migrate further away from the unperturbed ones. We observe the branches obeying $\omega_n^I \sim \omega_n^R \sim n$; the QNM instability (assessed by the opening of the branch) is stronger than for the alternating "w-modes". Yet, the isospectrality loss is once again linear in $\epsilon$, as in the stable regime (i). The mechanism behind this result is unclear. This regime is the dominant one when $\epsilon$ or $k$ is sufficiently large, as it overtakes both (i) and (ii). Interestingly, the transition between the three sectors seems to occur precisely upon the appearance of an internal mode. New regimes in far asymptotic regions are not excluded, but their numerical study is challenging.
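As referenced above, here is a minimal C++ sketch computing the effective parameters $L_R$, $\gamma$, $\ln S_n$ and the opening measure $G_n$ from two consecutive overtone frequencies, directly from the definitions in the text. The two sample frequencies are placeholders, not values from the paper, and the convention $\omega = \omega^R + i\,\omega^I$ with $\omega^I > 0$ (decay) is assumed.

```cpp
#include <cmath>
#include <complex>
#include <iostream>

int main() {
    // Placeholder consecutive overtones omega_n and omega_{n+1} (not from the paper).
    const std::complex<double> w_n(1.0, 0.8), w_np1(1.6, 1.1);
    const double pi = std::acos(-1.0);

    const double dwR   = w_np1.real() - w_n.real();
    const double dwI   = w_np1.imag() - w_n.imag();
    const double dlnwR = std::log(w_np1.real()) - std::log(w_n.real());

    const double L_R   = pi / std::abs(dwR);            // reverberation length scale
    const double gamma = L_R * dwI / dlnwR;             // 'small-scale' structure
    const double lnS_n = gamma * std::log(w_n.real())
                       - L_R * w_n.imag();              // perturbation 'strength'
    const double G_n   = w_n.real() / std::abs(w_n);    // branch-opening measure

    std::cout << "L_R = " << L_R << ", gamma = " << gamma
              << ", ln S_n = " << lnS_n << ", G_n = " << G_n << '\n';
}
```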
We observe the internal QNMs to be very parity-sensitive, with values around the first overtone already for moderate wave-number perturbations. Since the QNM instabilities are not restricted to the asymptotic behaviour of the QNM overtones, novel features might already be within the near-future detection range. The next section initiates the discussion from a simple data analysis perspective by measuring the perturbed QNMs within a numerical GW time signal. Throughout the section, unbarred quantities and results displayed in red will denote dynamics under the unperturbed potential; equivalently, barred symbols (with results in blue) refer to dynamics under the perturbed potential. Data analysis: Because the discussed QNM instabilities are restricted to the overtones [1], one does not expect to see their effect in a time signal by a mere "naked-eye" study. Indeed, the (stable) fundamental mode typically dominates the dynamics. Also, the example in fig. 1 is rather conservative, in the sense that the instability is triggered only for overtones with $n \geq 2$. But interestingly, the perturbed spectra show an internal mode $\bar\omega_2$ near the first overtone: ${\rm Im}(\omega_1) \sim {\rm Im}(\bar\omega_2)$. As a proof of principle for eventual realistic detections (with large signal-to-noise ratio), we simulate here an ideal ringdown signal. The goal is to assess the detectability of the two classes of perturbed modes (Nollert-Price and internal modes). For this purpose, we solve the usual unperturbed Regge-Wheeler wave equation [44], as well as its perturbed version [1] with $\epsilon = 10^{-3}$ and $k = 10$. The solutions are obtained with the highly accurate code from ref. [68], which ensures that the numerical noise is at machine round-off error.
Fig. 3 (caption): The spectral instability of Fig. 1 (here, the inset shows the first four QNMs with their labelling) is not apparent due to the stability of the fundamental mode. Prony's method infers the first overtones and detects the underlying differences in the spectra (Table I).
The overall qualitative behaviour is independent of the initial data (ID). We use the so-called polynomial ID [69], as they ensure a QNM spectral decomposition for all times [69]. Fig. 3 shows the GW time evolution at the BH horizon H and at future null infinity I+ on the Schwarzschild background under the unperturbed (red) and perturbed (blue) Regge-Wheeler potential. As expected, the bare signals look indistinguishable, and we resort to Prony's method to measure the QNMs [70]. Table I compares the theoretical QNM values against those from the Prony fitting [71]. We can infer three modes in both cases, but significant digits are lost on the overtones. Though the accuracy suffices to distinguish the unperturbed from the perturbed spectra, the method is oblivious to the internal mode. We stress that the time signal $\Phi_{\rm evol}(t)$ follows from the direct integration of the underlying wave equation with a given ID.
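As an aside, the following self-contained C++ sketch illustrates the basic idea behind the Prony fitting used above: a real damped sinusoid is a pair of complex-conjugate exponentials, so a second-order linear prediction recovers one complex QNM frequency from uniformly sampled data. The sampling step and test frequency are placeholders; the paper's actual pipeline is more elaborate.

```cpp
#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

int main() {
    // Synthesize a single damped mode x[n] = e^{-wI*t} cos(wR*t) as test data.
    const double dt = 0.1, wR = 0.75, wI = 0.18;   // placeholder values
    std::vector<double> x;
    for (int n = 0; n < 200; ++n)
        x.push_back(std::exp(-wI * n * dt) * std::cos(wR * n * dt));

    // Least-squares linear prediction x[n] = -a1*x[n-1] - a2*x[n-2],
    // solved via the 2x2 normal equations (Cramer's rule).
    double s11 = 0, s12 = 0, s22 = 0, b1 = 0, b2 = 0;
    for (std::size_t n = 2; n < x.size(); ++n) {
        s11 += x[n-1]*x[n-1]; s12 += x[n-1]*x[n-2]; s22 += x[n-2]*x[n-2];
        b1  -= x[n]*x[n-1];   b2  -= x[n]*x[n-2];
    }
    const double det = s11*s22 - s12*s12;
    const double a1 = (b1*s22 - s12*b2) / det;
    const double a2 = (s11*b2 - s12*b1) / det;

    // Characteristic root z = e^{i*omega*dt}, hence omega = log(z)/(i*dt);
    // with this convention Im(omega) > 0 is the decay rate.
    const std::complex<double> disc =
        std::sqrt(std::complex<double>(a1*a1 - 4.0*a2, 0.0));
    const std::complex<double> z = (-a1 + disc) / 2.0;
    const std::complex<double> omega = std::log(z) / std::complex<double>(0.0, dt);

    std::cout << "recovered omega_R = " << omega.real()
              << ", omega_I = " << omega.imag() << '\n';
    return 0;
}
```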
Alternatively and independently, a spectral analysis of the QNMs yields the proxy signal $\Phi_N^{\rm proxy}(t)$ built from the QNM frequencies and amplitudes (cf. eq. (A1) in the Supplemental Material). Such a "frequency domain" approach permits us to tighten our assessment of the spectral instability within the dynamical waveform, in particular by focusing on the internal mode $\bar\omega_2$. Namely, we use a semi-analytical tool [69,72,73] to measure the excitation coefficients $A_n$. A close look at the excitation factors reveals that the internal mode $\bar\omega_2$ is very mildly, but unmistakably, excited (see Supplemental Material): perturbed QNMs are therefore indeed present in the perturbed GW signal. With the employed ID we get $\bar A_2 \sim 10^{-3}$, whereas $\bar A_0 \sim \bar A_1 \sim \bar A_3 \sim 10^{-1}$. The fainter signal explains why Prony's method bypasses this mode, while its background noise spoils the accuracy of $\bar\omega_1$ and $\bar\omega_3$. An important open question is whether more realistic ID would excite the internal modes more effectively. Discussion: The QNM overtone instabilities described in ref. [1] are present in GW waveforms, directly impacting the future of BH spectroscopy. Assessing whether the instabilities are purely theoretical predictions or whether realistic scenarios may trigger them is a pressing issue for the correct interpretation of future high-accuracy GW observations. In this work we have: i) demonstrated that BH QNM overtone instabilities are not an artifact of the frequency-domain analysis, but are actually present in the time-domain waveform, and ii) initiated a systematic multidisciplinary effort aiming at characterising the QNM instability signatures in GW signals. With results from the theory of scattering resonances adapted to GW physics, we introduced new observables obtained from the QNMs' asymptotic behaviour. At this stage, their interest is not in the direct detection of large overtones, which is unrealistic within the near-future technology for GW astronomy. Rather, this is a theoretical contribution of fundamental nature, since the effective asymptotic behaviour captures the constitutive features of the QNM instability phenomenon. In particular, such observables open an avenue to probe the small-scale physics of the BH and its environment. Complementary knowledge follows from introducing the Weyl law in the context of GW physics, which gives hints at further classical and quantum aspects of BH physics. Targeting near-future observations, fluctuations around the BH shall disentangle the GW spectra for axial and polar parities. For the fundamental mode, the deviation is of the same order as the small perturbation. However, the first few overtones may show a significant contribution if the so-called internal modes are in the detection range. Indeed, by simulating a highly accurate GW ringdown signal, and exploiting fine-tuned fitting algorithms to measure the QNMs and to access their individual contributions to the evolution, we confirm the QNM instabilities already in the first few overtones. In this work, the environmental fluctuations were modelled by adding an ad hoc sinusoidal perturbation to the system (cf. ref. [1]). The generality of the results follows because such an approach captures the contribution of a given Fourier mode in a more realistic analysis. The time evolutions employed the particular ID from ref. [69] but, since the ringdown is oblivious to the ID choice, we ensure the validity of our conclusions. Crucially, there remain open questions on whether and how more realistic scenarios trigger the instability and with which intensity each individual perturbed QNM is excited.
We stress the timely necessity of liaising the theoretical results on the fundamental aspects of BH perturbation theory with the current efforts to set goals and detection strategies for future GW missions. Detecting QNM overtones in a noisy signal already imposes a challenging data analysis task when a deterministic underlying spectrum is a priori available, as illustrated by the QNM analyses in refs. [12-14,22]. The theoretical prediction of BH QNM instability adds another layer of obstacles, since the perturbed QNM overtone specific values will generically incorporate a stochastic component from (random) small-scale perturbations, and only general patterns shall be available. This strongly indicates that only detections with very high signal-to-noise ratios will offer eligible candidates for disentangling BH overtone instabilities. In particular, this may define a challenging but tantalizing case for LISA science, requiring the development of specific data analysis tools able to cope with a more intricate parameter degeneracy. This work was supported by the French "Investissements d'Avenir" program through project ISITE-BFC (ANR-15-IDEX-03), the ANR "Quantum Fields interacting with Geometry" (QFG) project (ANR-20-CE40-0018-02), the EIPHI Graduate School (ANR-17-EURE-0002), the Spanish FIS2017-86497-C2-1 project (with FEDER contribution), the European Research Council Grant ERC-2014-StG 639022-NewNGR "New frontiers in numerical general relativity" and the European Commission Marie Sklodowska-Curie grant No 843152 (Horizon 2020 programme). The project used Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT, and CCuB computational resources (université de Bourgogne). SUPPLEMENTAL MATERIAL In this supplemental material, we provide support for some of the most critical physical statements in the main text. Appendix A: Presence of perturbed QNM overtones in the time signal A fundamental result in the article is the confirmation of the presence of unstable QNM overtones in the (perturbed) ringdown time signal. We explicitly demonstrate this key point here. The top panel of fig. 4 reproduces fig. 3 from the main text. As discussed there, it shows the time evolution of GWs on the Schwarzschild background measured at the BH horizon H and at future null infinity I+ (i.e., the mathematically formalized notion of an infinitely far wave zone). Please recall from the main text that unbarred quantities (with results displayed in red) denote dynamics under the unperturbed [44] potential, whereas barred symbols (with results in blue) refer to dynamics under the perturbed version employed in ref. [1]. To unmistakably unveil the QNM spectral instability within the time signal, we employ the robust semi-analytical tool developed in refs. [69,72,73], which is capable of identifying and filtering the contribution of each individual QNM. More specifically, we recall that the integration of the wave equation in the time domain yields a signal $\Phi_{\rm evol}(t)$ displaying two dynamical regimes: i) the QNM ringdown, with an exponentially damped, oscillatory decay, and ii) a late-time power-law decay tail. Alternatively to such a time-domain approach, one can also study the underlying equation in the frequency domain.
In this second approach, the QNM ringdown and tail decay are associated, respectively, with the discrete spectrum $\{\omega_n;\ \phi_n(\sigma)\}$ and the continuous spectrum $\{\omega = iy,\ y \geq 0;\ \phi(\sigma; y)\}$ of the underlying operator generating the dynamics [69,73] (here $\sigma$ is the relevant, compactified, spatial variable in the eigenfunctions $\phi$, whereas $n$ and $y$ are discrete and continuous labels respectively parametrizing the discrete and continuous spectra). The focus here is on the discrete part of the spectrum, responsible for the ringdown decay, which is the dynamical regime currently accounted for in black-hole spectroscopy. In particular, one considers the signal "proxy" resulting from the spectral analysis as the (finite) superposition, schematically,
$\Phi_N^{\rm proxy}(t) = \sum_{n=0}^{N} A_n\, e^{i\omega_n t}\,. \qquad {\rm (A1)}$
Recall that the coefficients $A_n$ assess how given Initial Data (ID) excite each individual QNM in the time signal. Refs. [69,72,73] comprehensively discuss the algorithms to read $A_n$ directly from the ID. Such a procedure is utterly independent of the construction of $\Phi_{\rm evol}(t)$. Consistency between the time and frequency domain approaches is assessed by monitoring the rests
$F_N(t) = \Phi_{\rm evol}(t) - \Phi_N^{\rm proxy}(t)\,. \qquad {\rm (A2)}$
By systematically filtering out the slowest-decaying QNM and the first $N$ overtones from $\Phi_{\rm evol}(t)$, the filtered signal $F_N(t)$ in eq. (A2) accesses higher overtones in the original full signal. Indeed, fig. 4's middle panel shows the unperturbed and perturbed filtered signals without the contributions from the fundamental QNM and the first overtone, where one directly distinguishes, in stark contrast with the full signal in the top panel, the different dynamics for the unperturbed and perturbed filtered evolutions, respectively $F_1$ and $\bar F_1$. As expected, the second overtone $\omega_2$ dominates $F_1$'s dynamics. However, the perturbed signal $\bar F_1$ crucially decays exactly as predicted by $\bar\omega_3$, indeed a perturbed QNM overtone: this demonstrates the presence of the perturbed $\bar\omega_3$ overtone in the full perturbed time signal $\bar\Phi_{\rm evol}(t)$. Note that the actually identified overtone is $\bar\omega_3$, instead of the internal mode $\bar\omega_2$. This is precisely accounted for by regarding the excitation coefficients in eq. (A1): the signal from $\bar\omega_3$ is stronger than the one from $\bar\omega_2$, namely $\bar A_3 \sim 10^{-1}$ whereas $\bar A_2 \sim 10^{-3}$. But, indeed, the internal mode $\bar\omega_2$ is also present in the full perturbed signal $\bar\Phi_{\rm evol}(t)$. To explicitly observe it, we slightly modify the theoretical filtering technique: a new filtered signal $\bar F^*_N$ is built with the mode $n = 2$ skipped from the sum in eq. (A2). Fig. 4's bottom panel shows $\bar F^*_7$, where all modes with $n = 0, \ldots, 7$ are removed, except for $n = 2$. Thus, the fainter contribution from the internal mode $\bar\omega_2$ is unmistakably detected from its spectrally predicted decaying slope.
FIG. 4 (caption): Top panel: the full ringdown signals; fig. 1 in the main text displays both underlying corresponding QNM spectra. Spectral instability is not apparent due to the stability of the dominant fundamental mode. The inset provides a zoom into the region around the fundamental QNMs and first overtones, introducing their labeling. Middle panel: filtered ringdown signals without the contribution from the modes with $n = 0$ and $n = 1$. While $F_1$ decays according to the corresponding unperturbed second overtone $\omega_2$, the prescribed Initial Data are not efficient in exciting the internal mode $\bar\omega_2$ of the perturbed potential, and consequently $\bar F_1$ is dominated by $\bar\omega_3$. Bottom panel: filtered perturbed ringdown signal, with the modes from $n = 0$ to $n = 7$ removed, except $\bar\omega_2$. Despite the small excitation coefficient $\bar A_2$, we indeed observe a contribution from $\bar\omega_2$ to the ringdown signal.
Appendix B: Isospectrality loss: stable region Upon a perturbation of order $\epsilon$, QNM isospectrality is lost for all QNMs, but different qualitative behaviours are observed in distinct parts of the QNM spectrum. Specifically, in the region we have referred to as "stable", the perturbed polar and axial QNMs stay at a distance of order $\epsilon$ from the non-perturbed QNMs. In particular, the difference between perturbed polar and axial frequencies is also of order $\epsilon$. Defining
$\Delta_n^{\rm iso}(k, \epsilon) := |\omega_n^a(k, \epsilon) - \omega_n^p(k, \epsilon)|\,, \qquad {\rm (B1)}$
where $\omega_n^a(k, \epsilon)$ and $\omega_n^p(k, \epsilon)$ are, respectively, the axial and polar QNM frequencies under a sinusoidal perturbation of size $\epsilon$ and wave-number $k$, it is observed that
$\Delta_n^{\rm iso}(k, \epsilon) \sim C_n(k)\,\epsilon\,, \qquad {\rm (B2)}$
with $C_n(k)$ presenting a power-law dependence on the wave number $k$ (namely, $C_n(k) \sim C_n (k + k_n)^{\alpha_n}$, with constants $C_n$, $k_n$ and $\alpha_n$). The critical point for claiming a stable behaviour is the strict linear dependence on the perturbation size $\epsilon$. Figure 5 bluntly demonstrates such behaviour. Observational access to the differences $\Delta_n^{\rm iso}(k, \epsilon)$ would therefore directly probe the (energy) size of the underlying perturbations.
FIG. 5 (caption): Each color refers to a particular $n$ in the QNMs $\omega_n$: violet corresponds to the fundamental mode ($n = 0$), whereas the overtones ($n \geq 1$) correspond to the lines above. The unity slope in the log-log profile (note the same scale on the abscissa and the ordinate) demonstrates the strict linearity in $\epsilon$. The dependence of the proportionality constant $C_n(k)$ on the wave number $k$ (cf. eq. (B2)) is illustrated by using sinusoidal perturbations with $k = 6, 10, 20$. The observed pattern corresponds to a power law $C_n(k) \sim C_n (k + k_n)^{\alpha_n}$, with $\alpha_n$ increasing with $n$.
Appendix C: Logarithmic asymptotics of pseudospectra boundaries Data analysis strategies aiming at efficiently extracting QNM overtones from GW observational data will possibly lean on some "a priori" input knowledge about the expected resonant frequencies. One of the most striking consequences of the high-frequency overtone instability is the opening of BH QNM branches, from the asymptotically vertical ones of non-perturbed BHs to the wide-open (Nollert-Price) branches of perturbed BHs. Pseudospectra boundaries provide "proxies" for such perturbed branches (cf. fig. 1 in the main text, and [1]) and present a logarithmic asymptotics for large $n$,
$\omega_n^I \sim C_1 + C_2 \ln(\omega_n^R + C_3)\,. \qquad {\rm (C1)}$
Beyond confirming such asymptotics, the present work demonstrates that the logarithmic asymptotic regime starts "very early" in the complex plane, already in the region close to the non-perturbed QNM spectrum, as illustrated in fig. 6. The key observational consequence is that data analysis strategies based on perturbed BH QNM templates constructed on eq. (C1) could be successful in extracting perturbed low overtones, therefore probing small-scale BH physics through the effective parameters introduced out of eq. (1) in the main text.
FIG. 6 (caption): Pseudospectrum of the Schwarzschild BH. The white lines correspond to pseudospectra boundaries, namely the level-set contour lines of the pseudospectrum that mark QNM-free regions and are proxies of perturbed BH QNMs (cf. ref. [1]). The thin black lines provide logarithmic fittings of such pseudospectra boundaries, according to eq. (C1). Quite remarkably, the logarithmic behaviour of the pseudospectra boundaries extends to the region close to the non-perturbed BH QNMs (red circles), therefore providing a parametrized pattern to model perturbed QNM overtone frequencies in data analysis.
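To make the filtering construction of Appendix A concrete, here is a minimal C++ sketch of eqs. (A1)-(A2): it builds a proxy signal from a finite set of (placeholder) QNM frequencies and amplitudes and subtracts partial sums to expose subdominant modes, with the option of skipping one mode as in the $F^*_N$ construction. The convention ${\rm Im}\,\omega > 0$ for decay and all numerical values are assumptions of this sketch.

```cpp
#include <cmath>
#include <complex>
#include <cstdint>
#include <iostream>
#include <vector>

using cd = std::complex<double>;

// Proxy signal of eq. (A1): finite superposition of QNM contributions,
// optionally skipping one mode (the F*_N construction).
cd proxy(double t, const std::vector<cd>& omega, const std::vector<cd>& A,
         std::size_t N, std::size_t skip = SIZE_MAX) {
    cd sum{0.0, 0.0};
    for (std::size_t n = 0; n <= N && n < omega.size(); ++n) {
        if (n == skip) continue;
        sum += A[n] * std::exp(cd(0.0, 1.0) * omega[n] * t);
    }
    return sum;
}

int main() {
    // Two placeholder modes (fundamental + one overtone); a "full" signal is
    // built from them for illustration, standing in for Phi_evol(t).
    const std::vector<cd> omega = {cd(0.75, 0.18), cd(0.70, 0.55)};
    const std::vector<cd> A     = {cd(1.0, 0.0),   cd(0.3, 0.0)};

    for (double t = 0.0; t <= 20.0; t += 5.0) {
        const cd phi = proxy(t, omega, A, 1);        // full two-mode signal
        const cd F0  = phi - proxy(t, omega, A, 0);  // rest of eq. (A2): n = 0 removed
        std::cout << "t = " << t << "  |F_0| = " << std::abs(F0) << '\n';
    }
}
```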
2021-05-11T01:16:18.341Z
2021-05-07T00:00:00.000
{ "year": 2021, "sha1": "7c139b52cd5f72d4ca5185f650e1e5e46a409b6a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7c139b52cd5f72d4ca5185f650e1e5e46a409b6a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
256911387
pes2o/s2orc
v3-fos-license
Efficiency of newly formulated camptothecin with β-cyclodextrin-EDTA-Fe3O4 nanoparticle-conjugated nanocarriers as an anti-colon cancer (HT29) drug Camptothecin (CPT) is an anti-cancer drug that effectively treats various cancers, including colon cancer. However, poor solubility and other drawbacks have restricted its chemotherapeutic potential. To overcome these restrictions, CPT was encapsulated in CEF (β-cyclodextrin-EDTA-Fe3O4), a composite of magnetic iron oxide (Fe3O4) nanoparticles and β-cyclodextrin cross-linked with ethylenediaminetetraacetic acid (EDTA). This formulation improved CPT's solubility and bioavailability for cancer cells. The use of a magnetically responsive anti-cancer formulation is highly advantageous in cancer chemotherapy. The chemical characterisation of CPT-CEF was studied here. The ability of this nano-compound to induce apoptosis in HT29 colon cancer cells and A549 lung cancer cells was evaluated. The dose-dependent cytotoxicity of CPT-CEF was shown using MTT. Propidium iodide and Annexin V staining, mitochondrial membrane depolarisation (JC-1 dye), and caspase-3 activity were assayed to detect apoptosis in CPT-CEF-treated cancer cells. Cell cycle analysis also showed G1 phase arrest, which indicated possible synergistic effects of the nano-carrier. These study results show that CPT-CEF causes a dose-dependent cell viability reduction in HT29 and A549 cells and induces apoptosis in colon cancer cells via caspase-3 activation. These data strongly suggest that CEF could be used as an effective nanocarrier for CPT to treat colon cancer. The use of magnetic nanoparticles (MNPs) in the field of biomedical applications, such as magnetic drug delivery, magnetic resonance imaging, transfection, and cell and tissue targeting, has drawn considerable attention owing to their intrinsic magnetic properties 1 . MNPs show superparamagnetic behaviour, which permits them to gain magnetism in an applied magnetic field and lose it when the field is removed 2 . This property of MNPs is fully realised when they are used as drug delivery agents, whereby chemotherapeutic drugs can be targeted to desired locations in the body by application of an external magnetic field. The combination of MNPs and an external magnetic field provides two unique advantages that benefit medicine immensely 3 . Priyanka Sharma et al. synthesized a biocompatible, water-dispersible, phosphate-affixed iron oxide magnetic drug vehicle by a simple chemical method for anti-cancer drug delivery 4 . Polymers not only have considerable potential for drug delivery 5 but can also be used for medical devices, wound dressings, and fabricating scaffolds in tissue engineering 6 . Cyclodextrins are candidates for such a role because of their ability to alter the physical, chemical, and biological properties of guest molecules through the formation of inclusion complexes. Recently, various cyclodextrin derivatives have been prepared to extend the physicochemical properties and inclusion capacity of cyclodextrin as novel drug carriers. Camptothecin (CPT) is a major anti-cancer drug that shows efficacy toward many cancers, including ovarian and colorectal tumours. CPT is an alkaloid isolated in the early 1960s from the Chinese tree Camptotheca acuminata 7 . CPT is a selective topoisomerase I inhibitor 8 . CPT's ability to inhibit nitric oxide (NO) biosynthesis has also been proposed to contribute to its anti-tumour activity 9 .
As a DNA topoisomerase I inhibitor, CPT forms a stable, ternary topoisomerase I-DNA cleavable complex, which initiates an apoptotic signalling pathway, ultimately resulting in cell death 10 . However, a major drawback of CPT is its reduced therapeutic potential owing to 1) poor solubility in aqueous media 11 and 2) active lactone ring instability at physiological pH 12 . Given that chemotherapy is a widely used cancer treatment, various nanocarriers are continuously being formulated and designed to enhance the solubility of chemotherapeutic drugs such as CPT. The solubility of chemotherapy drugs is critical because it affects delivery and bioavailability at the targeted location. Solubility limitations have greatly reduced the ability of chemotherapy drugs, such as CPT, to exert their anti-cancer properties, which limits their use to only a subset of cancers. To overcome this problem, multiple analogues of CPT have been developed with improved lactone stability and aqueous solubility. Various polymeric conjugates of CPT, including polyethylene glycol (PEG) 11 , cyclodextrin copolymer 13 , poly(L-glutamic acid) 14 , and chitosan 15 , have been investigated previously. Studies are ongoing to synthesise effective, water-soluble analogues of CPT to enhance its anti-cancer potential. With this study objective, we conjugated CPT with β-cyclodextrin and iron NPs (Fe3O4), cross-linked using EDTA, to achieve a soluble CPT analogue (CPT-CEF) designed to improve the efficiency of CPT as an anti-cancer drug. We then tested the ability of CPT-CEF to induce apoptosis in the human colon adenocarcinoma cell line HT29. Additionally, the drug was concurrently tested on A549 lung cancer cells to reflect on its potential for use in cancers other than colon cancer. In this study, we provide further insight into the potential of this water-soluble formulation to enhance the anti-tumour activity of pure CPT. To the best of our knowledge, the functionalization of iron NPs on β-CD was carried out with a simple and sustainable method, and the combination CEF is being studied here for the first time as an effective nanocarrier for CPT. Results FT-IR analysis. The FT-IR spectrum (Fig. 1A) confirms the presence of the magnetic core, which is more pronounced in the bare magnetite NPs 16 . The spectrum of the β-CD-EDTA nanocarriers showed the characteristic peaks of β-CD at 942, 1,028, 1,157, and 1,630 cm −1 (Fig. 1B). The peak at 942 cm −1 was due to the R-1,4-bond skeleton vibration of β-CD, the peak at 1,028 cm −1 corresponded to the anti-symmetric glycosidic ν a (C-O-C) vibration, the peak at 1,197 cm −1 was due to the coupled ν(C-C/C-O) stretch vibration, and the peak at 1,630 cm −1 corresponded to N-H bending vibrations. A broad band around 3,366 cm −1 was also observed, which was assigned to the hydroxyl group of β-CD. These peaks indicate that β-CD-EDTA had been successfully combined with the Fe3O4 NPs. The encapsulation of the CPT drug in β-CD-EDTA-Fe was confirmed by the FT-IR spectrum (Fig. 1C), which shows peaks at 1,075 and 2,915 cm −1 corresponding to C-O and C-H stretching vibrations in the drug. The hydroxyl stretching vibration of CPT at about 3,430 cm −1 and that of β-CD-EDTA-Fe at 3,336 cm −1 disappeared in the FT-IR spectrum because of the formation of hydrogen bonds. These data show that CPT had been successfully loaded onto the β-CD-EDTA-Fe3O4 nanocarrier. Morphological analysis by transmission electron microscopy (TEM).
The morphology of the prepared Fe-MN particles, β-CD-EDTA, β-CD-EDTA-Fe3O4, and β-CD-EDTA-Fe3O4/CPT was investigated using TEM. The formed Fe NPs (Fig. 2a) exhibited a spherical shape, a smooth surface, and a uniform arrangement. The EDTA-modified β-CD had a dispersed spherical morphology, and some large, irregularly shaped aggregates were obtained after cross-linking of EDTA on the CD (β-CD-EDTA, Fig. 2b); combining the Fe NPs with the β-CD-EDTA carriers produced structures with a smooth surface and good incorporation (β-CD-EDTA-Fe3O4, Fig. 2c). When CPT was encapsulated within the β-CD-EDTA-Fe3O4 carriers, the resulting β-CD-EDTA-Fe3O4/CPT inclusion complexes drastically changed shape and morphology, becoming more amorphous (Fig. 2d). The change in the surface morphology of the inclusion complexes was indicative of the presence of a new solid phase, which might be due to the molecular encapsulation of CPT into β-CD-EDTA-Fe3O4. The TEM images confirmed that the individual components of the magnetic nanocarrier (β-CD-EDTA-Fe3O4) and the drug-loaded magnetic carrier (β-CD-EDTA-Fe3O4/CPT) were in the nanometer size range, with well-separated particles. After the incorporation was performed, distinct drug particles with a dense structure were observed (Fig. 2d). Particle size analysis. The synthesised Fe3O4, β-CD-EDTA, β-CD-EDTA-Fe3O4, and β-CD-EDTA-Fe3O4/CPT nanocarriers were analysed for size and polydispersity index with a particle size analyser. The results are shown in Table 1, and the instrumental data are shown in Supplementary Figures S1-S4. The average particle sizes of the Fe3O4, β-CD-EDTA, β-CD-EDTA-Fe3O4, and β-CD-EDTA-Fe3O4/CPT NPs are 168, 338, 308, and 386 nm, respectively. Conjugation of the Fe MNPs with EDTA-cross-linked β-CD formed a carrier with a more compact structure. Subsequent addition of the CPT anticancer drug onto the β-CD-EDTA-Fe3O4 carriers increased their size. These results correlated well with the TEM results of Fig. 2: in the TEM, the surface of β-CD-EDTA-Fe3O4 possesses a cavity-like morphology, which is filled by the addition of the CPT drug, increasing the particle size. The polydispersity index decreased with increasing particle size, demonstrating the increasing stability of the particles; from these results, β-CD-EDTA-Fe3O4 appears to be the more stable (Table 1). Zeta potential measurements. The instrumental data are shown in Supplementary Figure S5. Encapsulation with CPT significantly changed the colloidal stability of the nanoparticles, indicating the success of drug loading on the carrier. As per the most widely accepted DLVO theory (named after its inventors Derjaguin, Landau, Verwey, and Overbeek), colloid stability depends on the sum of the van der Waals attractive forces and the electrostatic repulsive forces due to the electric double layer. The decrease of the zeta potential is likely caused by the addition of carboxyl groups from the CPT drug molecules; it confirms the loading of the CPT drug on the carrier, as well as the greater stability of the CPT-loaded carrier. In-vitro drug release studies. Drug release studies are conducted to study the rate at which the loaded drug is released into the environment and are performed at biologically relevant pH and temperatures. The in-vitro CPT release profile from the β-CD-EDTA-Fe3O4 carriers was assessed using the dialysis technique at pH 2.4 and pH 7.0 at 37 °C. As shown in Fig. 3, nearly 65% and 58% of the CPT was released within 10 hours at pH 2.4 and pH 7.0, respectively.
In-vitro drug release studies. Drug release studies are conducted to determine the rate at which a loaded drug is released into its environment, and are performed at biologically relevant pH and temperature. The in-vitro CPT release profiles from the β-CD-EDTA-Fe3O4 carriers were assessed using the dialysis technique at pH 2.4 and pH 7.0 at 37 °C. As shown in Fig. 3, nearly 65% and 58% of CPT was released within 10 hours at pH 2.4 and pH 7.0, respectively. At pH 7.0, the release of CPT was about 58% over a period of 10 hours, indicating that the β-CD-EDTA-Fe3O4/CPT nanocarriers remain stable under physiological conditions. When the pH was changed to 2.4, CPT was released more rapidly from the β-CD-EDTA-Fe3O4/CPT nanocarriers than at pH 7.0; under acidic conditions the release rate was markedly promoted. These results are consistent with the fact that CPT degrades much more quickly under acidic conditions. The absorbance value increased over time as the CPT drug was released from the carrier, confirming that the drug was successfully released from the β-CD-EDTA-Fe3O4 carrier at both pH 2.4 and pH 7.0. The UV absorption peak shifted to shorter wavelengths with increasing drug concentration and dilution of the carriers, accompanied by an increase in absorbance. Similar behaviour of CD with various drugs, followed by UV-visible spectroscopy, has been reported in the literature 17,18.
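Because the dialysis protocol used here (see the Methods) withdraws an aliquot at each time point and replaces it with fresh medium, cumulative release percentages are conventionally corrected for the drug removed at earlier samplings. The sketch below illustrates that standard correction; it is not the authors' code, and the dose and concentration values are illustrative placeholders.

```python
# Minimal sketch of cumulative drug-release calculation with a
# volume-replacement correction for a dialysis sampling protocol.
# Volumes follow the Methods (50 mL medium, 2 mL aliquots); the dose
# and the measured concentrations are made up for illustration.

V_total = 50.0   # release-medium volume (mL)
v_sample = 2.0   # aliquot withdrawn and replaced at each time point (mL)
dose_ug = 43.5   # illustrative CPT dose placed in the dialysis bag (ug)

# CPT concentrations (ug/mL) in successive aliquots, e.g. from a
# UV calibration at 260 nm; values are placeholders
conc = [0.10, 0.21, 0.30, 0.38, 0.44, 0.48]

def cumulative_release(conc, V, v, dose):
    """Percent cumulative release, correcting each measurement for
    drug already removed in previously withdrawn aliquots."""
    released = []
    removed = 0.0  # drug mass taken out by earlier samplings (ug)
    for c in conc:
        mass = c * V + removed     # drug in medium now + drug removed earlier
        released.append(100.0 * mass / dose)
        removed += c * v           # this aliquot removes c*v of drug
    return released

print(cumulative_release(conc, V_total, v_sample, dose_ug))
```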
Magnetic properties studies. The magnetic properties of the iron nanoparticles and of the iron-nanoparticle-loaded nanocarriers (CEF) were measured with a vibrating sample magnetometer (VSM, Dexing, Model: 250) with a sensitivity of 50 emu. This study showed that the magnetic properties of the Fe were retained after its functionalization in the nanocarriers (Fig. 4). These data establish the magnetic character of CPT-CEF and thus suggest its potential for use in magnetically targeted cancer therapy.

The MTT assay (Fig. 5a and b) showed a concentration-dependent decrease in the viability of HT29 and A549 cancer cells, respectively, compared with untreated cells, indicating that CPT-CEF retains the anticancer activity of CPT. A significant decrease in cell viability was observed at a CPT-CEF concentration of 100 µg/mL. The effective CPT-CEF concentration for 50% inhibition (IC50) of HT29 cell growth after 48 h was 133.5 μg/mL (Fig. 5a). The IC50 of free CPT against HT29 was beyond 250 μg/mL, indicating that CPT-CEF can exert a significant effect on HT29 cancer cells at a low concentration of loaded CPT. In addition, treatment with CEF alone was mildly cytotoxic to HT29 cells. A significant decrease in viability was also observed in CPT-CEF-treated A549 cells at a CPT-CEF concentration of 85 μg/mL, making this the effective IC50 for A549 cell growth after 48 h of treatment (Fig. 5b). To evaluate further the capacity of CPT-CEF to induce apoptosis, the HT29 cell line was selected as the main model, as CPT is widely used in colon cancer treatment.

CPT-CEF causes morphological changes in HT29 cancer cells. HT29 and A549 cells were treated with the IC50 concentration of CPT-CEF (derived from the MTT assay) for 48 h, and morphological changes such as membrane blebbing were detected using the AO/PI staining assay. Cell death can occur by apoptosis or necrosis; in this study, AO/PI double staining was used to determine the mode of death of HT29 and A549 cells treated for 48 h with the IC50 concentrations of CPT-CEF derived from the respective MTT assays. AO/PI double staining distinguishes viable, apoptotic, and necrotic cells. Acridine orange (AO) stains viable cells, while propidium iodide (PI) intercalates into and stains the double-stranded DNA of dead cells that have lost plasma membrane integrity. Viable cells have round, green nuclei. The nuclei of cells undergoing apoptosis are also stained green, but appear fragmented. Late apoptotic and necrotic cells appear orange and red, respectively 19. From the data in Fig. 6 (HT29, panel f; A549, panel l), we conclude that treatment for 48 h with the IC50 concentration of CPT-CEF produces characteristics of early apoptosis in both cell lines, such as cell shrinkage (CS), plasma membrane blebbing (MB), and chromatin condensation (CC); late-stage apoptotic features were also detected (LA). The untreated cells (Fig. 6e and k) mostly stained green and remained intact. Based on the results of AO/PI staining, we conclude that CPT-CEF, at 133.5 μg/mL for HT29 and 85 μg/mL for A549 cells, has a significant impact on the cell membrane and nuclear membrane of the respective cells.

CPT-CEF induces apoptosis in HT29 and A549 cancer cells. To evaluate whether the CPT-CEF-induced inhibition of cell proliferation was related to apoptosis, the effect of CPT-CEF on cell apoptosis was evaluated via Annexin V/PI staining. Cancer cells were exposed to the IC50, ½ IC50, or ¼ IC50 of CPT-CEF for 48 h and analysed by flow cytometry using FITC-conjugated Annexin V (FL1-H) and PI (FL2-H) double staining (Fig. 7). The data show a significant increase in the percentage of both early (Annexin V positive, PI negative) and late (Annexin V positive, PI positive) apoptotic cells in a CPT-CEF concentration-dependent manner. The proportion of cells entering early-stage apoptosis reached about 35.9% with the IC50 treatment. These results suggest that CPT-CEF has retained the apoptosis-inducing potential of CPT in colon cancer cell lines, and the data clearly show an increasing trend in early-stage apoptosis with increasing concentration of CPT-CEF. The IC50 concentration of CPT-CEF significantly induced early apoptosis in treated HT29 cells compared with untreated cells. Similar results were seen in treated A549 cells, in which, after 48 h of IC50 treatment with CPT-CEF, about 30.5% of cells were in late-stage apoptosis (Fig. 8). These results suggest that CPT-CEF also has the potential to induce apoptosis in lung cancer cell lines.

CPT-CEF induces apoptosis in HT29 and A549 cancer cells by altering mitochondrial membrane potential. To evaluate whether CPT-CEF alters the mitochondrial membrane potential (ΔΨM) of colon cancer cells, JC-1 dye was used. At higher potential, JC-1 aggregates in the mitochondria and fluoresces red; at lower potential, it loses its ability to form aggregates, remains as a monomer in the cytoplasm, and fluoresces green 20. In this study, HT29 cells were exposed to the IC50, ½ IC50, and ¼ IC50 concentrations of CPT-CEF for 48 h and then analysed via flow cytometry. The data (Fig. 9) show a significant, concentration-dependent increase in the percentage of cells with depolarised mitochondrial membranes: at the IC50 concentration, about 49% of HT29 cells showed ΔΨM depolarisation, and in IC50-treated A549 cells about 60% of cells showed changes (depolarisation) in ΔΨM (Fig. 10). These results suggest that CPT-CEF can affect the mitochondrial membrane potential of cancer cells. During apoptosis, cell membranes are damaged, causing an alteration in ΔΨM; JC-1 staining shows that the IC50 concentration of CPT-CEF (Fig. 9) more than doubles the mitochondrial membrane depolarisation observed in untreated cells. These data suggest that CPT-CEF induces apoptosis accompanied by alterations in the mitochondrial membrane potential.
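As a concrete illustration of how the Annexin V/PI quadrants described above map onto cell states, the sketch below classifies flow-cytometry events from the two fluorescence channels. It is a generic illustration rather than the authors' gating workflow, and the positivity threshold is an arbitrary placeholder that would in practice be set against unstained and single-stained controls.

```python
# Minimal sketch of Annexin V-FITC / PI quadrant classification for
# flow-cytometry events. Threshold and event intensities are illustrative.
from collections import Counter

THRESH = 1_000.0  # arbitrary positivity cutoff for both channels

def classify(annexin, pi, thresh=THRESH):
    """Map one event's (Annexin V, PI) intensities to a cell state."""
    a_pos, p_pos = annexin >= thresh, pi >= thresh
    if a_pos and not p_pos:
        return "early apoptotic"  # PS externalised, membrane still intact
    if a_pos and p_pos:
        return "late apoptotic"   # PS externalised, membrane permeable
    if p_pos:
        return "necrotic"         # membrane permeable without PS signal
    return "viable"

events = [(150.0, 90.0), (2_400.0, 300.0), (3_100.0, 5_200.0), (80.0, 4_000.0)]
counts = Counter(classify(a, p) for a, p in events)
for state, n in counts.items():
    print(f"{state}: {100 * n / len(events):.1f}%")
```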
CPT-CEF induces caspase-3 protein expression in HT29 cancer cells. The release of cytochrome c from mitochondria into the cytosol leads to activation of the caspase cascade, including caspase-3 activation 21. Caspase-3, also known as CPP32/Yama/Apopain, is a key mediator of apoptosis 22. To substantiate the presence of activated caspase-3 in CPT-CEF-treated cells, a colorimetric assay was performed for caspase-3 using its specific substrate, poly(ADP-ribose) polymerase, containing the amino acid motif DEVD conjugated to p-nitroaniline (pNA). The role of caspase-3 in CPT-CEF-induced apoptosis was investigated with the IC50, ½ IC50, and ¼ IC50 concentrations. As shown in Fig. 11, HT29 cells treated for 48 h with CPT-CEF increased the activity of caspase-3 in a dose-dependent manner compared with the control group; the level of caspase-3 induction in CPT-CEF-treated HT29 cells was 30% higher at the IC50 concentration than in untreated cells.

CPT-CEF treatment at increasing concentrations showed an increasing trend in the number of HT29 cells in G1 phase and a decreasing trend in those in S phase and G2/M phase (Fig. 12). CPT is commonly known to induce cell cycle arrest at the G2/M phase 23, and, interestingly, the results of this study show a different trend: we noticed a significant increase in cell cycle arrest in the G1 phase, with a decreasing cell count in the S and G2/M phases, at lower concentrations of CPT-CEF. The impact of different concentrations of CPT on the stage of cell cycle arrest has been discussed previously 24, in which low concentrations of CPT were found to cause cell cycle arrest at the G1 phase.

Magnetic force induced cell morphology changes in CPT-CEF treated HT29 cells. To determine the influence of magnetic force on CPT-CEF in cancer treatment, a simple assay utilizing magnets was performed. Figure 14a shows untreated HT29 cells left without any treatment for the duration of the magnetic assay. Figure 14b shows HT29 cells treated with CPT-CEF under the influence of magnets, in which cell-to-cell attachment and cell confluency were drastically reduced in comparison with the treated side without the influence of magnets (Fig. 14c) and with untreated cells (Fig. 14a). The side exposed to the magnets shows a significant reduction in cell confluency and cell-to-cell attachment. Essentially, the area under the magnetic force is subjected to a greater impact from CPT-CEF, as CPT-CEF is magnetically responsive. Figure 14d shows the treated HT29 T75 flask with magnets placed under one side of the flask and the other side left without magnets.

Discussion

Colon cancer is a major cancer worldwide, and despite advancements in chemotherapy its mortality remains high, partially due to the failure of chemotherapy 25. Camptothecin (CPT) is a major chemotherapy drug that is widely utilised in colon cancer treatment. However, the poor stability and solubility of CPT have greatly diminished its anti-cancer value, encouraging the nanotechnology field to synthesise effective nanocarriers that enhance its solubility and stability; CPT-CEF was designed with these objectives. The formation of the Fe MNs was confirmed by FT-IR spectroscopy 26, and their average particle size was 168 nm. The particle size of the Fe MNs was well corroborated by the TEM morphological analysis, which showed spherical particles with smooth surfaces.
The homogeneous dispersion of iron magnetic NPs prepared by the co-precipitation method was observed in a previous report 26. After conjugation of Fe onto the EDTA-linked β-CD, the size of the carrier decreased relative to the non-conjugated β-CD-EDTA composite, and encapsulation of the CPT drug on the carrier subsequently increased the particle size. Conjugated Fe MNs enhance biomedical applications such as targeted drug release and MRI detection. Because Fe MN functionalisation imparts magnetic properties, it increases the usefulness of the carrier under external magnetic force 27,28, and the development of magnetic nanocarriers for drug delivery applications has become a significant research area 29. Nanocarriers conjugated with Fe MNs were tested on HT29 colon cancer cells after encapsulation with the CPT drug. Additionally, the magnetic studies suggest that CPT-CEF has the potential to be utilized in targeted therapy, thus diminishing the adverse impact on normal cells in the patient's body. In the present study, we characterised the apoptotic pathway induced by water-soluble CPT-CEF in HT29 cells, which shares many features of CPT-induced apoptosis, such as nuclear condensation and cell shrinkage. We also identified apoptosis signals such as increased caspase-like protease activity and mitochondrial membrane depolarisation. One milligram of CPT-CEF contains only 4.35% CPT, placing the final IC50 derived from the HT29 MTT assay in this study at a remarkably low 5.8 µg/mL of CPT per 133.5 µg/mL of CPT-CEF. At the IC50 of CPT-CEF, HT29 cells show a reduction in cell viability of up to 50%; in comparison, cells treated with CPT alone still maintained a cell viability of 70%. The solubility of CPT, which was enhanced through the CEF formulation, could be a contributing factor in the greater availability of CPT to the cancer cells. In addition, because only a minimal amount of CPT is loaded in the CEF nanocarrier, only a small quantity of CPT would be administered to patients during chemotherapy, ultimately reducing adverse effects.
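The stated 4.35% (w/w) loading makes this dosing arithmetic explicit (our restatement of the paper's figures):

$$133.5\ \mu\mathrm{g\,mL^{-1}}\ \text{CPT-CEF} \times 0.0435 \approx 5.8\ \mu\mathrm{g\,mL^{-1}}\ \text{of delivered CPT at the HT29 IC}_{50}.$$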
Additionally, a significant reduction in cell viability was observed in CPT-CEF-treated A549 cells, indicating that CPT-CEF has the potential to be used as an anti-cancer drug in the treatment of other major cancers. Interestingly, we also noted that, at the IC50, CEF alone produced cytotoxicity. This might be due to the hemolytic effects of cyclodextrins (CDs), including β-cyclodextrin, that have been reported in several in vitro studies; however, the toxicological implications in vivo are considered negligible 30. Essentially, the hemolytic activity of CDs correlates with their ability to solubilise cellular membrane lipids, owing to the positive correlation between the hemolytic activity of CDs and their capacity to solubilise cholesterol, a main component of lipid bilayers, regardless of their varying physicochemical properties 31. Additionally, the magnetic assay indicates that CPT-CEF is magnetically responsive. This is an added advantage, as CPT-CEF could be used in magnet-based chemotherapy, limiting the adverse impact caused by conventional chemotherapy. The cell damage caused by treatment with CPT-CEF indicates the ability of CPT-CEF to cause membrane damage and nuclear disintegration in cancer cells.

Apoptosis is fundamental to cancer therapies such as chemotherapy. Apoptosis is characterised by particular morphological as well as biochemical alterations, such as cell shrinkage, nuclear condensation and fragmentation, plasma membrane blebbing, formation of apoptotic bodies, and loss of cell contacts with neighbouring cells 32. Notable biochemical changes in apoptosis are the cleavage of chromosomal DNA into internucleosomal fragments, phosphatidylserine externalisation, and the cleavage of a number of intracellular substrates by specific proteases 33. Apoptosis induction by CPT-CEF through phosphatidylserine (PS) externalisation was measured using annexin V-FITC and propidium iodide double staining; PS externalisation is an early indicator of apoptosis 34. FITC-annexin V is commonly applied in conjunction with propidium iodide to detect early apoptosis, prior to the loss of cell membrane integrity 35, because translocation of PS from the inner to the outer leaflet of the plasma membrane, or externalisation of PS to the cell surface, occurs in apoptotic cells 36. In this study, the flow cytometric FITC-annexin V/PI assay demonstrated a concentration-dependent shift of cells from the early to the late stage of apoptosis, and it is evident from the results that CPT-CEF-induced phosphatidylserine externalisation had taken place by 48 h of treatment. These results further confirm that CPT-CEF can inhibit the growth of HT29 and A549 cells by inducing apoptosis. Mitochondria play a central role in the regulation of apoptotic signalling 37. CPT-induced oxidative stress causes depolarisation of ΔΨM, which is associated with cellular events leading to apoptosis 38. CPT-CEF-treated HT29 and A549 cells demonstrated a concentration-dependent decrease in ΔΨM, indicating that this might be an important pathway in the apoptotic progression of CPT-CEF-induced cell death. The dissipation of ΔΨM is a distinctive feature of apoptosis 39. Mitochondrial depolarisation triggers the release of cytochrome c into the cytosol, which leads to the activation of proteases, especially the caspase cascade 20; caspases are key effectors of the apoptotic execution phase. Our results provide the first insight into the mechanistic pathway of apoptosis induced in HT29 cells by the novel CPT-CEF nano-compound, in which mitochondria and caspase-like proteases play a central role. The cell cycle is a complex process in which cells receive growth-controlling signals that are integrated and processed at various points known as checkpoints. In this study, cell cycle arrest was detected at the G1 phase. Interestingly, the impact of CPT is generally attributed to G2/M phase arrest, and the application of our formulation altered the point of cell cycle arrest; this could be due to a synergistic effect contributed by the nanocarrier formulation. Concurrently, in treated A549 cells the CPT-CEF formulation sustained the typical cell cycle arrest mechanism of CPT at the G2/M phase. Taken together, these results suggest that the action of CPT-CEF differs according to the cancer cell type.
Collectively, the apoptotic signals, loss of membrane potential, and nuclear degradation caused by treatment with CPT-CEF in this study suggest that our CEF nanocarrier can maintain the anti-cancer properties of CPT while increasing its water solubility. Consistent with these results, CPT-CEF holds promise to be equally effective in treating other major cancers in addition to colon cancer.

Synthesis of MNPs. Magnetite NPs were prepared by a previously reported method based on the controlled chemical co-precipitation 40 of Fe2+ and Fe3+ (1:2 ratio) in an ammoniacal medium at 80 °C under a nitrogen atmosphere. In a typical synthesis, 0.02 M ferrous sulphate (FeSO4·7H2O) and 0.04 M FeCl3·6H2O were dissolved in 200 mL of deionised water. The mixture was stirred and heated to 80 °C under a nitrogen atmosphere, and 12 mL of a 25% ammonia solution was injected into the flask. Stirring was continued for 20 min to allow the growth of the NPs, after which the solution was cooled to 28 °C and the resulting magnetite NPs were collected by centrifugation. The NPs were washed three times with distilled water, the pH of the suspension was brought to neutral by adding dilute HCl, and the particles were rewashed with distilled water for further experiments.
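For reference, the 1:2 Fe²⁺:Fe³⁺ ratio used here (0.02 M FeSO₄ against 0.04 M FeCl₃) matches the stoichiometry of magnetite; the overall co-precipitation reaction in alkaline medium is conventionally written as follows (standard chemistry, not stated explicitly in the text):

$$\mathrm{Fe^{2+} + 2\,Fe^{3+} + 8\,OH^{-} \longrightarrow Fe_3O_4\downarrow + 4\,H_2O}$$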
Synthesis of EDTA-β-CD. EDTA-β-CD polymers were synthesised by reacting β-CD with EDTA as a cross-linker and disodium hydrogen phosphate (MSP) as a catalyst. β-CD (4 g, 3.5 mmol), EDTA (6 g, 20.4 mmol), MSP (Na2HPO4·7H2O, 2.68 g, 10 mmol), and 20 mL of deionised water were mixed in a round-bottomed flask and stirred in a 100 °C oil bath for 1 h. Polyethylene glycol 200 (PEG-200, 0.5 g, 2.5 mmol) was added dropwise as a dispersant to help dissolve the β-CD in the water. The mixture was transferred into a Petri dish (φ160 mm) and heated in an oven at 155 °C for 10 h. After cooling to room temperature, the resulting condensation polymer was ground and soaked in 500 mL of deionised water, then suction-filtered and rinsed with large amounts of 0.1 M HCl, deionised water, 0.1 M NaOH, deionised water, and methanol to remove the unreacted materials and catalyst. The final product was dried under vacuum at 60 °C overnight 41.

Twenty micrograms of β-CD-EDTA modified with Fe3O4 and 1.0 mg of CPT were dissolved in a mixed solvent system of 2 mL PBS and 2 mL ethanol. The system was left to equilibrate under constant stirring for 24 h at 50 °C in the dark. After the organic solvent was completely evaporated under vacuum, the suspension was filtered, and the filtrate containing the β-CD-EDTA modified with Fe3O4 (β-CD-EDTA-Fe3O4) was lyophilised to obtain a dry yellow powder 42.

Characterisation studies. FT-IR analysis. A small quantity of Fe3O4, drug-unloaded nanocarrier, or drug-loaded nanocarrier was separately mixed with 200 mg KBr and compressed to form tablets. These tablets were scanned on a Fourier Transform Infrared Spectrometer (Spectrum GX-1, PerkinElmer, USA) in the spectral region of 4,000-400 cm⁻¹.

Transmission Electron Microscopy (TEM). The structural morphology and crystallite size of the samples were further investigated via high-resolution transmission electron microscopy (HRTEM, TECNAI F30). For HRTEM analysis, the as-synthesised NPs and their composites were dispersed in ethanol with the help of ultrasonication for 15 min and then loaded on a carbon-coated copper mesh.

Particle size analysis. The mean particle size (diameter, nm ± S.D.) and polydispersity index of the NPs were determined using a BECKMAN COULTER Delsa™ Nano C. Measurements were made at a 90° angle at 25 °C under suitable dilution conditions and were performed in triplicate.

Zeta potential measurement. The zeta potential of the NP dispersions was measured in mV with the BECKMAN COULTER Delsa™ Nano C to determine the surface charge and the potential physical stability of the nanosystem. The zeta potential of the NPs was measured in aqueous dispersion; measurements were made at a 120° angle at 25 °C and were performed in triplicate.

In-vitro drug release profile. The in-vitro release profiles of CPT from the CEF nanocarrier were examined for 100 minutes in acidic medium (pH 2.4) and PBS solution (pH 7.0) using the dialysis technique. The nanoparticles (10 mg) were placed in a dialysis tube (MWCO: 12,000 Da) with 5 mL of release medium, and the tube was then placed in 50 mL of double-distilled water at 37 °C and stirred continuously at 500 rpm. At specific time intervals, 2 mL of solution was withdrawn from the outer compartment and replaced with fresh double-distilled water (2 mL). The concentration of the released CPT was determined with a UV spectrophotometer at λmax 260 nm. The analysis was performed in triplicate for each sample.

Cell culture. HT29 (human colorectal adenocarcinoma) cells and A549 (adenocarcinomic human alveolar basal epithelial) cells were procured from the Laboratory of Vaccine and Immunotherapy (LIVES), Institute of Biosciences (IBS), UPM. The cell lines were grown adherently in RPMI medium supplemented with 10% foetal calf serum, 100 U/mL penicillin, and 100 μg/mL streptomycin at 37 °C under 5% CO2. Cells were seeded in 96-well plates at 1 × 10⁴ cells in 100 µl of cell culture medium and allowed to grow for 24 h to reach approximately 90% confluency.

Treatment with CPT-CEF. The final stock solution of each compound was made by dissolving it in 10% DMSO and cell culture medium. Multiple concentrations were made by serial dilution using cell culture medium, and the DMSO concentration was kept below 1% (v/v) in all analyses. As a vehicle control, complete cell culture medium was added to the cells without imposing any treatment.

MTT viability assay. The tetrazolium salt 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) assay was performed to determine cell viability and overall cytotoxicity at different concentrations of CPT-CEF on HT29 (human colon adenocarcinoma) and A549 (human lung adenocarcinoma) cells. The viability of the cancer cell lines in response to treatment with various concentrations of CPT-CEF, CPT, CEF, and FMN was determined using MTT as described by Mosmann 43,44. Cells were plated at a density of 1 × 10⁴ cells per well in 96-well plates and cultured at 37 °C for 24 h under 5% CO2 for cell attachment. Cells were then treated with the compounds mentioned above at concentrations of 250, 125, 62.5, 31.25, 15.62, 7.81, 3.91, and 1.95 µg/mL; the final concentration of DMSO in each well did not exceed 1% (v/v). Treated cells were tested after 24, 48, and 72 h of incubation, with three replicates for each dosage. Negative controls were performed with cell culture medium only. The MTT assay was then performed; the following procedure was conducted in dim light, as MTT is light-sensitive. To initiate the assay, 20 µl of MTT (Nacalai, Japan) (5 mg/mL) was added into each well and incubated for 3 h at 37 °C.
After incubation, the supernatants were carefully removed and 100 µl of DMSO was added into each well to solubilise the formazan product. The absorbance was measured using a plate reader (Sunrise™, Tecan) at 570 nm with a reference wavelength of 630 nm. Cell viability was calculated as the ratio of the absorbance of treated cells to that of blank controls. The IC50 value of CPT-CEF was determined, and this concentration was utilised for the subsequent assays.

Annexin V/PI assay. Detection of apoptosis was conducted using the Annexin V-FITC/PI apoptosis detection kit (BD Pharmingen™, USA), according to the manufacturer's protocol. Briefly, cells were plated at a density of 3 × 10⁵ per well in six-well plates and treated with different concentrations of CPT-CEF. After a 48-h incubation period, cells were collected, pooled, and washed twice with PBS. Cells were then resuspended in 1X Binding Buffer at a concentration of 1 × 10⁶ cells/mL, and 100 µL of the suspension (1 × 10⁵ cells) was transferred to a 5 mL culture tube; 5 µL of FITC Annexin V and 5 µL of PI were then added into the tube and incubated for 15 min at room temperature in the dark. Then, 400 µL of 1X Binding Buffer was added into each tube, and the contents were examined using a BD FACSAria flow cytometer (USA).

Mitochondrial depolarisation assay (JC-1). Mitochondrial depolarisation was determined using a JC-1 kit (BD Pharmingen™, USA). The cell treatment procedure was as described for the Annexin V/PI assay. Following treatment, 1 mL of each cell suspension was transferred into a sterile 15 mL polystyrene centrifuge tube. Cells were centrifuged at 400 × g for 5 min at room temperature and the supernatant was discarded; 0.5 mL of freshly prepared JC-1 Working Solution was added to each tube and incubated for 10-15 min at 37 °C in a CO2 incubator. Cells were then washed twice with 1X Assay Buffer, centrifuged at 400 × g for 5 min, finally resuspended in 0.5 mL of 1X Assay Buffer, and analysed by flow cytometry.

Cell cycle analysis. Cells were treated and incubated for 48 h, as explained above, and cell cycle analysis was carried out using a Propidium Iodide Flow Cytometry Kit for Cell Cycle Analysis (Abcam, UK). Cells (3 × 10⁵ per well) were grown in six-well plates and treated with multiple concentrations of CPT-CEF for 48 h. After treatment, the culture medium was removed and the cells were rinsed with PBS. Trypsin was used to dissociate the cells, and the culture medium and PBS rinses were collected and pooled. Cells were pelleted by centrifugation at 500 × g for 5 min, the supernatant was discarded, and the cells were washed with 1X PBS and centrifuged again at 500 × g for 5 min. Cells were then fixed with 66% ethanol on ice and stored at 4 °C for at least 2 h. The cells were centrifuged at 500 × g for 5 min, washed with 1 mL of 1X PBS, and centrifuged again, then gently resuspended in 200 µL of 1X propidium iodide + RNase staining solution. After incubation for 30 min in the dark at 37 °C, cells were analysed for DNA content using a FACSCalibur flow cytometer. Cell distribution among the cell cycle phases and the percentage of apoptotic cells were evaluated as previously described 45. The cell cycle distribution is shown as the percentage of cells containing 2n DNA (G1 phase), 4n DNA (G2 and M phases), and intermediate amounts of DNA between 2n and 4n (S phase), assessed via PI staining. The apoptotic population is defined as the percentage of cells with DNA content lower than 2n (sub-G1 phase).
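To make the DNA-content classification just described concrete, the sketch below bins PI fluorescence (proportional to DNA content) into sub-G1, G1, S, and G2/M fractions. It is a generic illustration rather than the authors' analysis; the peak channel and window width are placeholders that would normally be set from the G1 and G2/M peaks of an untreated control.

```python
# Minimal sketch of cell-cycle phase assignment from PI fluorescence
# (proportional to DNA content). Peak positions/windows are placeholders.
from collections import Counter

G1_PEAK = 200.0          # channel of the 2n (G1) peak, set from controls
G2M_PEAK = 2 * G1_PEAK   # 4n DNA gives roughly twice the G1 signal
HW = 0.15                # +/-15% window around each peak

def phase(signal):
    """Assign one event to sub-G1, G1, S, G2/M, or >4n by DNA content."""
    if signal < G1_PEAK * (1 - HW):
        return "sub-G1 (<2n, apoptotic)"
    if signal <= G1_PEAK * (1 + HW):
        return "G1 (2n)"
    if signal < G2M_PEAK * (1 - HW):
        return "S (between 2n and 4n)"
    if signal <= G2M_PEAK * (1 + HW):
        return "G2/M (4n)"
    return ">4n (aggregates/polyploid)"

events = [120.0, 195.0, 210.0, 290.0, 350.0, 405.0, 60.0]  # made-up channels
dist = Counter(phase(s) for s in events)
for p, n in sorted(dist.items()):
    print(f"{p}: {100 * n / len(events):.1f}%")
```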
Caspase-3 assay. Cells (1 × 10⁵ per well) were cultured overnight in six-well plates and then treated with various concentrations of CPT-CEF (30, 60, and 130 µg/mL) for 48 h; cells without treatment were used as controls. Caspase-3 activity was assessed according to the manufacturer's instructions for the Caspase-3 Colorimetric Assay Kit (R&D Systems, USA). Briefly, cells were harvested after treatment, lysed in 50 µL of lysis buffer on ice for 10 min, and centrifuged at 10,000 × g for 1 min. After centrifugation, 50 µl of supernatant was incubated with the caspase-3 substrate in reaction buffer. Samples were incubated in 96-well, flat-bottom microplates at 37 °C for 2 h, and the amount of released pNA (p-nitroaniline) was measured using a microplate reader (Bio-Rad, Hercules, CA, USA) at a wavelength of 405 nm. Background readings were determined from wells containing culture medium without cells and without substrate. Protein concentration was determined using the Pierce 660 nm Protein Assay Reagent.

AO/PI staining. The morphological changes in CPT-CEF-treated HT29 and A549 cells were characterised using acridine orange (AO) and propidium iodide (PI) double staining, according to the method described by Hajiaghaalipour et al. 46,47 with minor modifications. Briefly, cells were plated at a density of 1 × 10⁵ cells/mL in a six-well plate and treated with the IC50 concentration of CPT-CEF in an atmosphere of 5% CO2 at 37 °C for 48 h. The cells were then trypsinized with trypsin-EDTA, washed twice with PBS, and centrifuged for 5 min to remove the remaining medium. An equal volume of fluorescent dye containing AO (50 μg/mL) and PI (50 μg/mL) was added to the cell pellet, and the freshly stained cells were observed under a UV-fluorescence microscope within 30 min.

Magnetic assay. A simple magnetic assay was performed to assess the magnetic responsiveness of CPT-CEF. A T75 flask confluent with HT29 cells was treated with the IC50 concentration of CPT-CEF, and the flask was then placed on a layer of magnets positioned under one side of the flask, with the other side left free of magnets. After 48 h of treatment, changes in cell morphology were observed and compared between the side of the flask above the magnets and the side without magnets.

Statistical analysis. Results are expressed as the mean ± standard deviation (SD). Statistical comparisons of mean values were performed by one-way ANOVA using SPSS 22.0. P < 0.05 was considered to indicate a statistically significant difference.

Conclusion

In this study, we thoroughly examined the mechanism of action of CPT-CEF by analysing the nuclear changes, mitochondrial membrane potential, caspase-like protease activity, and cytosolic changes associated with apoptosis in HT29 cells. CPT-CEF induced apoptosis and growth inhibition through cell cycle arrest as well as activation of mitochondrial apoptotic pathways. We demonstrated that the soluble CPT-CEF formulation exhibits anti-cancer properties while carrying only a low concentration of CPT and remaining magnetically active. The selected assays performed in A549 cells further indicate the potential of CPT-CEF for use in the treatment of cancers other than colon cancer. With further improvements, this new formulation could be a promising nanocarrier for CPT delivery in effective chemotherapy of colon cancer.
Disparities in access to primary care are growing wider in Canada

Canadian provinces and territories have undertaken varied reforms to how primary care is funded, organized, and delivered, but the equity impacts of these reforms are unclear. We explore disparities in access to primary care by income, educational attainment, dwelling ownership, immigration, racialization, place of residence (metropolitan/non-metropolitan), and sex/gender, and how these have changed over time, using data from the Canadian Community Health Survey (2007/08 and 2015/16 or 2017/18). We observe disparities by income, educational attainment, dwelling ownership, recent immigration, immigration (regular place of care), racialization (regular place of care), and sex/gender. Disparities are persistent over time or increasing in the case of income and racialization (regular medical provider and consulted with a medical professional). Primary care policy decisions that do not explicitly consider existing inequities may continue to entrench them. Careful study of the equity impacts of ongoing policy reforms is needed.

Introduction

Strong primary care systems have been associated with greater equity in population health, while contributing to better patient outcomes, provider experiences, and efficiency of health systems, 1-6 but inequities in access to primary care may be growing. Primary care is the first and main point of access to healthcare in Canada, and so more equitable access to primary care (here defined as similar access to services for people with similar needs, regardless of social position 2,3,7,8) has the potential to improve equity in access throughout the system and, ideally, contribute to more equitable health outcomes. 1 We note that "primary care" is distinct from "primary healthcare," which has a broader population orientation and equity as a core principle; 9-11 we use the term "primary care" as it more accurately describes services in Canada.

Canadian provinces and territories have undertaken varied macro- and meso-level reforms to how primary care is organized, financed, delivered, and held accountable. 1,12-15 Ideally these efforts would promote more equitable access to primary care. However, in implementing new models of care, areas with resources and readiness for change may be prioritized for feasibility, even if they are not the areas with greatest need. 14 It may also be that people in positions of advantage can more easily navigate access to innovative models. 1 Indeed, research shows that people experiencing economic or social marginalization were less likely to be enrolled in new models in Ontario 16,17 and Alberta. 18 In British Columbia, primary care visits, continuity, and specialist referrals fell more rapidly in low-income neighbourhoods relative to high-income neighbourhoods over a period of reform. 19 Similarly, in Ontario, cancer screening gaps by income quintile grew wider in the context of primary care payment reform. 20 National studies have highlighted inequities in healthcare service use 21-23 and persistent reports of unmet need 24 in Canada. This study builds on this research, contributing national data documenting the extent to which access to primary care by income, educational attainment, dwelling ownership, immigration, racialization, sex/gender, and urban/rural residence has changed in Canada over a time of ongoing primary care reform.
Methods

Study data and population. We used data from the Canadian Community Health Survey (CCHS), a national health survey of people residing in all provinces and territories in Canada. We accessed the Public Use Microdata File (PUMF), a publicly available version of the CCHS file, 25-27 and so ethics approval for this study was not required. We use data from 2007/08 and 2015/16 or 2017/18 (depending on the measure of access, described below). These years were selected as the earliest and latest for which comparable variables describing access to primary care and measures of social position were available. This also captures a period of sustained primary care reform, but does not include changes in the context of the COVID-19 pandemic. All survey respondents in the selected years were included in the study. 28-30 The CCHS does not include people younger than 12 years of age, Indigenous people living on reserve, people who are institutionalized, youth living in foster homes, members of the armed forces, and people who live in some remote regions. 25-27

Measuring access to primary care. We identified items that inform the process of accessing primary care and that can be compared over time. The chosen questions are informed by the framework proposed by Levesque et al., 31 which identifies stages in the process of accessing care. The questions "Do you have a regular medical provider?" and "Do you have a regular place of care?" focus on the availability of primary care services. We include having consulted with a medical professional as a measure of use of health services, though this does not specify whether the consultation was with a primary care provider, or whether it was episodic in nature or part of a continuous relationship. This question was asked in the 2015/16 CCHS but not the 2017/18 CCHS.

Stratification variables. Stratification variables were informed by the Canadian Institute for Health Information (CIHI) list of equity stratifiers, reflecting measures of social position relevant to health and healthcare use. 32 We identified variables that were consistently measured across survey years and dichotomized them to support clarity and interpretation, given the number of variables examined. Categories are as follows: household income (low vs. middle/high), household education level (secondary school graduation or less vs. post-secondary), household dwelling ownership (dwelling owned by a household member vs. dwelling not owned by a household member), immigration (recent or non-recent immigrant vs. born in Canada), racialization (White vs. racialized), and location of residence, based on health region (metropolitan vs. non-metropolitan). There are important limitations of these variables that we wish to emphasize. Respondents were asked if they are male or female; it is not clear whether this would be interpreted by respondents as legal sex, sex assigned at birth, or gender, so we label this variable sex/gender to reflect this uncertainty. Only a binary variable reflecting racialization is available, with all people not racialized as White grouped together. While we label these categories "White" and "racialized," we recognize the potential harm in framing Whiteness as a default, and also that this labelling does not reflect that Whiteness itself is racialized in ways that confer power and privilege in the Canadian context. Location of residence was assigned based on health region of residence.
Health regions that include a Census Metropolitan Area 33 were classified as metropolitan; all others were classified as non-metropolitan. This reflects residence in health regions that likely have more extensive resources for healthcare, including hospitals and tertiary care centres, but does not reflect individual rural/urban residence, as health regions are large units that may include both urban and rural places.

Measuring health need. Equity is often defined as equal access for equal need. In multivariable analysis, we include variables that may be associated with the stratifiers and also with need for primary care, 17,34 including age, sex/gender (female/male), self-reported health (excellent, good, fair, or poor), and the presence of chronic conditions (a series of binary variables reflecting having been told by a medical professional that the respondent has asthma, arthritis, high blood pressure, diabetes, heart disease, cancer, stroke effects, anxiety disorders, and/or mood disorders).

Statistical analysis

We calculated the percentage of people with access for each of the three measures, stratified by measures of social position, at both the earlier and later time periods, applying survey weights in all descriptive analysis. Respondents with missing responses for service use variables were not included in the analysis. We calculated the difference in percentages over time and by stratifier, and we report the difference in differences over time, with negative values reflecting growing disparities. We also graph change over time to show patterns visually. Note that we use "disparities" to describe differences in primary care access or use within the reporting of quantitative methods and results; we use "inequities" to reflect our interpretation that these differences are unfair and unjust, as they reflect differences by social position and not by need for healthcare.

For each access measure and measure of social position, we constructed a binary logistic regression model examining the odds of access (greater access: 1; less access: 0), including data from both time periods. Each model included a binary variable for year (most recent vs. earliest), the measure of social position (with the reference being the group with relative advantage), an interaction between year and the measure of social position, and measures of need (sex/gender, self-reported health, and chronic conditions). The estimated odds ratios for each measure of social position informed whether we observe a disparity in access overall (OR < 1 comparing groups that experience marginalization to those with relative advantage), whether access declined over time generally (OR < 1), and whether declines were greater for groups with lower social position (OR < 1), i.e., whether disparities grew wider over time. We applied scaled survey weights in multivariable analysis, 27 using the "survey" package 35 in R version 4.1.1, with standard errors corrected for cluster sampling. 36,37 We performed two additional analyses to confirm the consistency of findings: we removed sex/gender as an adjustment variable and stratified by sex/gender to determine whether changes over time differ by sex/gender categories, and we completed the main analysis stratified by province/territory to confirm that findings are similar across settings.
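For clarity, the two quantitative devices described above can be restated explicitly (our notational rendering of the stated analysis, not equations from the article). For an access measure $p$, comparing a group experiencing marginalization ($m$) with the group with relative advantage ($a$) between the earlier period $t_1$ and the later period $t_2$, the difference in differences is

$$\mathrm{DiD} = \left(p_{m,t_2} - p_{a,t_2}\right) - \left(p_{m,t_1} - p_{a,t_1}\right),$$

with negative values reflecting growing disparities. The survey-weighted logistic models take the form

$$\operatorname{logit} \Pr(Y_i = 1) = \beta_0 + \beta_1\,\mathrm{Year}_i + \beta_2\,\mathrm{Position}_i + \beta_3\,(\mathrm{Year}_i \times \mathrm{Position}_i) + \boldsymbol{\gamma}^{\top}\mathbf{x}_i,$$

where $Y_i$ indicates greater access, $\mathrm{Year}_i$ indicates the most recent period, $\mathrm{Position}_i$ indicates the group experiencing marginalization, and $\mathbf{x}_i$ collects the need covariates (age, sex/gender, self-reported health, chronic conditions). Here $e^{\beta_2} < 1$ corresponds to a disparity in the earlier period, $e^{\beta_1} < 1$ to declining access over time in the reference group, and $e^{\beta_3} < 1$ to a disparity that widened over time.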
Results

In 2007/8, the percentage of people with a regular medical provider was lower among people with low income (82.01% vs. 85.53%), people living in dwellings not owned by a household member (74.62% vs. 87.96%), recent immigrants (72.11% vs. 84.57% of people born in Canada), people racialized as non-White (82.51% vs. 85.00%), and respondents described as male (80.42% vs. 88.76%) (Table 1, Figure 1). The percentage of people with a regular medical provider or place of care differed less by educational attainment and metro/non-metro residence (differences of 0.61 and -0.01 percentage points, respectively). Similar patterns were observed when examining the percentage of participants with a regular place of care. In 2007/8, a slightly higher percentage of people with low income had consulted a medical professional (77.18% vs. 76.45%), but patterns were otherwise similar to the other access variables. Over time, disparities based on income, dwelling ownership, and racialization grew visibly wider (Figure 1).

Multivariable models adjusting for variables reflecting need for care reinforce the observations of disparities by income, educational attainment, dwelling ownership, recent immigration, immigration (regular place of care), racialization (regular place of care), and sex/gender (Figure 2, Table 2). Interaction terms show that disparities persisted over time or increased in the case of income and racialization (regular medical provider, consulted with a medical professional). Exceptions are that disparities in access based on recent immigration attenuated, as did differences by sex/gender. People in non-metropolitan health regions had higher odds of having a regular place of care overall (OR 1.21; 1.06, 1.38), though we did not observe differences for the other access variables. That a slightly higher percentage of people with low income had consulted a medical professional compared to those with higher income (77.18% vs. 76.45%) in 2007/8 likely reflects greater need for care; in adjusted models, the odds of consultation are lower for people with low household income.

Interpretation

This analysis of national health survey data shows that disparities in access to primary care exist and have persisted or increased over time. These findings are consistent with existing studies highlighting disparities in access to primary care, 21,22,38 while extending them to show that disparities have not attenuated over time. The findings add to observations of declining service use and widening disparities by neighbourhood income observed in health system data, 19 showing similar patterns with individual-level measures of social position and a different data source.

[Figure 2 caption: ... (consulted with medical professional). Note: odds ratios below 1 in panel (a) reflect disparities in access to care, and odds ratios below 1 in panel (b) reflect growing disparities in access. All models adjusted for age, sex/gender, self-reported health (excellent, good, fair, or poor), and chronic conditions (asthma, arthritis, high blood pressure, diabetes, heart disease, cancer, stroke effects, anxiety disorders, and/or mood disorders).]

Findings are also consistent with analyses of specific reform efforts that show differential access to reforms by income. 19-22 The period captured between observations in this study saw primary care reform across all provinces and territories. 1,12-14 Reform efforts involved a range of strategies to improve access to primary care, but equity was not an explicit objective. 13,14 It is often observed that health systems with stronger primary care produce more equitable outcomes, based on comparisons across jurisdictions. 1-4
Our findings highlight that primary care reforms that do not explicitly integrate equity as an objective are unlikely to improve equity in access to primary care or, by extension, equity in health outcomes more broadly. 39-41 There has been important work to outline dimensions of equity-oriented healthcare, including trauma- and violence-informed care, contextually tailored care, and culturally safe care at the clinic or organizational level, 39 but this is not yet widely implemented. While there are models of team-based care with express commitments to meeting the needs of the communities they serve in an equity-informed manner, such as community health centres, 42 new investment in such models was limited over the study period. 13,14 Other research has highlighted how equity-mandated organizations must navigate competing discourses within health systems, with equity framed as an "add-on" to usual primary care, rather than an integral and fundamental aspect of care. 43 To support the delivery of equity-oriented care, accountability and performance frameworks must fully align with an equity mandate, and patterns of funding and resource allocation must be tailored to needs and responsive as needs change. 44

[Table 2 caption: Adjusted odds ratios reflecting changes in access over time, disparities in access, and changes in disparities in Canada, 2007/8 to 2017/8 (regular medical provider and place of care) or 2015/6 (consulted with medical professional).]

Box 1. Summary of recommendations to support equity in access to primary care
· Include equity as an integral objective in primary care reform.
· Expand training and capacity for trauma- and violence-informed care, contextually tailored care, and culturally safe care.
· Support models of team-based care with equity mandates and accountability to communities, such as community health centres.
· Ensure equity mandates are captured in accountability and performance frameworks.
· Ensure patterns of funding and resource allocation are tailored to needs and responsive as needs change.

Limitations

This analysis is limited in several ways. The measures available in the CCHS are limited: access to a medical professional is not specific to primary care, question wording varied slightly between the early and later years, and there is no information about the quality of care received. We can only observe whether people are racialized as non-White or White, and there is no measure of gender; this centres Whiteness and reinforces cisnormativity. Location of residence assigned at the level of health region likely obscures substantial differences in access within regions, including between urban, small town, rural, and remote places. Measures of need are also limited, especially given that people must have received healthcare to have identified chronic conditions; this means we likely under-adjust for need in the multivariable analyses. This research reveals only broad patterns and trends. Different information and close engagement with equity-deserving communities are needed to understand access to care in greater depth and to plan more equitable primary care delivery. CCHS data are particularly limited in their ability to inform primary care among Indigenous Peoples in Canada. Though inequitable access to preventative and primary care among First Nations, Inuit, and Métis people is documented elsewhere, 45 Indigenous Peoples are invisible in this analysis: the CCHS does not include people living on reserves and only identifies Indigenous people following the 2015/16 survey.
While there are innovative primary care reforms led by Indigenous communities and organizations that warrant study, 44,46,47 our analysis cannot capture the impacts of these innovations on health equity within communities and nations; different approaches are needed to explore them. We chose to make comparisons prior to the COVID-19 pandemic, but widespread changes to primary care, including the rapid expansion of virtual options, may have equity impacts that require careful study. Since the study period, it has become even more difficult to obtain consistent and reliable access to a primary care provider, 48 and provinces are considering various directions for future reform, so ongoing tracking of disparities is important.

Conclusions

Primary care is receiving renewed attention, as multiple jurisdictions are struggling to make sure people can access needed care. This analysis suggests that approaches to primary care transformation that do not explicitly consider equity may continue to entrench inequities. Health leaders are encouraged to design primary care transformation efforts with equity as a priority area of focus. This would require allocating new investments in primary care to meet the needs of underserved populations and including equity in accountability frameworks. Careful study of the equity impacts of ongoing policy reforms is needed.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Canadian Institutes of Health Research Quadruple Aim and Equity Catalyst Grant 180996.
REPRODUCTIVE ECOLOGY OF BIRDSVILLE INDIGO, INDIGOFERA LINNAEI ALI. (FABACEAE)

Indigofera linnaei, a low ground herb, is an important species of herbaceous cover. It forms extensive mats of populations carpeting the soil, but it is not known how it is able to increase its population size and spread into other areas. Knowledge of its reproductive ecology is important for understanding its reproductive capabilities and evaluating its use in eco-restoration initiatives, and this study aims to provide that information. I. linnaei flowers are papilionaceous, bisexual, monostylous, weakly protandrous, self-compatible and facultatively xenogamous, and possess an explosive pollination mechanism. High winds, high temperature and heavy rain trip the pollination mechanism and effect only selfing, while bees and lycaenid butterflies trip the pollination mechanism and effect both selfing and outcrossing. Thrips use the flower buds for breeding, emerge during anthesis and pollinate the flowers while feeding on pollen and nectar. The fruit is a pod which dehisces explosively to disperse the seeds; long-distance dispersal is facilitated by their irregular shape and oily nature, and by their being carried away by wind and rain water. Therefore, ballistichory, anemochory and hydrochory are the functional modes of seed dispersal that enable the plant to invade and colonize especially open, sandy, dry areas. The plant provides food for certain local insects and protects the soil cover with its clustered root system and its spreading, multi-stemmed branching pattern. Therefore, I. linnaei has potential for use in the restoration of destroyed, degraded and damaged habitats.

INTRODUCTION

The herb layer has structural and functional significance in both forest and non-forest ecosystems. Certain low ground herbs have the ability to retard soil, water and nutrient erosion, and these abilities vary among weed species (Kumar et al., 1997). In this context, Gebrekirstos et al. (2006) stated that herbaceous weeds are important in reforestation programs for initiating ground flora and paving the way for successional communities. However, a lack of knowledge of the biology and ecology of herbaceous weeds and of their adaptation to environmental stress is a drawback to using them in reforestation programs to counteract the effects of degraded sites, where soil water is a major limiting factor in the growth, species composition and distribution of species.
Collevatti et al. (1998) stated that knowledge of invertebrates and their interactions with native plants should be included in the design of restoration programs and in ecological resilience. In this context, the herbaceous species of Indigofera assume importance, since they normally grow as minor or major invasive weeds, and knowledge of the reproductive ecology of these species is essential for using them in the restoration of degraded areas. Wilson (1987) and Wilson & Ross (2004) provided a general account of the genus Indigofera. It is a large pantropical genus of Fabaceae with over 750 species, mostly shrubs with a few small trees and annual or perennial herbs, distributed in all arid zones of the tropical and subtropical regions of the world. The majority of taxa occur in Africa, with other centres of diversity from Arabia to South East Asia, Mexico to subtropical North and South America, Australia and Madagascar. Most species have flowers in shades of red, but there are a few white- and yellow-flowered species. The flowers are hermaphrodite, zygomorphic, and possess a conspicuously specialized explosive pollination mechanism. The fruit is a legume pod with seeds dispersed in an explosive manner. Indigofera species have not been studied for their reproductive ecology despite their importance for painting, medicinal and cosmetic purposes since ancient times. Therefore, the present study was undertaken to provide information on the floral morphology, biology, flower visitors, pollination, fruiting ecology and seed dispersal of Indigofera linnaei. It is an important species of herbaceous cover and forms extensive mats of populations carpeting the soil. Knowledge of its reproductive ecology is important for understanding its reproductive capabilities and evaluating its use in eco-restoration initiatives.

MATERIALS AND METHODS

Wild patches of Indigofera linnaei growing in Visakhapatnam and its surroundings (17°42'N latitude and 82°18'E longitude) were used for the study. Field trips were conducted to record phenological aspects. Ten inflorescences that had not yet initiated flowering, on five plants each situated 500 m apart, were tagged in August and followed daily to record the duration of flowering, the anthesis schedule and the timing of anther dehiscence. Twenty-five fresh flowers were used to record the floral morphological details. Nectar could not be measured and analyzed because it is secreted in minute quantities, which are further depleted by thrips during the life of the mature bud and flower. Twenty mature but un-dehisced anthers, two anthers per flower/plant from ten plants, were collected and examined for pollen output as per the protocol described in Dafni et al. (2005). Pollen output per flower and the pollen-ovule ratio were calculated using the formulas described in Cruden (1977). Fresh pollen grains were collected from virgin flowers of bagged inflorescences after anther dehiscence and kept in a Petri dish under laboratory conditions. They were placed on cavity slides with 15% pure sucrose solution and covered with a cover slip, and the percentage of pollen grains forming pollen tubes was recorded at hourly intervals. Ten flowers each from five individuals were used to test stigma receptivity. Receptivity was tested with hydrogen peroxide from the mature bud stage until flower closure/drop, as per Dafni et al. (2005).
(2005). In addition, receptivity was observed visually, noting whether the stigma was shiny, wet, changing colour or withering. Twenty inflorescences other than the ones used for the phenological study were tagged prior to the initiation of their flowering and followed for four weeks to record fruit and seed set rates in open pollinations. A collection of 274 pods was used to record the percentage of 1-seeded and 2-seeded pods. Fruit and seed morphological characteristics were observed in detail to evaluate their adaptations for dispersal by different means. Field visits were made during the rainy season to note aspects of seed germination and the production of new plants.

Insects foraging at the flowers were observed from morning to evening on four different days for their mode of approach, landing, probing behavior and contact with the floral sexual organs. Bees were identified using the representative specimens available with the Department of Environmental Sciences, Andhra University, Visakhapatnam. Butterflies were identified by consulting the books of Kunte (2007) and Gunathilagaraj et al. (1998).

Foraging visits of insects were recorded in a 1 × 1 m area of flowering patch for 10 min at each hour for the entire day on four different days, and the data were tabulated to record the foraging pattern and the percentage of visits made by bees and butterflies. The pollen/nectar collection behaviour of the insects was carefully observed to assess their role in effecting pollination. Ten specimens of each insect species were captured during 0800-1100 h and brought to the laboratory. Each specimen was washed in ethyl alcohol, stained with aniline blue on a glass slide and observed under a microscope to count the number of pollen grains present. From this, the average number of pollen grains carried by each insect species was calculated to estimate pollen carryover efficiency.

Phenology

The plant is a perennial prostrate herb which grows naturally in open and fallow fields and forest lands (Figure 1a). Flowering occurs throughout the year if the soil is sufficiently wet, but peak flowering occurred during July-September. In areas where the soil is dry during the summer season, the plant withers, but re-growth occurs from the perennial root stock with the commencement of the rainy season in June. Seeds also germinate during the rainy season and produce new plants; these plants flower from August onwards and continue flowering as long as the soil has enough moisture for survival and reproduction. The flowers were borne in axillary racemes covered with dense hairs; the inflorescence was crown-shaped, with 11.63 ± 3.15 flowers which opened acropetally over a span of 4-5 days (Figure 1b).
Flower morphology

The flowers were small (5.6 ± 0.4 mm long, 6.2 ± 0.7 mm wide), pinkish red, odourless, zygomorphic and bisexual. The calyx was green, densely hairy outside, glabrous inside, and consisted of five lanceolate sepals; individual sepals were 2.6 ± 0.4 mm long and 1 mm wide. The corolla was partially gamopetalous, pinkish red and papilionaceous, consisting of a broad standard petal (4.9 ± 0.7 mm long and 4.6 ± 0.4 mm wide), two spathulate wing petals (4.5 ± 0.4 mm long and 2 mm wide) and two bottom petals (3.6 ± 0.4 mm long and 2 mm wide) grown together to form the acuminate keel. All five petals were glabrous both outside and inside. The keel and the wing petals were attached by means of two notched folds. The complex of keel and wings served as a landing platform for insects visiting the flower from the front. The androecium consisted of ten creamy white, 3-4 mm long, glabrous, diadelphous stamens; nine filaments were fused at the base into a sheath open along the upper side, while the tenth filament was free and lay on the others. All ten filaments were tipped with 1 mm long, creamy white, basifixed, erect anthers. The ovary (2 mm long) was green, densely hairy outside, unicarpellary and unilocular, with two sessile ovules arranged on marginal placentation, and lay in the sheath of the filaments along the cylindrical part of the keel (Figure 2h-j). It had a long style (2.5 mm) with a bearded stigma (Figure 2g). The whole pistil was the same length as the stamens, but the style tip with the stigma curved upwards, facing the standard petal. The entire reproductive column was housed inside the keel petals.

Floral biology

Mature buds opened during 0700-0900 h (Figure 2a). Unfolding of the standard petal and wing petals indicated flower opening (Figure 2b-d). The keel petals did not unfold but remained tensed. All ten anthers dehisced by longitudinal slits in the mature bud stage, approximately half an hour prior to flower opening. The number of pollen grains per anther was 197.6 ± 17.03 and per flower 1,976 ± 170.36. The pollen-ovule ratio was 988:1. The pollen grains were pale yellow, powdery, triangular in polar view, isopolar, bilaterally symmetrical, with smooth exine, tricolporate, and 27.39 ± 3.80 µm (Figure 2f). In vitro pollen germination tests (germination was recorded in 15% pure sucrose solution) indicated that pollen grains were viable and able to germinate as soon as the anthers dehisced, and remained viable until 1630 h with a gradual decrease in germination rate (Table 1). The high pollen germination rate persisted only until 1230 h.

The stigma became receptive at the end of anthesis and remained so until the evening of the following day. The stigma was shiny and wet during the receptive phase. A nectary is situated at the base of the flower, limited to the area of the receptacle. Two small fenestrae present between the joined and the free filaments facilitate access to the nectar by foragers. Nectar was secreted in minute quantity during the mature bud stage and was concealed by the hook-like structures of the standard petal, which hold the basal part of the wing and keel petals intact. It was present only in traces in open flowers owing to feeding by thrips in the buds. Thrips use the growing buds for breeding and emerge by the time the buds bloom. During the mature bud, anthesis and post-anthesis stages, the thrips continually fed on nectar and pollen. The petals and stamens fell off on the third day in pollinated flowers, while the entire flower fell off in un-pollinated flowers.
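The pollen-ovule ratio reported above follows directly from these counts via Cruden's (1977) definition (pollen grains per flower divided by ovules per flower); written out:

```latex
P_{\mathrm{flower}} = \bar{P}_{\mathrm{anther}} \times n_{\mathrm{anthers}} = 197.6 \times 10 \approx 1976,
\qquad
P/O = \frac{P_{\mathrm{flower}}}{n_{\mathrm{ovules}}} = \frac{1976}{2} = 988.
```

The Discussion below compares this value against Cruden's reference ranges for the different breeding systems.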
Pollination mechanism

The staminal column and pistil are held under pressure within the pouch-like keel petals; when the pressure is released, the staminal column and stigma snap forward against the standard petal, causing the pollen to be released instantly (Figure 2e). This process constitutes the tripping mechanism, and it is accomplished when the keel petals are pressed down by a foraging insect. High wind speed, heavy rain and high temperature that weaken the turgidity of the restraining keel tissues can also trip the keel petals, which then spontaneously release the staminal column and stigma. If the flowers are untouched, the stamens and pistil remain within the keel and subsequently fall off without pollination. Insects landing on the front of the flower, especially on the wing petals, ride the keel petals with their legs by pushing down the wing petals, which in turn pull back the keel petals by the notched folds. Once tripping occurs, neither the staminal column and pistil nor the keel petals return to their original positions. Such a wing-keel complex tripping mechanism characterizes the explosive pollination mechanism. Since the stigma stands above the anthers, a probing insect first brushes against the stigma and then against the anthers with its ventral side. If the insect carries pollen on its ventral side from previously visited flowers, cross-pollination occurs; otherwise self-pollination occurs. When abiotic factors (wind, rain and temperature) trip the keel complex, only self-pollination occurs. The explosive pollination mechanism functional in this plant is thus adapted for manipulation by both biotic and abiotic factors.

Insect visitors and pollination

Thrips were the first and resident foragers. They were ineffective in tripping the keel petals but contributed primarily to self-pollination by feeding on both pollen and nectar during the mature bud stage and during and after anthesis. The flowers were foraged by bees and lycaenid butterflies (Table 2). The bees were Apis florea and Ceratina sp. The lycaenid butterflies were Zizeeria karsandra (Figure 3a), Zizina otis (Figure 3b), Freyeria trochylus (Figure 3c), Chilades laius (Figure 3d) and Chilades pandava (Figure 3e). Foraging was observed during daytime from 0800 to 1300 h, with concentrated foraging activity during 0900-1100 h (Figure 4). Bees made 33% and lycaenids 67% of total foraging visits (Figure 5). The bees always approached the flower from the front with the head facing the standard petal, while the lycaenids also approached the flower from the top of the standard petal with the head facing the wing-keel complex. The bees inserted the glossa under the standard petal to reach the nectar, while the butterflies inserted the proboscis gently through the gap between the standard petal and the wing-keel complex for nectar collection. In this probing behavior, the keel petals were tripped, the stigma brushed and the pollen ejected violently against the ventral side of the thorax and abdomen of the insects, ending in cross- and/or self-pollination. When the butterflies approached from the top of the standard petal, they slowly inserted the proboscis into the flower base for nectar collection, during which the keel petals were tripped, the stigma brushed and the pollen ejected violently against the head and adjacent parts; this also ended in cross- and/or self-pollination. The bees turned around towards the staminal column to collect pollen from the anthers
and sometimes they also collected pollen that had been deposited on the inside of the standard petal. They did not discriminate between the stigma and the anthers while probing for pollen and, in effect, invariably gathered pollen from the stigma as well. In most foraging visits, the bees probed for both nectar and pollen, while in other visits they collected either nectar or pollen. Comparatively, bees were more effective in tripping the flowers and effecting pollination. The small and delicate flowers equipped with the explosive pollination mechanism were found to facilitate tripping of the keel petals by the lycaenid butterflies. These insects visited 5-6 flowers consecutively, and in some cases many more, up to 21-25, before leaving the flowering patch. The insects were not disturbed during probing by the tripping of the keel petals and made consistent foraging visits as long as the flower density was prominent at plant level. As the flowers were depleted of nectar by thrips, the insects made multiple visits to the same flowers in quest of nectar and/or pollen. Pollen carrying efficiency, evaluated by body washings of captured insects, indicated that bees were more efficient in carrying pollen than butterflies; the average number of pollen grains recorded ranged from 43.1 to 30.2 for honey bees and small carpenter bees and from 22.1 to 17.1 for butterflies (Table 3). The insects foraged the flowers in quick succession from one inflorescence to another on the same and/or different plants in order to collect as much pollen and/or nectar as possible; this inter-inflorescence/plant foraging activity was considered to promote cross-pollination.

Fruiting ecology

The pollinated and fertilized flowers grew continually and produced pods (fruits) within three weeks. An inflorescence produced 8.46 ± 2.88 pods. The natural fruit set rate was 91.33% and the seed set rate 81.16%. The pods were green initially and brown when mature (Figure 2k); they were sessile, non-fleshy, cylindrical, falcate, hairy, 4.3 ± 0.5 mm long and 2 mm wide, and produced one or two seeds. The 1-seeded pod rate was 23%, while the 2-seeded pod rate was 77%. The seeds were irregularly shaped, oily, light brown, glabrous, lustrous, 2 mm long and 2 mm wide (Figure 2m). The pods dispersed seeds by explosive dehiscence (Figure 2l). The dispersed seeds fall to the ground quickly due to the low height of the mother plant. Subsequently, the seeds are dispersed by wind during the summer season and by rain water during the rainy season. The seeds were dormant and germinated only during the rainy season, which starts in June. They did not germinate under lab conditions. Erratic rainfall and long dry spells during the rainy season terminated the growth and development of seedlings. Old plants with their robust underground root systems withstood fluctuations in rainfall levels and continued their phenological events sequentially. However, the aerial parts withered away during extremely dry soil conditions, which occur during the dry season only.
DISCUSSION

Indigofera linnaei grows throughout the year and displays profuse to sporadic flowering and intense fruiting if the soil is sufficiently wet and nutrient-rich. The plant disappears if the soil is dry, but the woody root stock persists below ground, and the plant re-appears from this rooting system during the rainy season. Seed production is continuous in plants growing in areas of moist soils, but seeds are dormant and germinate only during the rainy season to produce new plants and populations. With these dual modes of regeneration, the plant exhibits prolific growth during the rainy season. Individual plants grow side by side, form pure populations and closely cover the soil. The flowers are pinkish red, borne in leaf axils, and stand out above the foliage; this pattern of flower production gives the impression that the entire green foliage mat at ground level is decorated with flowers. The occurrence of the plant can be recognized easily from a long distance, especially during its peak flowering season, which is confined to July-September.

In I. linnaei, the papilionaceous corolla conceals the stamens and stigma in the keel petals even after anthesis. The keel petals require tripping to expose the sex organs, and both abiotic and biotic agents trip them. The abiotic agents, high winds, heavy rain and high temperature, weaken the turgidity of the restraining keel tissues, tripping the keel and causing the spontaneous release of the sex organs. In this act, the tripping facilitates autogamy, but the extent to which this mode of pollination occurs is situational, depending on the force exerted on the keel petals and the prevailing ambient environment. Bees and lycaenid butterflies also trip the keel petals, causing the sudden release of the sex organs. This tripping facilitates autogamy, but not exclusively; these insects also effect geitonogamy and xenogamy, depending on the pollen carried on their ventral side. The high natural fruit set and seed set rates recorded in this study suggest that the plant has a mixed breeding system consisting of autogamy, geitonogamy and xenogamy. It is, however, principally facultatively autogamous owing to the operation of autogamy through both abiotic and biotic agents. Further, floral characteristics such as low investment in attractive structures (petals, nectar and pollen), a short anthesis schedule, self-compatibility and a low pollen-ovule ratio confirm that I. linnaei is facultatively autogamous (Faegri & van der Pijl, 1979; Cruden & Miller-Ward, 1981). The long period of stigma receptivity is likely an evolved character to achieve geitonogamy and xenogamy in the late evening of the day of anthesis and on the following day, since self-pollen within the flowers is not viable after 1630 h on the day of anthesis. Insect activity also brings about geitonogamy and xenogamy in day 1 and day 2 flowers by tripping the keel petals. Autogamy does not occur in day 2 flowers, whether tripped by abiotic or biotic agents, because the self-pollen is inviable; only geitonogamy and xenogamy can occur in such flowers. The production of numerous seedlings in the vicinity of, and in areas adjacent to, the parental populations during the rainy season suggests that seeds produced from self- as well as cross-pollinations are viable. Therefore, the mixed breeding system is a "fail-safe system" that assures I.
linnaei a high seed set rate and eventually qualifies it as a colonizer species.

In the present study, I. linnaei flowers exhibit a functional explosive pollination mechanism. In this mechanism, the keel petals, the staminal column and the pistil are held under tension; following tension release by tripping, caused by either abiotic or biotic agents, the sex organs snap forward against the standard petal, causing all the pollen to be ejected instantly. In this process, autogamy can occur, but the probability of its occurrence is situational, depending on the ambient environment and the force with which the keel petals are tripped. In tripped flowers, neither the keel petals nor the staminal column and stigma return to their original positions. Flowers with this explosive pollination mechanism are specialized for tripping by external agents. Further, the concealment of the stamens within the keel until it is tripped is an adaptation to protect pollen from rainfall; such an adaptation is essential for I. linnaei, which is a low, prostrate ground herb prone to rain water flooding. The containment of the stamens, and also the stigma, within the keel after anthesis appears to be a mechanism evolved to protect the pollen from moisture during low ambient temperature and dew at night, and to maintain pollen fertility, since fertility would be affected by contact with water on rainy days and cool humid days; this protection is especially required for flowers that were not tripped on the day of anthesis (Peter et al., 2004). Small (1988) stated that Medicago species of the tribe Trifolieae with explosive pollination mechanisms display the lowest pollen-ovule ratios. Lopez et al. (1999) recorded explosive pollination mechanisms with the highest pollen-ovule ratios in certain genera of the Fabaceae such as Cytisus, Pterospartum, Teline, Ulex, Stauracanthus and Cytisophyllum. Etcheverry et al. (2012) noted that the Fabaceae plants they studied with explosive pollination mechanisms had intermediate pollen-ovule ratios. Padmavathi et al. (2012) reported that Indigofera barberi, with an explosive pollination mechanism, had an intermediate pollen-ovule ratio. In the present study, I. linnaei, with its explosive pollination mechanism, has a low pollen-ovule ratio, almost half of that recorded for I. barberi. Therefore, the study suggests that I. linnaei is facultatively autogamous, and its pollen-ovule ratio accordingly conforms with the ratio given by Cruden (1977) for this breeding system. Plants with explosive pollination mechanisms depend on pollen vectors because they cannot self-activate the tripping process to effect autogamy. This is all the more so in Fabaceae members, which present four types of pollination mechanisms (Aronne et al., 2012). In Fabaceae, the papilionaceous corolla is considered a general adaptation to pollination by Hymenoptera. Subsequent sternotribic pollination events can occur within a single flower (Faegri & van der Pijl, 1979). Therefore, all pollen produced within a flower can potentially be delivered to those insects able to trigger the mechanism without any waste, suggesting that the pollen is placed in positions on the insects' bodies from which it is difficult to brush off (Westerkamp, 1997). In the present study, I.
linnaei, with its explosive pollination mechanism, attracts only certain bees and lycaenid butterflies despite the occurrence of different categories of hymenopterans, including various bees and wasps, as well as butterflies and moths, in the habitat of the plant. The principal reason could be that the flowers are small, odourless and pinkish red. In this context, it is appropriate to cite the observation of Hingston & McQuillan (2000) that Indigofera species with purple zygomorphic flowers are not attractive to bees compared with the yellow flowers of other Fabaceae members; these authors specifically noted such a situation for I. australis in Tasmania. The present study indicated that lycaenid butterflies are less efficient than bees in tripping the explosive pollination mechanism; however, both categories of insects are able to trip the mechanism owing to the very delicate corolla. Bees are nevertheless the most appropriate pollinators, effecting self- and cross-pollination while collecting nectar and pollen. The bees and butterflies do not forage throughout the day; they cease foraging by noon despite the availability of flowers. Such a foraging schedule indicates that most of the standing crop of floral rewards is exhausted by then, after which visits to the flowers are not energetically profitable, since there is little reward left to be removed. The availability of only traces of nectar in the flowers, due to feeding on pollen and nectar by thrips, appears to be advantageous for the plant by increasing foraging visits across conspecific populations and thereby promoting outcrossing. Marina Fernanda et al. (2008) reported that Indigofera species pollen contains oil, starch grains and protein bodies. In I. linnaei also, the pollen is oily, and it could be a potential nutrient source for bees. In un-tripped flowers, the staminal column and the pistil remain within the keel and subsequently fall off without pollination. Therefore, I. linnaei, with its explosive pollination mechanism adapted for tripping by abiotic and biotic agents, is able to produce high natural fruit and seed set rates through autogamy, geitonogamy and xenogamy.

In I. linnaei, seed dormancy is another important character that restricts germination to the rainy season, during which the soil absorbs and stores sufficient moisture. This dormancy enables the plant to save its seed source and use it for the colonization of new areas during the rainy season. In recent years, rainfall has been insufficient and long dry spells have occurred within the rainy season. In consequence, seedlings struggle to survive, and if there is not enough soil moisture they show no further growth and subsequently perish. However, re-growth from the well-established old root stock withstands rain deficits and produces new plants, alleviating the loss of seedlings from seeds to some extent. Therefore, seed dormancy and the production of new plants from the old root stock enable the plant to occupy various habitats and to extend and expand its distribution range. In this context, it is appropriate to mention that explosive pod dehiscence disperses seeds to different distances on the ground, and the seeds, with their irregular shape and oily nature, subsequently disperse through wind and rain water. Therefore, I.
linnaei shows ballistichory, anemochory and hydrochory; these modes of seed dispersal enable the plant to invade and colonize new areas. Indigofera linnaei is used for promoting hair growth and for treating chronic bronchitis, asthma, ulcers, skin diseases, gastropathy and epilepsy (Kumanan et al., 2014). Since the plant is used in traditional medicine, it could be investigated scientifically for modern forms of medicine while being allowed to grow in areas where, from the human point of view, it is not a menace.
Reliability of Transcutaneous Measurement of Renal Function in Various Strains of Conscious Mice

Measuring renal function in laboratory animals using blood and/or urine sampling is not only labor-intensive but also puts a strain on the animal. Several approaches for fluorescence-based transcutaneous measurement of the glomerular filtration rate (GFR) in laboratory animals have been developed. They allow the measurement of GFR based on the elimination kinetics of fluorescent exogenous markers. None of these studies dealt with the reproducibility of the measurements in the same animals. Therefore, the reproducibility of a transcutaneous GFR assessment method using the fluorescent renal marker FITC-Sinistrin was investigated in conscious mice in the present study. We performed two transcutaneous GFR measurements within three days in five groups of mice (Balb/c, C57BL/6, SV129 and NMRI at 3-4 months of age, and a group of 24-month-old C57BL/6). Data were evaluated regarding day-to-day reproducibility as well as intra- and inter-strain variability of GFR and the impact of age on these parameters. No significant differences between the two subsequent GFR measurements were detected. The fastest elimination of FITC-Sinistrin was detected in Balb/c, with significant differences to C57BL/6 and SV129 mice. GFR decreased significantly with age in C57BL/6 mice. Evaluation of GFR in cohorts of young and old C57BL/6 mice from the same supplier showed high consistency of GFR values between groups. Our study shows that the investigated technique is a highly reproducible and reliable method for repeated GFR measurements in conscious mice. This gentle method is easily used even in old mice and can be used to monitor the age-related decline in GFR.

All of these approaches allow the measurement of GFR without blood and/or urine sampling and are based on quantifying the elimination kinetics of fluorescent exogenous markers. Generally, these transcutaneous methods were validated against a gold-standard plasma clearance procedure. Owing to their independence from blood and/or urine samples, transcutaneous methods allow repeated GFR measurements at short time intervals in the same animal. Such repeated measurements of GFR are of high interest, e.g., in preclinical nephrotoxicity studies of novel medical agents or in investigations of acute kidney injury (AKI) models and the development of AKI treatment regimens. Besides validation against a gold standard, the repeatability (self-consistency) of a new method is of utmost relevance, as variations in measured results could be misinterpreted as, e.g., disease progression or treatment outcome. So far, no investigations of the repeatability of transcutaneous GFR measurements in the same animal exist. Hence, the repeatability of a transcutaneous method for GFR assessment was evaluated here for the first time [14]. The method, which allows measurement in conscious, freely moving mice, is based on the transcutaneous recording of the elimination kinetics of the exogenous GFR marker FITC-Sinistrin [2-4,7,8,10,14,15]. Two GFR measurements within three days were performed in five different groups of mice. The collected data were evaluated regarding the day-to-day reproducibility of the method as well as inter- and intra-strain variability of GFR and the impact of older age on these parameters [16].
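For reference, the kinetic model underlying the t1/2-based read-out used throughout is the standard one-compartment elimination of the marker after its distribution phase (a textbook formulation, not quoted from [14]): the transcutaneous fluorescence signal F(t) decays as

```latex
F(t) = F_0 \, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}, \qquad \mathrm{GFR} \propto \frac{1}{t_{1/2}},
```

so a single rate constant k fitted to the decay phase fixes the half-life, and GFR follows from t1/2 via a marker- and species-specific proportionality constant.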
In addition, we assessed GFR again in two additional groups of mice that were purchased at a later time point from the same supplier, to test for GFR consistency in the breeding colony and to validate our method over a longer time interval. The protocol was approved by the Committee on the Ethics of Regierungspräsidium Karlsruhe (Permit Number: 35-9185.81/G19/11).

Transcutaneous bolus clearance

The transcutaneous measurement was performed as previously described [14]. In brief, the miniaturized fluorescence detector (NIC-Kidney; Mannheim Pharma & Diagnostics GmbH, Mannheim, Germany) was fixed on a depilated region on the back of the mice using a double-sided adhesive patch (Lohmann GmbH & Co. KG, Neuwied, Germany). The preliminary depilation and the FITC-Sinistrin injection were performed under short isoflurane (Abbott Laboratories, Abbott Park, USA) anesthesia. The measurement started with activation of the device before FITC-Sinistrin (7.5 mg/100 g b.w. dissolved in 0.25 mL NaCl 0.9%; Mannheim Pharma & Diagnostics GmbH, Mannheim, Germany) was injected, in order to record the background signal for 1 min. Starting from the marker injection, data acquisition lasted 60 min.

GFR calculation

Transcutaneous GFR was calculated using the half-life (t1/2) derived from the rate constant of the single-exponential elimination phase of the fluorescence-time curve and a semi-empirical mouse-specific conversion factor established previously [14].

Statistical analysis

For day-to-day reproducibility, i.e. intra-strain repeatability, Bland-Altman plots were generated and 97.5% limits of agreement were calculated as mean − 2 SD (standard deviation) and mean + 2 SD [17]. As a further measure of agreement of day-to-day reproducibility, we calculated the coefficient of variation (CV, calculated as SD/mean × 100%). Multi-factor ANOVA was used to check for influences of the factor day and the interaction between day and strain.

Day-to-day reproducibility (intra-strain repeatability)

Across all experimental groups, the mean half-life (t1/2) ± SD of FITC-Sinistrin excretion was 16.4 ± 4.5 and 15.9 ± 3.6 min, and the mean GFR ± SD was 951 ± 235 and 961 ± 187 µL/min/100 g b.w., at the first and second day of measurement, respectively. The mean difference between repeated measurements was 0.56 ± 3 min and −10.5 ± 178 µL/min/100 g b.w., respectively. The CV between measurements was 10.88% for t1/2 and 11% for GFR. The 97.5% limits of agreement (LoA) were −5.4 to 6.52 min and −368 to 347 µL/min/100 g b.w. (Figure 1a and b). The data on day-to-day reproducibility for the different groups investigated (Table 1) indicate a good agreement of the repeated measurements.

Comparison between groups

Multi-factor ANOVA revealed that the day-related factors (day and day × group) had no significant effect on the FITC-Sinistrin t1/2 (p-value day × group = 0.72; p-value day = 0.77). However, there was a significant influence of the group factor (p-value group < 0.0001). The comparison of the accumulated data of all examined groups using a two-tailed Student's t-test is summarized in Figure 2.

GFR consistency in animals of similar age from the same supplier at different dates

One year apart, two groups of mice (young and old C57BL/6) were purchased from the same supplier. GFR measurements revealed similar results, with no significant differences between the respective groups using a two-tailed Student's t-test (Figure 3).
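A minimal sketch of this analysis pipeline, assuming a mono-exponential elimination phase fitted from roughly 15 min after injection onward. The conversion factor below (14616.8 µL per 100 g b.w.) is an assumed literature value and should be verified against the method paper [14]; the `day1`/`day2` arrays in the demo are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Semi-empirical conversion factor (uL per 100 g b.w.); assumed value, verify against [14].
CONVERSION_FACTOR = 14616.8

def mono_exp(t, f0, k):
    """Single-exponential elimination phase: F(t) = f0 * exp(-k * t)."""
    return f0 * np.exp(-k * t)

def half_life_min(t, f):
    """Fit the elimination phase and return the half-life t1/2 in minutes."""
    (f0, k), _ = curve_fit(mono_exp, t, f, p0=(float(np.max(f)), 0.05))
    return float(np.log(2) / k)

def gfr_from_half_life(t_half):
    """GFR in uL/min/100 g b.w. from t1/2 in minutes (assumed factor)."""
    return CONVERSION_FACTOR / t_half

def bland_altman(day1, day2):
    """Mean difference and mean +/- 2 SD limits of agreement, as in the text."""
    d = np.asarray(day1, float) - np.asarray(day2, float)
    sd = d.std(ddof=1)
    return d.mean(), (d.mean() - 2 * sd, d.mean() + 2 * sd)

def repeatability_cv(day1, day2):
    """Average per-animal CV: SD of each day1/day2 pair over the pair mean, in %."""
    pairs = np.column_stack([day1, day2]).astype(float)
    cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1) * 100
    return cv.mean()

# Demo on a synthetic, noise-free curve with t1/2 = 16.4 min (the study mean).
t = np.linspace(15, 60, 46)                   # minutes after marker injection
f = mono_exp(t, 100.0, np.log(2) / 16.4)
t_half = half_life_min(t, f)
print(f"t1/2 = {t_half:.1f} min -> GFR = {gfr_from_half_life(t_half):.0f} uL/min/100 g")

# Bland-Altman summary for two hypothetical measurement days (t1/2 in min).
day1 = [14.2, 18.9, 15.1, 21.0, 13.5]
day2 = [15.0, 17.8, 14.6, 22.3, 13.1]
bias, loa = bland_altman(day1, day2)
print(f"bias = {bias:.2f} min, LoA = {loa[0]:.2f} to {loa[1]:.2f} min, "
      f"CV = {repeatability_cv(day1, day2):.1f}%")
```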
Discussion

Measurement of GFR is of critical importance for detecting kidney injury, predicting outcome, adapting drug dosages, and monitoring therapeutic management in the clinical setting. In the experimental setting, GFR is also important for, e.g., phenotyping animal models, testing treatment strategies or analyzing nephrotoxicity. Currently, plasma creatinine concentration is often used to estimate GFR in mouse models. However, plasma creatinine has been shown to be a poor marker of GFR in mice [18]. To monitor changes of renal function, especially in the context of AKI and nephrotoxicity, it is instrumental to assess GFR repeatedly at short time intervals. Classical plasma-sampling-based methods and novel techniques like the intravenously applied fluorescent probe described by Molitoris et al. are not feasible in conscious mice in this respect [19].

The data in this report demonstrate that the method used for transcutaneous GFR measurement meets the requirements for frequent and repeated GFR assessment in mice [14]. Using the t1/2 of FITC-Sinistrin as the key basic read-out, the absence of statistically significant differences in day-to-day measurements in all groups indicates high consistency. The high reproducibility of our method was also reflected by the Bland-Altman analysis, which showed almost no bias between the two measurements. Moreover, the mean CV of 10.8% for t1/2 and 11% for GFR, as a measure of reproducibility, is in the range known from repeated GFR measurements in humans assessed by a variety of other techniques [20-24]. As the conversion of t1/2 to GFR is formula-based, the comparisons for t1/2 and GFR are nearly identical, as expected.

As one would assume, a significantly slower excretion (longer t1/2) was detected in the 24-month-old C57BL/6 compared with the 3-4-month-old group in both experiments. This reduction of GFR with older age is in line with observations by Hackbart et al. [25]. In contrast to previous data, we found a rather tight intra-group variability. Old, fragile mice profit from our gentle method, which puts far less strain on the animal than maintenance over 24 hours in a metabolic cage. An important additional finding is the consistency of the GFR measurements in mice of the same age obtained from the same supplier at different time points. This holds true even for older animals, where age-related differences in GFR decline could have been possible.

It is important to note that the magnitude of the CV and the 97.5% LoA of the Bland-Altman analysis do not result only from measurement errors, e.g. due to mechanical pressure applied to the device [14], but also from the known day-to-day variability of GFR. Day-to-day variability is most pronounced in healthy animals with normal renal reserve capacity and is caused by the adaptation of the kidneys to factors like food intake (protein), hydration status, blood pressure or renal blood flow [26]. Therefore, the observed differences in CV between the strains can be explained, at least in part, by inter-strain differences in renal reserve capacity. To minimize the impact of the circadian rhythm of GFR, the measurements in the animals were performed at the same time of day.

Table 1. Two consecutive assessments of FITC-Sinistrin half-life (t1/2) and GFR, as well as the coefficient of variation (CV) and Bland-Altman parameters, in five groups of male mice using transcutaneous measurement.

A drawback of the technique is that GFR is not calculated directly from the assessed data; rather, an empirically derived conversion formula is needed.
This formula is based on the assumption that the extracellular volume (ECV) per 100 g b.w. is comparable in all investigated groups. Even a change in ECV, as long as it affects the whole organism, does not influence the final excretion t1/2, because only the distribution time of the marker (time to maximum concentration, time to reach the single-exponential decay phase) and the overall measured signal intensity are affected. However, if there is a different ECV due to, e.g., local edema, which is filled and cleared at rates different from those of other ECV compartments, this may potentially influence the measurements. The latter is also true for classical plasma-sampling-based bolus clearance experiments, because these models do not consider this extra compartment. The only precise way of measuring GFR is therefore the use of constant infusion techniques. We consider the short gas anesthesia required for device mounting and substance injection a negligible problem, as it takes about 15 min until the single-exponential decay phase is reached.

Figure 3. Transcutaneous measurements were performed in male young (n = 20) as well as 24-month (m) old C57BL/6 (n = 18) mice in 2012. One year later (2013), comparable groups of C57BL/6 mice (young: n = 23; old: n = 24) from the same breeder (Janvier, France) were investigated in the same way. No significant difference between the two measurement dates in the respective groups was found by two-tailed t-test (3-4 m: p = 0.82; 24 m: p = 0.53) for both parameters. The boundary of the box closest to zero indicates the 25th percentile, the line within the box marks the median, and the boundary of the box farthest from zero indicates the 75th percentile. Whiskers (error bars) above and below the box indicate the 90th and 10th percentiles. Outliers are graphed as dots.

In summary, the results of this study clearly indicate that the investigated transcutaneous technique for GFR assessment is a reproducible and reliable method for repeated GFR measurements. In contrast to other transcutaneous methods, the measurements can be performed in conscious, freely moving mice, excluding the strong impact of anesthesia on GFR. This is especially important when dealing with old, mostly fragile mice. Blood or urine samples are not required, which makes it an appropriate method for multiple measurements in the same animal at short time intervals.
Conceptual problems of contemporary additional education in the agro-industrial complex: employers' personnel strategies

The paper critically reviews the conceptual approaches existing in contemporary additional education in the agro-industrial complex. In particular, employers' personnel strategies are discussed under current conditions. The thesis about the importance of developing support systems for "continuing education" is associated not only with the satisfaction of the educational and professional needs of the person himself, but also with the mechanisms for the formation of new sectors of the economy. The changing conditions of professional activity require the individualization of professional development profiles and flexible qualifications frameworks, giving rise to the task of updating the system of continuing professional education as one of the key areas for the development of "continuing education". The paramount importance of additional professional education is enshrined in a number of strategic documents, including priority projects in the field of education and the program "Digital Economy of the Russian Federation". Despite the continuing growth in the coverage of additional professional education and Russia's leadership among the G20 countries in terms of the population proportion in the 25-64 age cohort with tertiary education (about 59% and 27%, respectively), less than half of working Russians are included in additional vocational education within a five-year span (Figure 2).

At the present stage of development, Russia has achieved serious results in the production of agricultural products and foodstuffs. Investment in agriculture has stimulated a doubling of production since 2000. Russia is steadily increasing food supplies to the world market. Over the past 17 years, exports of agricultural products and foodstuffs have grown by a factor of 16, reaching 20.7 billion USD. Even 20 years ago Russia was an importer, and now it has collected a record grain harvest for the fifth year in a row. Russian agriculture has become a modern and competitive industry with a steady pace of development, a real driver of economic growth.

To secure a steady pace of development, as well as to modernize the industry, appropriate staff are required. However, despite the dynamic development of the agro-industrial sector in Russia, since 2005 employment in this sector has decreased by 1 million people. There are several reasons for this, including low wages, lack of amenities in the countryside, and the low prestige of agricultural professions. According to HeadHunter, the average salary of a manager in agriculture in Russia is at least 40 thousand rubles, but in order to become a potentially successful candidate for a managerial position in the agro-industrial complex, one needs at least 6 years of relevant experience; employers state this requirement in 74% of vacancies for managerial positions. Higher special education is also needed: in 68% of open vacancies, a wish is expressed to see candidates with a diploma from the Russian State Agrarian University - Moscow Timiryazev Agricultural Academy. Knowledge of foreign languages, most often English, is usually singled out as a separate requirement [3]. Automation of production in agriculture drives the development of new professional competencies and requirements for agricultural workers. Those employed in modern agriculture should also have knowledge in the fields of robotics and software.
To bring the agro-industrial complex to an appropriate level of competition with other, traditionally high-tech industries, appropriate technologies are needed, together with people who use these technologies in their daily working practice. In this regard, conceptually new tasks must be set not only at the state level, which creates impulses for the development of the industry, but also by the enterprises of the agro-industrial sector; solving these tasks would help not only to create high-tech workplaces but also to facilitate the training of personnel already working in the industry.

One of the pioneers in the development of a conceptual framework for staff training is the agro-industrial holding "Miratorg". This enterprise has one of the fastest-growing product lines on the agro-industrial market and is unique in terms of its vertical integration: production is carried out according to the principle "from field to counter". In connection with the constant introduction of innovative solutions and technologies and the expansion of production capacities, the holding provides training to personnel on a regular basis. In 2017, thanks to the implementation of goals to increase the efficiency of existing production and to launch new production lines presenting entirely new meat products, the volume of beef produced by "Miratorg" increased by 32% to 82 thousand tons. At the end of 2017, the holding retained its unconditional leadership in beef cattle production in Russia. To address the shortage of personnel, it was decided to create a new personnel strategy (see Figure 3).

According to the personnel strategy, "Miratorg", as the largest employer in its regions of presence, created its own training center called the "Meat Business Academy", whose specialists study not only the future needs of the holding but also the capacity of regional labor markets to satisfy them. Teachers of the Academy provide training and retraining of the specialists the holding needs. The training center has developed APE programs, which are implemented in various forms depending on the tasks to be solved. Thus, when implementing the project for the construction of the Miratorg Veterinary and Sanitary Utilization Plant in 2017-2018, specialists with unique knowledge were required who could master this large-scale undertaking. For their training, a personnel growth strategy was introduced, which allowed "Miratorg", as an employer, not only to create demand for highly qualified specialists on the regional market, but also to attract international consultants in the field of modern recycling technologies. As a result of the launch of the project, it was possible to commission modern lines which make it possible to obtain, as a result of recycling, products subsequently used in the production of animal feed (see Figure 4). Thus, from January to September 2018, the company produced more than 3.9 thousand tons of meat and bone meal and 1.9 thousand tons of feed fat, which is 38% and 27% more, respectively, than in the same period of 2017. The increase in production volumes is due to the increasing capacity of the pig-breeding division and the increase in processing depth. The Veterinary and Sanitary Utilization Plant of the agroholding "Miratorg" produced 5.8 thousand tons of products, processing more than 16 thousand tons of raw materials, in the first three quarters of 2018.
To organize the work of an integrated high-tech precision farming system, which includes GPS technology and geographical information systems (GIS), the agricultural holding created a team of specialists who underwent advanced training under the corresponding programs within the corporate training center. In a short time, specialists were trained who became employees of the precision farming department. The division conducts regular monitoring of soil conditions. On the basis of the obtained data, the required doses of mineral and organic fertilizers are calculated, the seeding rates of tilled crops are adjusted depending on the zonality of field fertility, maps of the optimal direction of movement of the units along the fields are developed to reduce soil compaction, and studies are conducted on the duration of the effect of amelioration and organic fertilizers on fertility. All necessary research is carried out in the holding's own soil laboratory in Ivnya, Russia, where the content in the soil of the micro- and macro-elements necessary for obtaining a good harvest is also checked. (Figure 3 additionally lists, among the elements of the personnel strategy, the creation of coaching support for the development of leadership and innovative qualities, psychological services for personnel, and the institution of mentoring in the Ivnyansky subdivision of the agroholding.) The analysis showed an increase in the phosphorus content by 64% and in potassium by 55%; these are among the most important elements necessary for the successful growth and development of plants. Efficiency, coherence of processes, timely retraining of specialists and cost reduction are among the main tenets of the work conducted by the agricultural holding. The operation of the equipment in the fields can be tracked through a satellite transport monitoring program. The agronomist no longer needs to go into the field every time to monitor the work of employees: on a tablet or smartphone, using the software, one can track where the equipment is working, the fuel level, how much area has been processed and how much is left, the reasons for possible downtime, and other important indicators.

The Miratorg Academy Training and Technical Center is located in close proximity to the production base. The infrastructure of the facility includes a residential building for students and teachers, classrooms, a garage, an overpass, and a training ground for driving skills. Today, on the basis of the Academy, specialists in agronomy, engineering, and technical services from Bryansk, Smolensk, Tula, and Kaluga are trained. Training covers techniques and methods of working with modern agricultural equipment, as well as the basics of managing the quality of feed production and the introduction of precision farming systems. According to the concept of staff development, the training center is designed to train more than a thousand specialists annually. The staff development schedule begins each year with the end of the agrarian season. Each employee must improve their qualifications at least once a year. Experienced employees undergo recertification, and their trainings are held with the involvement of world-leading suppliers of seeds and equipment. Within the framework of the program, each participant must defend a project demonstrating the need to apply certain foreign or domestic experience and technologies in production, and each student is given the opportunity to test their results. According to the concept of staff development, all new employees are also required to undergo training.
Regardless of previous experience, those who want to get a job at the holding attend theoretical and practical classes given by experienced teachers and trainers, thoroughly study the production processes, then practice on advanced farms, and only after that start independent work. It should be noted that the holding works actively with regional authorities in order to achieve the goals of the national projects outlined in the May Decree of the President of the Russian Federation [1]. In particular, the Miratorg leadership is considering the idea of entering international markets and, in this regard, intends to improve its concept of staff training and retraining further. Thus, further participation of the enterprise in regional development is planned through the expansion of activities in the regions of presence, which could not only solve the employment problem but also contribute to changes in social indicators related to the qualifications of the regional workforce. Management of the agroholding plans to attract the best specialists in the field of agriculture to develop new programs of additional professional education and to form a strategic program for the further development of the training center, taking into account the new tasks of modernizing the Russian economy. Thus, if training is currently held in 25 disciplines, in a year there will be 250 of them. Over two thousand people will be able to gain knowledge and qualifications every year. In the near future, the construction of a second stage of the training center is planned for employees working in the engineering and agronomic services. Currently, the company employs more than four thousand people, and in the next two to three years this figure will increase to eight or nine thousand. Therefore, the issues of additional training and competence of personnel become crucial.

The development of human capital plays a key role in ensuring the sustainable growth of the Rusagro group of companies. To achieve this goal, the following activities are being implemented:
− conducting a comprehensive assessment of personnel in order to identify deficits in qualifications and competencies, develop individual development plans, and improve the quality of human resources and their effectiveness;
− conducting a regular assessment of the results of human capital development according to two main parameters: an index of staff quality and efficiency, analyzed on the basis of performance, and a level of involvement;
− early formation of a personnel reserve of young professionals through cooperation with leading regional universities and colleges, as well as through internal training systems and the creation of a strong employer brand;
− automation of HR processes.

Group employees have many opportunities for advanced training. Every year, the group's business lines develop and implement personnel plans for training and developing employees based on strategic and current business objectives, as well as the needs identified through a comprehensive assessment. Following the results of a comprehensive assessment (which is completed by each employee), an individual development plan is drawn up for a period of 1-2 years, in which all the training and development activities designed to improve the employee's qualifications or ensure the transfer of his accumulated knowledge are recorded.
The current system of training and development in Rusagro assumes the following ratio of methods: 70% of training and development takes place in the workplace through solving new problems; 20% comes from the transfer of experience (mentoring) and the development of others; and 10% comes from external trainers and consultants or attendance at external training events (trainings, seminars, conferences, etc.) [2]. The staff development concept of the Rusagro group of companies provides for advanced training in a form aimed at the fullest professional support of the company's employees during the training process. In addition to the specialists conducting classes within the programs of additional professional education, psychologists and professionals in the field of motivation are involved in this project. Their task is to foster the employees' desire to master new technologies and to acquire knowledge that allows them not only to expand their horizons in their specialty but also to develop personal qualities. An integrated approach to the problem of staff development is also characteristic of other companies, which organize additional professional education in different ways and use different methodologies in the field of professional development (see Table 1).

Table 1. Corporate personnel strategies in agro-industrial companies

Company name, characteristic | Purpose of corporate personnel strategy | Features of the implementation of programs
The meat processing industry of the "Talina" Group is represented by three sites of LLC "MPC Atyashevsky" in the Republic of Mordovia. The meat produced at the holding's pig farms is processed at the facilities of LLC "MPC Atyashevsky" in the village of Atyashevo; two more production sites are located in the city of Saransk and the | |
As of January 1, 2017, gross milk production at the holdings was 57.5 thousand tons. Over the past few years, the company has shown consistently high results in all production indicators. The company's management is implementing an investment project aimed at increasing livestock production capacity by building additional barns. | Compliance of employees with new production technologies | Training center in conjunction with the Penza State Academy of Agricultural Sciences (PSAAS); training is organized through the creation of a special complex of three laboratories for the analysis of dairy products, feed and soil.

The data of the presented analysis allow us to conclude that the dynamic development of the agro-industrial complex in Russia, as well as government support measures aimed at maintaining positive trends in the industry, dictate the need for businesses to develop forward-looking personnel strategies. Such strategies make it possible to address many of the problems that arise during the expansion or modernization of production, including poor professional training, lack of motivation for training, and the low prestige of agricultural professions among young people. In addition, the emergence of employers' personnel strategies makes it possible to organize systematic work in the field of additional professional education together with Russian universities and to move away from the practice of mechanically increasing the volume of training of specialists for the agro-industrial complex. A breakthrough in agriculture and agricultural processing industries, and the conquest of foreign markets by domestic producers, are possible only if the problems of training personnel for agricultural enterprises equipped with new technologies are solved.
A case management of hypertension in the elderly in sub-Sahara Africa: lessons from Granny

Management of chronic disease conditions in the elderly is challenging. Elderly patients usually have many co-morbidities requiring multiple drug regimens, and memory or cognitive problems that can interfere with management. They also sometimes have a degree of social difficulty, as they often live alone and must manage their daily activities with minimal assistance. Multiple drug use, combined with their fragile health, predisposes them to adverse drug reactions, drug-drug interactions, and direct drug toxicity from overdosing. We report and discuss the lessons learnt from the case of an elderly woman in an urban setting in sub-Saharan Africa who presented with problems of drug dosing, adverse drug effects, and drug-drug interaction that might prove useful in the future management of hypertension with angiotensin converting enzyme inhibitors.

Introduction

Management of chronic disease conditions in the elderly is challenging. They usually have many co-morbidities requiring multiple drug regimens, and memory or cognitive problems that can interfere with management [1]. They also sometimes have some degree of social difficulty, as they often live alone and must manage their daily activities, including the intake of medicines, with minimal assistance [2]. Multiple drug use combined with their fragile health predisposes them to adverse drug reactions, drug-drug interactions, and direct drug toxicity from potential overdosing [3]. They may even sustain physical injuries with eventual poor outcomes as a result of medicine use [4-6]. We report and discuss the lessons learnt from the case of an elderly woman in an urban setting in sub-Saharan Africa, who presented with problems of drug dosing, adverse drug effect, and drug-drug interaction that might prove useful in the future management of hypertension with angiotensin converting enzyme inhibitors (ACEIs).

Patient and observation

Madame H. is a 74-year-old woman with long-standing hypertension managed with the angiotensin converting enzyme inhibitor perindopril 5 mg daily for over 13 years. She also had osteoarthritis of the shoulders, wrists, and knees, for which she was not taking any specific medicines. She had limited social assistance in the management of her conditions, as no one was making sure she was taking her medication correctly. However, she was compliant with her anti-hypertensive treatment, with good blood pressure control, until she developed Quincke's edema, which was attributed to perindopril, as it was her only medicine at that time. Her treatment was immediately switched to the calcium channel blocker amlodipine 5 mg daily, and a short course of the anti-histamine cetirizine 10 mg daily was given to manage the conspicuous labial-facial edema. At subsequent visits, she was found to be taking perindopril in place of amlodipine (due to a stock-out), alongside cetirizine from the previous prescription. Remarkably, no labial-facial edema occurred. Both medicines were stopped and she was put on amlodipine 5 mg daily, with poor control of her blood pressure at subsequent visits. Her treatment was switched to a fixed-dose combination of amlodipine 5 mg plus indapamide 1.5 mg (a thiazide-like diuretic) daily, with optimal blood pressure control for her age. Subsequently, she again developed the conspicuous labial-facial edema. Investigation revealed that she had run out of her fixed-dose anti-hypertensive and reverted to perindopril 5 mg (old stock that had not been discarded).
She was switched back to the fixed-dose anti-hypertensive medicine. She also complained of neck, shoulder, and wrist pain with electrical discharges, for which high-dose vitamin B (2 tablets twice daily) was prescribed. At a subsequent visit, she complained of posterior neck and scalp pain, and clinical evaluation was remarkable for an unusually low sitting blood pressure of 104/67 mmHg on the right arm (control arm), with a regular pulse of 74 beats per minute. She was not in acute distress. Investigation revealed that she was taking four times the prescribed anti-hypertensive medicine (2 tablets twice daily, totaling 20 mg of amlodipine and 6 mg of indapamide daily), alongside the high-dose vitamin B (2 tablets twice daily).
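To make the scale of this unintended overdose concrete, the daily doses can be checked with a short calculation. The following Python sketch is purely illustrative; the tablet strengths are those of the fixed-dose combination described above, and the regimens are the prescribed one (1 tablet once daily) versus the one the patient was actually taking (2 tablets twice daily).

    # Fixed-dose combination tablet: amlodipine 5 mg + indapamide 1.5 mg
    AMLODIPINE_MG = 5.0   # mg of amlodipine per tablet
    INDAPAMIDE_MG = 1.5   # mg of indapamide per tablet

    def daily_dose(tablets_per_intake, intakes_per_day):
        """Return (amlodipine mg/day, indapamide mg/day) for a given regimen."""
        tablets_per_day = tablets_per_intake * intakes_per_day
        return tablets_per_day * AMLODIPINE_MG, tablets_per_day * INDAPAMIDE_MG

    prescribed = daily_dose(1, 1)  # 1 tablet once daily
    ingested = daily_dose(2, 2)    # 2 tablets twice daily, as the patient was taking

    print("Prescribed: %.1f mg amlodipine, %.1f mg indapamide per day" % prescribed)
    print("Ingested:   %.1f mg amlodipine, %.1f mg indapamide per day" % ingested)
    print("Overdose factor: %.0fx" % (ingested[0] / prescribed[0]))
    # Prescribed: 5.0 mg amlodipine, 1.5 mg indapamide per day
    # Ingested:   20.0 mg amlodipine, 6.0 mg indapamide per day
    # Overdose factor: 4x

The calculation confirms the fourfold overdose: 4 tablets per day deliver 20 mg of amlodipine and 6 mg of indapamide, against the prescribed 5 mg and 1.5 mg.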
Ethics approval and consent to participate: the report of this case was approved by the Institutional Review Board of the Yaoundé Central Hospital, Cameroon. The patient provided written informed consent and consented to the publication of her case in the form of a scientific paper.

Discussion

This case highlights some of the challenges of managing chronic disease conditions in the elderly. Many lessons, old and new, have been learnt from this case and warrant sharing and further investigation. Firstly, the anti-histamine (cetirizine) appeared to reduce the risk of Quincke's edema. Secondly, extreme caution should be exercised when adding or switching medicines in the elderly. Thirdly, the number and frequency of intake of one pill can influence the number and frequency of intake of another, which might be potentially toxic. Fourthly, a late adverse drug reaction to an ACEI can occur even after thirteen years of medicine use.

Frequent side effects associated with the ACEI family of anti-hypertensive medicines are intractable non-productive cough and the conspicuous labial-facial angio-edema, or Quincke's edema [3,7]. When either of these occurs, the drug and related medicines are immediately stopped and blacklisted for the patient. Switching to another medicine with comparable efficacy, such as an angiotensin receptor blocker, is an option, but often carries a higher cost and lower availability for treating a chronic condition in low-income settings [8]. This case suggests that anti-histamines may reduce the risk of frequent adverse events such as the angio-edema associated with ACEI use, despite that angio-edema being bradykinin-mediated [9]. However, this observation has to be studied further in a randomized, double-blind, placebo-controlled trial.

Drug overdose, whether intentional or accidental in the context of poly-pharmacy, is also frequent in daily clinical practice [1]. As anecdotes, a 52-year-old man treated for heart failure was concomitantly taking two ACEIs at full dose as the result of a switch from a more expensive to a less expensive ACEI, until he developed intractable non-productive cough shortly after starting both medicines. Also, a 74-year-old man was taking over thirty medicines daily, including steroid injections, accumulated from consultations with many physicians, until he developed iatrogenic Cushing syndrome and multiple organ damage. When switching to another medicine in the elderly, we should ensure that old and/or frequently used medicines are discarded, to reduce the risk of drug intoxication or drug-drug interaction [3]. We should also limit our prescriptions to the most needed medicines, and choose once-daily or the simplest possible drug regimens [10].

In this case, adding twice-daily vitamin B (which could have been forgone) erroneously led this elderly woman to quadruple her anti-hypertensive medicine, with the attendant risks of hypotension and falls [5,6,11]. Patients on ACEIs, especially the elderly, should be monitored regularly for late-onset adverse drug reactions [7].

Conclusion

This case highlights some of the challenges of managing chronic diseases in the elderly. It suggests that the anti-histamine cetirizine can reduce the risk of Quincke's edema. Also, extreme caution should be exercised when adding or switching medicines in the elderly. The number and frequency of intake of one pill can influence the number and frequency of intake of other pills taken at the same time, which might be potentially toxic in the elderly. Adverse drug reactions to ACEIs can occur years after the onset of use.