| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| 233645907 | pes2o/s2orc | v3-fos-license |
Detection of missed fractures of hand and forearm in whole-body CT in a blinded reassessment
Background We examined the visibility of fractures of hand and forearm in whole-body CT and its influence on delayed diagnosis. This study is based on a prior study on delayed diagnosis of fractures of hand and forearm in patients with suspected polytrauma. Methods Two blinded radiologists examined CT-scans of patients with fractures of hand or forearm that were diagnosed later than 24 h after admission and control cases with unremarkable imaging of those areas. They were provided with clinical information that was documented in the admission report and were asked to examine forearm and hands. After unblinding, the visibility of fractures was determined. We examined if time of admission or slice thickness was a factor for late or missed diagnoses. Results We included 72 known fractures in 36 cases. Of those 65 were visible. Sixteen visible fractures were diagnosed late during hospital stay. Eight more fractures were detected on revision by the radiologists. Both radiologists missed known fractures and found new fractures that were not reported by the other. Missed and late diagnoses of fractures occurred more often around 5 pm and 1 am. Slice thickness was not significantly different between fractures and cases with fractures found within 24 h and those found later. Conclusions The number of late diagnosis or completely missed fractures of the hand and forearm may be reduced by a repeated survey of WBCT with focus on the extremities in patients with suspected polytrauma who are not conscious. Level of evidence III Supplementary Information The online version contains supplementary material available at 10.1186/s12891-021-04425-z.
Background
Injuries of the hand may be missed in 3.5 to 25% of patients with polytrauma [1,2]. While there is no standard definition of 'missed', 'delayed', or 'late', we decided to use the term 'late diagnosis' for injuries that were detected later than 24 h after admission but during hospitalisation, as they were eventually found [3][4][5][6]. Fractures of the upper extremity may be associated with reduced quality of life [7][8][9]. Timely treatment has been shown to be beneficial for a return to work [10]. We previously found that fractures of the hand are detected more often in cases with full inclusion of the hand in the whole-body CT (WBCT), which was more often achieved by placing the hands on the abdomen [6]. The ISS did not have an influence on the number of late diagnoses in our sample [6]. The question remained whether the fractures are visible to the human eye in retrospect, which may also depend on the slice thickness of the WBCT. Another cause for missed or late diagnoses can be fatigue, which has been associated with worse diagnostic performance of radiologists [11]. This study is a follow-up to a retrospective analysis of patient data on the sensitivity of WBCT for the detection of fractures of the hand and/or forearm in intubated patients with suspected polytrauma [6].
Aim
We wanted to assess how many fractures of hand and forearm, that were diagnosed late, were visible in the WBCT on retrospection. Additionally, the influence of the time of day of the WBCT and slice thickness on the occurrence of late diagnoses was analysed.
Methods
The study is based on cases that were identified in the previous publication [6]. The sample consisted of patients who were admitted for suspected polytrauma, were sedated and ventilated, and received a WBCT. Sedated, intubated patients were chosen to avoid the influence of patient-related factors like level of consciousness, self-awareness, and pain sensation.
Two radiologists from two different hospitals that did not provide data for the first study were tasked with examining WBCTs for bony injuries of the hand or forearm; no further instructions were given. They were presented with 44 cases with the clinical data provided on the CT request form. The cases were in random order and consisted of 12 cases with a late (> 24 h after admission) diagnosis of a fracture of the hand and/or forearm, 25 cases with a diagnosis within 24 h, and 7 control cases that had no injury. The control cases were chosen among cases that received additional imaging of the hand and wrist that showed no bony pathology. All fractures that had been diagnosed before discharge and were listed in the discharge papers are referred to as 'reported fractures' in our manuscript. Further data after discharge were not available.
Radiologist 1 was a 4th-year resident; radiologist 2 had more than 20 years of working experience. They were permitted to use all sequences of the WBCT, including the localiser, and to perform additional reconstructions if needed. They were asked to list all bony injuries of the upper extremities without further instructions. After the blinded reassessment, all reported and previously unreported fractures were tested for inclusion and visibility in the CT by three of the authors. Discrepancies were resolved by majority vote.
Eight fractures were found that had not been reported during hospitalisation. These missed fractures were added to the number of fractures diagnosed later than 24 h. As they probably would not have needed surgical treatment and had likely healed fully, we decided not to contact the patients, who should not be confronted with memories of the traumatic event. This decision was supported by the ethics board of our institution, also considering that there would be no therapeutic consequence by now.
Time of imaging and slice thickness of the axial slices were analysed. For the time diagrams, fractures and cases were categorised as diagnosed '< 24 h', '> 24 h', or 'missed'. Cases with at least one missed fracture were labelled 'missed'; of the remainder, cases with at least one late diagnosis were labelled '> 24 h'. The others were '< 24 h'.
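The case-labelling rule described above can be summarised in a few lines; the following is a minimal sketch in Python, with field names ('reported', 'hours_to_diagnosis') assumed for illustration rather than taken from the study data.

```python
def label_case(fractures):
    """fractures: list of dicts with assumed keys 'reported' (bool) and 'hours_to_diagnosis' (float)."""
    if any(not f["reported"] for f in fractures):
        return "missed"      # at least one fracture never reported during hospitalisation
    if any(f["hours_to_diagnosis"] > 24 for f in fractures):
        return "> 24 h"      # at least one late diagnosis
    return "< 24 h"          # all fractures found within 24 h

# Example: one timely and one late diagnosis -> the case counts as '> 24 h'
print(label_case([{"reported": True, "hours_to_diagnosis": 6.0},
                  {"reported": True, "hours_to_diagnosis": 30.0}]))
```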
Statistical tests of categorical variables with at least 5 expected cases in each cell were performed with the Chi-square test; Fisher's exact test was used for tables that did not meet this requirement. Differences between continuous variables were tested using the unpaired t-test. A p-value of ≤ 0.05 was defined as significant.
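As an illustration only (the study used its own software and data), the sketch below shows how the test choice described above could be implemented in Python with SciPy, using a hypothetical 2 × 2 table and hypothetical slice-thickness values.

```python
import numpy as np
from scipy import stats

table = np.array([[12, 8],
                  [20, 4]])   # hypothetical counts (e.g. late vs. timely diagnosis by group)

# Expected counts under independence decide between Chi-square and Fisher's exact test
expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0, keepdims=True) / table.sum()
if (expected >= 5).all():
    _, p, _, _ = stats.chi2_contingency(table)
else:
    _, p = stats.fisher_exact(table)
print(f"p = {p:.3f} (significant at p <= 0.05: {p <= 0.05})")

# Unpaired t-test for a continuous variable such as slice thickness (hypothetical values, mm)
late_or_missed = [1.0, 1.5, 2.0, 1.2, 2.5]
within_24h = [1.3, 1.8, 2.5, 1.1, 0.9]
print(stats.ttest_ind(late_or_missed, within_24h))
```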
The local institutional ethics committee of the University of Greifswald (Ethikkommission an der Universitätsmedizin Greifswald, Greifswald, Germany) approved the study (BB 054/16a) and stated that there are no ethical or legal concerns regarding this study. The decision was based on the Helsinki declaration. The need for consent for the retrospective use of patient data was waived by the institutional ethics committee of the University of Greifswald.
Informed consent was obtained from the radiologists who participated in the study.

Table 1: The number of fractures that were known at discharge is shown together with the number of fractures that were found within 24 h. On review of all imaging data, we determined how many fractures were included in the CT scan area and how many could be recognised.
Results
Seventy-two fractures of the hand or forearm were reported in the discharge letters of 36 cases. Forty-nine were found within 24 h after admission by WBCT or additional diagnostic imaging. On reassessment of all reported fractures, 65 were visible; thus 25% of the visible fractures (16 of 65) were diagnosed late and could have been detected already on admission in the initial WBCT (Table 1). Twenty-four request forms had clinical data on suspected injuries. A suspected injury of the forearm and hands was documented in 13 cases. Eight of those had no injury in the suspected area, of which three were control cases. Five cases had a corresponding injury. Fourteen cases reported the mechanism of injury only, of which four mentioned only 'traffic injury'. Five request forms stated only that the patient was sedated and intubated, and one had no entry.

Table 2: The 'missed' column shows the number of visible fractures from Table 1 that were not found by each radiologist. 'New' fractures are previously unreported fractures, of which 'confirmed' fractures could be confirmed in the WBCT upon review. The 'total visible' column adds the number of visible reported and confirmed new fractures. One metacarpal fracture was described by both radiologists; all other new fractures were mentioned by only one of the two.

Fig. 1: Fracture of the pisiform bone that was found on blinded reassessment. The adjacent slices are shown in the additional video file (Additional file 1).
Blinded reassessment
Both blinded radiologists missed reported fractures and suspected 15 more fractures (Table 2).
We could confirm eight fractures on unblinded reassessment; one was found by both radiologists. Added to the 65 visible fractures that were reported in the discharge documentation, a total of 73 fractures were visible in 33 cases, and 11% (8/73) had been missed. The remaining 11 cases were 7 controls and 4 cases with fractures not visible in the WBCT.
Radiologist 1 missed all reported carpal and phalangeal fractures but found a previously unknown carpal injury (pisiform fracture: Fig. 1 and Additional file 1). Eighteen known fractures were missed by both radiologists, 26 by Radiologist 1 only, and 4 by Radiologist 2 only. Of note is the use of the localiser by Radiologist 2 to look for an injury, as one ulnar fracture was only included there (Fig. 2).

Fig. 2: Fracture of the ulnar diaphysis that was only visible on the localiser, as the arms were not included in the WBCT. Contrast and brightness were adjusted to better show the bones of the forearm.
This fracture was labelled visible in WBCT. Three new fractures of the ulna were of the styloid process in association with a distal radius fracture and might be considered not relevant by some doctors. The remaining four carpal fractures would be treated by a splint, the metacarpal fracture could be fixated by K-wire or splinted depending on possible malrotation as there was no angulation.
Possible factors for missed/late diagnoses
On reassessment, all reported fractures were considered visible by both radiologists when confronted with the diagnosis. Reasons for fractures missed on reassessment included visibility in only one orientation, artifacts, and in some cases no identifiable reason. Two fractures suspected on reassessment (scaphoid and metacarpal) could not be confirmed or rejected and would have been followed up by additional imaging; they were not counted as fractures for this study.
Analysis of the time of day showed a tendency for missed or late diagnoses of potentially visible fractures for WBCT that were performed around 5 pm and 1 am (Fig. 3).
Cases with at least one missed or late diagnosis of a fracture of hand or forearm showed a similar distribution (Fig. 4).
The same figures showing only the number of missed and late fractures and cases are shown in Additional files 2 and 3.
Slice thickness ranged between 0.9 and 5 mm with an average of 1.7 (CI 1.5-2.0, SD 1.1) for all 73 visible fractures.
Fractures that were diagnosed late or missed showed no difference in WBCT slice thickness to those found within 24 h (late/missed: n = 12, 1.7 mm, CI 1.1-1.
Discussion
Fractures of the forearm and hand in patients with multiple trauma can occur in 36% of cases [12]. In our original study population, we determined a prevalence of 12.1% for late diagnosis of fractures of the forearm and hand [6]. In this subsample, we found 8 visible fractures in addition to the 65 reported fractures that were visible in the WBCT on reassessment. Missed fractures that were found during reassessment accounted for 11% of visible fractures in the WBCT. Even when not counting the three fractures of the ulnar styloid that were associated with a radius fracture, the remaining five fractures would add more than 7% that might not receive proper treatment.
The treatment consequences of reported delayed diagnoses have been shown in our previous study [6]. In this study, one missed fracture might have needed a surgical treatment.
Previous studies reported rates of missed injuries of up to 39%, corresponding to 65% of analysed patients having a missed injury [3,13,14]. However, these injuries were eventually found during the treatment of the patient and were not completely missed.
Both radiologists performed poorly in the reassessment of the WBCT, missing more than 20 of the 65 known fractures, yet both found previously undetected fractures. One reason for the poor result might be motivation, as they performed the examinations during breaks or after work; however, that would not account for the number of new fractures that were found. Another explanation might be incomplete clinical data in the request forms. In both hospitals, the trauma surgeon would talk to the radiologist directly and discuss clinical signs that could hint at an injury. Depending on the patient's condition, the person responsible for requesting the WBCT might not want to delay the procedure by writing a detailed description. As little clinical data were available for the reassessment, both radiologists who reassessed the cases had few clues on possible injury areas and had to scan all areas with the same attention. In our sample, late and missed diagnoses cannot be attributed to lower image quality regarding slice thickness.
Tertiary trauma survey can detect 56% of early missed injuries within 24 h [14]. The rate might be increased by the addition of a repeated radiological survey along with the clinical examination.

Fig. 4 caption (fragment): If at least one fracture was missed or found after 24 h, the case label was set to 'missed' or '> 24 h'. The same diagram is shown without cases with fractures found within 24 h in Additional file 3.
In a systematic analysis of emergency radiographs for the extremities, the most common reason for a missed fracture was subtlety of the fracture [15]. The proposed solution was adequate training. Regarding our sample, the same might be true beginning with examination of all available data, including the localiser, and paying attention to hand and forearm when vital injuries have been excluded. While an experienced radiologist might have included inspection of the localiser in his routine, a less experienced viewer might forget it. The value of localiser examination was shown in a case series of abdominal CT scans that had a visible lung mass in the localiser that was not included in the axial sections and diagnosed several weeks later [16].
While the numbers are too small to be certain, most missed fractures in our sample appeared in the late afternoon and shortly after midnight. In the participating centres that provided the cases, around 5 pm the first shift would end and the second would already have worked for several hours. Atypical working times may have a negative effect on psychomotor performance and lead to a higher risk of accidents and mood disturbances [17]. In addition, time of day was shown to matter for alerting attention, in contrast to orienting and executive attention, the former likely being more important for the assessment of radiographs [18]. Radiology reports were more often edited at the end of shifts around 5 pm and with increasing working hours [19]. Fatigue and experience of radiologists have an influence on diagnostic efficiency and efficacy [20,21]. Shift workers who worked into the night were shown to reach a peak on the Karolinska Sleepiness Scale as the night advanced [22]. As we analysed a defined area and the radiologists missed fractures on reassessment without a discernible pattern, a satisfaction-of-search effect cannot be excluded but is not likely in our study [23].
Conclusions
The number of late diagnosis or completely missed fractures of the hand and forearm may be reduced by a repeated survey of WBCT with focus on the extremities.
| 2021-05-05T00:09:04.666Z | 2021-03-17T00:00:00.000 |
{
"year": 2021,
"sha1": "11fdd0ac77e97aa3e25516bb7b42774456ac8e2e",
"oa_license": "CCBY",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-021-04425-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "af715b22096fbe7764875da12a978362bbfee18c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
| 56231180 | pes2o/s2orc | v3-fos-license |
ROLE OF WEIGHT REDUCTION VERSUS PHYSICAL THERAPY IN MANAGEMENT OF OBESE PATIENTS WITH KNEE OSTEOARTHRITIS
Heba A. Seliem 1, Eman E. El-Shahawy 1, Samah F. Abbas 1 and Asmaa A. Baz 2. 1. Department of Rheumatology and Rehabilitation, Faculty of Medicine, Zagazig University. 2. Ministry of Health, Egypt.
Obesity and knee OA are two of the most common chronic diseases and are often co-morbid. The risk of knee OA related to weight gain and obesity begins from an early age. Weight loss reduces the risk of incident knee OA and, in established disease, reduces symptoms, improves function and is likely to reduce disease progression (4).
The precise mechanism by which obesity leads to OA remains unknown, but is likely to be due to a combination of mechanical, humoral and genetic factors. Weight loss has clear medical benefits for the obese patient and seems to be a logical way of relieving joint pain associated with degenerative arthritis (5).
Exercise therapy reduces pain and activity limitations in OA of the knee. Classical exercise therapy is aimed at improving muscle strength, aerobic capacity, range of joint motion, and training of walking and activities of daily living (6).
Weight loss, preferably combined with exercise therapy, reduces pain and activity limitations in OA patients who are overweight (6).
Aim of the work:-
The aim of the work was to study the effects of physical therapy versus weight reduction, and their combined effect, in obese patients with knee osteoarthritis.
Patients and Methods:-
This study was carried out at the Rheumatology and Rehabilitation Department of Zagazig University Hospitals, Egypt, on 156 obese patients (137 females and 19 males) aged between 38 and 57 years, with a disease duration of 1 to 9 years of primary knee OA diagnosed according to the American College of Rheumatology (ACR) classification criteria for OA (7).
After taking written consent from them for ethical consideration, patients were randomly divided into three groups: Group I underwent physical therapy only. Group II underwent weight reduction only. Group III underwent combined physical therapy and weight reduction. All patients in this study had radiographic severity grade I or II according to the Kellgren-Lawrence grading system (8).
Major exclusion criteria were: other rheumatic diseases that might be responsible for secondary OA; haematological disorders; cardiovascular, renal or hepatic diseases; DM or other endocrinal disorders; grade III and IV knee OA; or a history of knee trauma. Before inclusion in the study, fasting blood glucose was measured as well as hemoglobin and TSH. All patients meeting the previously mentioned inclusion criteria were initially assessed at the 1st visit by full medical history, general and systemic examination, local examination of both knees, assessment of pain severity using the visual analogue scale (VAS) (9), measurement of functional disability using the WOMAC questionnaire (Western Ontario and McMaster Universities Arthritis Index) (10), determination of ambulation activity by measuring walking speed (the time needed, in seconds, to complete a distance of 100 meters on a treadmill as tolerably and quickly as possible) (11), and measurement of body mass index (BMI) using the following equation: BMI = weight (kg) / height (m)² (12).
Calculating estimated nutrient needs: Nutritional needs were calculated for patients in groups II and III according to the Harris-Benedict equation, which estimates resting energy expenditure (REE) (13), and were recalculated every 2 weeks to adjust the diet program.
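As a worked illustration (not taken from the paper's data), the sketch below computes BMI and an REE estimate; the coefficients are those of the original Harris-Benedict equations, which is an assumption since the paper does not state which revision was used.

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def harris_benedict_ree(weight_kg, height_cm, age_years, sex):
    """Resting energy expenditure (kcal/day), original Harris-Benedict coefficients (assumed revision)."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

# Hypothetical patient: 95 kg, 1.62 m, 50-year-old female
print(round(bmi(95, 1.62), 1))                            # ~36.2 kg/m^2 (obese)
print(round(harris_benedict_ree(95, 162, 50, "female")))  # ~1629 kcal/day at rest
```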
Physical therapy programs:-Physical therapy programs were carried out for patients in groups I and III in the form of TENS, quadriceps exercises, ultrasound, cold packs, infra-red or faradic stimulation. These modalities were chosen according to the severity of symptoms and were re-evaluated every 2 weeks to adjust the program.
Follow up sessions:-
All patients in this study were reevaluated after 2 and 4 months by: Full medical history and physical examination, reassessment of VAS scale and WOMAC index, reassessment of ambulation activity by measuring walking speed and reassessment of BMI.
Statistical analysis:-The statistical analysis of the data was done using the SPSS program (Statistical Package for the Social Sciences), version 20.0. Data were described as mean and standard deviation for quantitative variables and as frequency and proportion for qualitative variables. For quantitative data, the one-way ANOVA test was used to compare the means of the three groups. The repeated-measures ANOVA test was used to compare the means of each group at the three readings during follow-up. For qualitative data (frequency and proportion), the Chi-square test was used. Correlations between quantitative variables were tested using the Pearson correlation test.
The mean change was calculated as the difference between the mean of the variable after the intervention and the mean of the variable at baseline. All data were tested for normality of distribution prior to any calculations. A statistically significant difference was considered at P < 0.05, and a highly significant difference at P < 0.001.
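For illustration only (with synthetic data, not the study's), the following sketch shows the kinds of comparisons described above in Python with SciPy: a one-way ANOVA across three groups and a Pearson correlation between weight change and WOMAC change.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 4-month WOMAC scores for three hypothetical groups of 50 patients each
group1, group2, group3 = (rng.normal(mean, 5, 50) for mean in (30, 25, 20))
print(stats.f_oneway(group1, group2, group3))        # one-way ANOVA across the three groups

# Synthetic correlation between weight change (kg) and WOMAC change (score points)
weight_change = rng.normal(-8, 3, 50)
womac_change = 0.8 * weight_change + rng.normal(0, 2, 50)
print(stats.pearsonr(weight_change, womac_change))   # Pearson correlation of the two changes
```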
Results:-
The study was a clinical trial conducted on 156 cases with a disease duration of 1 to 9 years of primary knee OA, who were divided into three groups, each of which included 52 cases. Seven cases dropped out during the study (4 refused to continue and 3 had depression). Finally, we had 149 patients divided into 3 groups: 50 patients in group I (44 females and 6 males) aged between 38 and 52 years who underwent physical therapy only, 50 patients in group II (45 females and 5 males) aged between 38 and 57 years who underwent weight reduction only using a weight reduction program, and 49 patients in group III (44 females and 5 males) aged between 38 and 55 years who underwent combined physical therapy and weight reduction (using the same weight reduction program).

Fig 1:-Correlation between the change of body weight and the change of WOMAC score in group II patients.
Discussion:-
This study compared the efficacy of physical therapy, weight reduction and their combination in 149 obese patients (132 females and 17 males) aged between 38 and 57 years with primary knee OA of 1 to 9 years' duration.
Patients were divided into 3 groups; group I underwent physical therapy only, group II underwent weight reduction only and group III underwent both physical therapy and weight reduction.
Regarding pain and functional scores of the studied groups at the beginning of the study, shown in Table 2, WOMAC scores ranged from 6 to 44, VAS scores ranged from 3 to 7 and walking speeds ranged from 78.8 to 105.7 seconds.
The Harris-Benedict equation (13) was used to calculate nutritional needs for patients undergoing weight reduction in groups II and III. Also, different modalities of physical therapy were used, including TENS, quadriceps exercises, ultrasound, cold packs, infrared and faradic stimulation, which were chosen according to the severity of symptoms for patients in groups I and III.
In the present study, there was no significant difference between patients in all groups regarding their demographic data, pain and functional scores, clinical signs and radiographic grades of OA at baseline.
In this study, there was no significant difference between the three groups at baseline as regards body weight and BMI, while a highly significant difference was present between the three groups after 2 and 4 months. A highly significant difference was present regarding body weight and BMI within group II and group III after 4 months, while no significant difference was found in group I. Group II had the best improvement of body weight and BMI, followed by group III, while group I had the least, as detected by comparing the mean change of the 3 groups after 4 months.
These results agreed with Huang et al. (11), who classified 126 obese osteoarthritic patients into 3 groups: group (a) underwent weight reduction, group (b) underwent electrotherapy and weight reduction, while group (c) underwent electrotherapy only. They reported a significant change of body weight after 6 months in groups (a) and (b). However, they used a different protocol than this study, as they used a triple regimen for weight reduction including auricular acupuncture, diet control and aerobic exercises. A study by Lee and Kean (14) was also in agreement with this study: they found a significant difference in body weight among the studied groups in their meta-analysis of a total of 36 different studies on obese patients with knee OA, all of which used diet control with an exercise program to achieve weight reduction.
In the present study, a highly significant difference from baseline was found regarding WOMAC, VAS and walking speed among the three groups at the end of the study. Improvement was best in group III, followed by group II, while group I had the least improvement, as detected by comparing the mean change of WOMAC, VAS and walking speed of the three groups at the end of the study.
Results of the present study were in agreement with the results of Huang et al. (11). They used VAS and walking speed similarly to us, but used a different functional index (the Lequesne index) to assess functional disability. They found a significant difference in pain and functional indices at the end of their study, with the group that underwent combined weight reduction and physical therapy doing better than the other groups that underwent one method alone.
Results of the present study also agreed with the results of Christensen et al. (15). They also used the Lequesne index to assess knee function, while walking speed and VAS were used to assess knee pain. They found a significant difference in knee pain and functional measures, with patients who did both weight reduction and physical therapy doing better than patients who underwent one modality alone.
In the current study, the change of body weight was significantly correlated with the change of WOMAC score, VAS pain score and improvement of walking speed among patients in groups II and III, as shown in Tables 4 and 5, while no significant correlation was found in group I for all previously mentioned parameters, as shown in Table 3.
The study of Huang et al. (11) was in agreement with the results of the current study, as they found that weight reduction was a practical adjuvant treatment in the rehabilitation of patients with knee OA. They suggested that the amount of weight reduction must be greater than 15% of the initial body weight to provide effective treatment of joint pain. Furthermore, if the weight reduced was more than 12% of the initial body weight, an acceptable functional status would be obtained.
Another study agreeing with our study was done by Christensen et al. (16), who examined the efficacy of weight reduction on improvement of function in 80 patients with knee OA and showed that a weight reduction of 10% improved knee function by 28%.
Messier et al., (17) concluded that the combination of modest weight loss plus moderate exercise provides better overall improvements in self-reported measures of function and pain and in performance measures of mobility in older overweight and obese adults with knee OA compared with either intervention alone.
A study by Miller et al., (18) suggested that intensive weight loss intervention incorporating energy deficit diet and exercise training improves physical function in older obese adults with knee OA. But they said that greater improvements in function were observed in those with the most weight loss.
Results of the present study were in agreement with the meta-analysis by Christensen et al. (15), which indicated that physical disability of overweight patients with knee OA diminished after a moderate weight reduction regimen. The analysis supported that a weight loss of >5% should be achieved within a 20-week period.
Grundy et al. (19) found that the combination of weight loss and exercise provides better improvements in physical function and pain in obese adults with knee OA compared with either intervention alone; exercise used in association with weight loss provides better improvements in physical capacity and muscle strength. The study of Riddle and Stratford (20) suggested a dose-response relationship between changes in body weight and corresponding changes in pain and function. The threshold for this response gradient appears to be body weight shifts of ≥10%. Weight changes of ≥10% have the potential to lead to important changes in pain and function of the knee.
Jenkinson et al. (21) studied the effects of weight loss and quadriceps exercise on 389 men and women aged 45 and over with a body mass index (BMI) ≥ 28. They disagreed with this study, as they suggested that simple knee strengthening exercises can significantly reduce knee pain and improve knee function in overweight and obese people with knee pain. A moderate sustained weight loss was achieved with dietary intervention but with no apparent influence on pain or function.
At the end of the present study, we found that physical therapy is an effective method of reducing pain and improving function in obese patients with grade I and II primary knee OA. Weight reduction is also an effective treatment modality for improving pain and knee function in these patients. The combination of physical therapy and weight reduction gives superior results to either method alone in obese patients with knee OA regarding reduction of disability and improvement of knee pain, physical function, daily activities and quality of life.
| 2019-04-22T13:04:55.914Z | 2017-01-31T00:00:00.000 |
{
"year": 2017,
"sha1": "a6bb8bc518a0a3a40249aa329e3319c2e5a62b39",
"oa_license": "CCBY",
"oa_url": "http://www.journalijar.com/uploads/416_IJAR-14680.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c7483255b5b7a669e771ea433abd29c370cbdd44",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
}
| 240549300 | pes2o/s2orc | v3-fos-license |
“Un-forming” fibre-steered preforms: Towards fast and reliable production of complex composites parts
Automated Fibre Placement (AFP) allows for efficient deposition of composite prepreg materials at large scale in a reliable and reproducible way, while keeping human effort to a minimum. However, the technique is not perfectly suited to manufacturing small/medium parts with complex geometries. Deviation between as-designed and as-manufactured parts is almost inevitable, as is the occurrence of process-induced defects. In this study, an alternative design and manufacturing process is proposed. Instead of depositing composite tapes directly onto the complex mould, a flat tailored preform made from steered fibre tows is created first, and the flat preform is subsequently formed into a 3D complex shape. The fibre path in the flat tailored preform is derived from a new virtual 'un-forming' process of a complex 3D part design with target fibre paths. To demonstrate the process, a small doubly curved composite part was un-formed. Fibre-steered tailored preforms were created using the continuous tow shearing (CTS) technique and then formed into the target shape using double diaphragm forming. The as-manufactured part was compared with the as-designed part as well as a part manufactured from straight fibre prepreg. The results demonstrated the feasibility of the virtual un-forming process and the potential of the proposed manufacturing route.
Introduction
In the past two decades, the increasing demand for carbon fibre reinforced composites in the aerospace industry has been a key driver of the development and improvement of automated composite deposition [1]. Automated fibre placement (AFP) is the most widespread automated composite manufacturing technique, efficiently integrating multiple processes into a single step. One of the major benefits from AFP is the key role it has played in the advancement of variable angle tow (VAT) composites, where fibre reinforcements are steered along optimised curvilinear trajectories. VAT laminates allow for broadening the design space and improvements in composites' load-bearing capability through tailoring of stress distribution for structural applications [2][3][4][5]. State-of-the-art AFP machines accept both dry fibre and prepreg tapes with widths ranging from 1/8″ to 1″. The combination of the steering capability and the ability to deposit narrow tapes enables AFP systems to manufacture large and simply contoured structures (e.g. the fuselage in the Boeing 787 and Airbus A350 wing skins [6]) and has helped considerably in widening the range of application of composite structures.
However, many of the advantages of AFP become less obvious and lead to manufacturing defects when laying up over a highly doubly curved surface and around tight radii. In these cases, the outer and inner edges of the tape are subjected to different stress states, and the compressive stress generated along the inner edge of the tape results in out-of-plane wrinkles, leading to poor layup quality, which ultimately impacts the structural performance of the parts [7][8][9][10]. AFP processes involve fibre tapes being (pre-)heated, deposited and consolidated by a roller along a pre-defined path. Consequently, many machine parameters need to be adjusted (i.e. roller material, compaction pressure, temperature of the heating element, deposition speed, etc.) and can all affect the final quality of the layup. Understanding how each of these parameters and their combination impact the layup quality is an extremely complex task, and the easiest way to decrease the appearance of defects is often to simply slow down the deposition speed [9], particularly when directly laying up on complex (i.e. doubly curved) 3D surfaces. In the cases where manufacturing defects are observed, the layup is often completely reworked. The time required for inspection and rework can normally take up to 63% of the overall process window, even exceeding the laydown time, and ultimately leads to high running costs and material wastage [11].
An alternative manufacturing technique to AFP is the forming of fibrous sheets (prepreg or dry) onto a 3D mould [12]. The part thickness can be built up by forming either each layer one by one (i.e. sequentially) or forming a prebuilt multi-layer stack directly [13,14]. The main advantage of forming over the AFP process directly onto the mould is its high material deposition rate. However, if the fibre paths in the preforms are straight, forming would lead to significant deviation between the fibre paths in the physical manufactured part and the design intent, where the optimal fibre orientations often form curved paths on the mould surface. This deviation results in less efficient structural designs and consequently an unnecessarily large safety factor. In addition, in a similar way to what is seen during the steering process in AFP, forming often leads to the generation of defects such as in-plane and out-of-plane wrinkles, which further impact the structural performance of the parts. As the preform is formed onto the mould, it is deformed to accommodate the change of shape. The near inextensibility of the fibres generates internal constraints in the preform material that may prevent the required distortions and thus lead to fibre path deviation away from the design intent. Other factors such as stacking sequence for multi-ply laminates and ambient forming temperature also profoundly influence the forming process and defect generation, and their significance was analysed in [14,15].
In this study, the feasibility of a novel manufacturing technique, where automated ply deposition and forming are combined, was demonstrated. A unique automatic ply deposition process that quickly lays up onto a flat tool whilst steering fibres was used to create custom preforms that can then be formed to the required shape in 3D. The fibre paths in the flat preform were determined using a novel un-forming, i.e. a reversed forming, simulation. The proposed technology is defined by three key steps (see Fig. 1):
• Generating the 2D tailored preform design from the target 3D fibre path: This is achieved through a novel numerical simulation approach that derives the required fibre orientation in the flat tailored preform via the "un-forming" of the as-designed part with target 3D fibre paths.
• Continuous Tow Shearing (CTS) process to create fibre-steered preforms:
This process is an evolution from AFP that has been proven to minimise steering defects. The method allows for the creation of a steered preform by continuously shearing the fibre tows or tapes instead of bending them as in conventional AFP. The detailed description of the CTS process can be found in [16].
• Forming the fibre-steered preforms into the finished 3D part: The tailored preforms with steered fibre paths obtained in previous step is then formed on a mould of the required 3D shape using double diaphragm forming technique [17].
The numerical process by which the fibre orientation in the flat tailored preform is obtained holds some similarities with previous work from Rudd et al. [18], in which an optimised flat preform was derived via the concept of an un-draping process (modified kinematic draping) from a 3D hemisphere geometry. Kinematic draping is a widely used tool to approximate fibre orientation during fabric draping onto complex 3D surfaces and is normally based on the pin-jointed net concept, with assumptions of inextensible fibre tow, zero preform shear stiffness and absence of process constraints [19]. In this work, a finite element analysis (FEA) based method, which accounts for the real physics of the problem (including prepreg material behaviour and boundary conditions), was used instead of a kinematic drape algorithm. This allows the fibre paths in the flat preform to be directly derived from only one un-forming simulation (i.e. no optimisation cycles are required in the case of the geometry studied here). Hence the final fibre orientations after forming, derived from a forming simulation, should show only small deviation from the targeted design. In addition to the improvements in laminate quality and production rate, the method also reduces waste compared to parts manufactured through forming of straight fibre plies.
This work is an expanded investigation based on previous study [20] in which the proof of concept and general workflow were initially demonstrated. The manuscript is organised in a chronological order from process modelling development to experimental demonstration. Section 2 provides a description of the simulation platform used to calculate the orientation of the fibre paths in the flat preform and the un-forming analysis. Section 3 reports the experimental work carried out to demonstrate the feasibility of the numerical technique. Finally, Section 4 discusses and concludes on the viability of the technique by comparing the results from the analysis, with the experimental measurements and the original design requirements.
General modelling strategy
In order to better present the entire process and each simulation's input and output, a workflow chart of the model strategy is illustrated in Fig. 2, in which line arrows indicate direct input or output within the corresponding model and hollow arrows indicate post-processing between different steps that were outlined in previous section.
An important factor in the forming of fibrous material is the boundary conditions applied to the preform. Even two identical preforms formed on the same tool, one through a forming process with a blank-holder and the other through double diaphragm forming (DDF), will display fundamentally different wrinkle patterns due to the different tension on the preform [21]. These are typical manufacturing-induced defects that a simpler kinematic modelling approach [18,22] would not be able to capture. There is, therefore, a strong need for a more accurate FE-based approach that can capture these load-path dependencies.
In the present study, DDF was selected due to several of its merits. It is capable of multi-layer forming, which is ideal for low-cost and high-volume applications; secondly, the manufacturing defects it generates and its processing pressure present some similarities to industrial processes such as hot drape forming [23,24]. Another benefit of DDF is the reduction of material wastage as, unlike forming with a blank holder, no excess preform material is required to be constrained during forming, which makes net-shaped preforms a possibility. In addition, the effect of the boundary conditions given by the blank holder on the forming response of a preform is complex in nature, and computationally expensive to simulate [25,26].
Because of the complex effect of boundary conditions and their influence on the stress built up within the preform, setting up an un-forming process model is not straightforward. Furthermore, the deformed shape of the diaphragm material after forming is unknown and cannot be used in a process where the boundary conditions of a forming process are simply inverted. A remedy was found by setting up the following procedure (see Fig. 2): first, an initial DDF process simulation of a 'surrogate' preform with representative material properties was carried out; then the reversed nodal displacement history of the diaphragm finite element model was applied as a displacement field input to the simulation of the un-forming process; and finally, the 'surrogate' preform was replaced by a material model with more realistic properties. Forming of the fibre-steered preform was finally performed to check if the fibre orientation in the formed fibre-steered part was consistent with the as-designed part (the 3D as-designed preform model). The un-forming and forming simulations were carried out using the commercial FE package (Abaqus/Explicit) with a user-defined material model (VUMAT).
Mould and sample design
To demonstrate the advantages of the proposed process and to validate the model, a twisted, swept and doubly curved 3D surface (referred to as "twisted plate" hereafter) was designed and manufactured. Fig. 3 shows the basic dimensions and overall configuration of the mould. The region that extends from the twisted plate's edges serves as a 'platform' to prevent the preform from being folded around the edges and corners of the mould during forming. The CAD geometry was built and then imported into Abaqus for further processing. Only the top surface of the virtual 3D mould was modelled and meshed with rigid material properties and membrane elements, respectively, to reduce computational cost. The CAD geometry of the top surface of the designed mould was then used again, separately, in Abaqus to model the as-designed preform part.
Model set-up
The chord, in some areas of the twisted plate, varies along the span direction. This is problematic, as a simple structured mesh scheme (i.e. one where the number of elements along the chord direction is the same everywhere) does not allow the spanwise elements to have a constant (chordwise) width, which could be made equal to the width of a steered fibre tow. To define the as-designed spanwise fibre path, the preform mesh was thus created with an in-house Python script that kept the true 3D width of each spanwise strip of elements constant and equal to the width of a fibre tow. These strips of elements represent the as-designed fibre tow paths on the part, and their elemental orientation represents the fibre orientation on the as-designed 3D part. Note that the way of deriving the as-designed 3D part used in this study was inspired by a general design rule of maximising the amount of continuous fibre running spanwise from root to tip in order to retain good structural performance of the part under centrifugal force. In the context of this manuscript, it only served as a 'target' fibre orientation that the un-forming modelling started from. To accurately capture the material behaviour of the preform during the forming/un-forming process, an approach that was initially developed for the modelling of forming of woven fabric [21] and later extended to capture the in-plane shear response of UD prepreg [27] was adopted here.
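The in-house meshing script itself is not published; as a loose, hedged sketch of the underlying idea only, the snippet below offsets a planar reference spanwise curve by integer multiples of the tow width so that each chordwise strip keeps a constant width (a 2D simplification: the authors' script works with the true 3D width on the mould surface, and the tow width and curve used here are invented).

```python
import numpy as np

def offset_polyline(points, distance):
    """Offset a planar polyline (N x 2 array) along its local in-plane normal."""
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])  # 90-degree rotation of tangent
    return points + distance * normals

tow_width = 6.35  # mm, illustrative value only
span = np.linspace(0.0, 300.0, 50)
reference = np.column_stack([span, 10.0 * np.sin(np.pi * span / 300.0)])  # synthetic curved path

# Centrelines of the strips, one tow width apart, spanning the chord of the part
strip_centrelines = [offset_polyline(reference, k * tow_width) for k in range(-10, 11)]
```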
Reduced-integration shell elements (S4R in Abaqus/Explicit) and membrane elements (M3D4R in Abaqus/Explicit), with size of 4 mm, were superimposed by sharing their nodes, to represent the out-of-plane bending and in-plane material properties of the preform, respectively. The Young's moduli in the shell and membrane elements were set to different values in such a way that membrane elements were used to simulate the in-plane behaviour (extension in fibre and transverse directions and in-plane shear) of a unit area of the preform whilst the shell elements were used to represent the fabric out-of-plane bending behaviour. The Young's modulus of shell elements was back calculated from the flexural rigidity of the preform and the transverse and in-plane properties were set to small values. The membrane and shell elements thicknesses were both set to the actual preform thickness (i.e. 0.25 mm).
Specific values of the Young's moduli of the membrane and shell elements are given in Table 1. These elements collectively simulate the complete material behaviour of a prepreg sheet. This approach works well for both woven fabrics and unidirectional prepregs [21,27].
To accurately represent the orthotropic nature of the material during forming, where rigid body rotation can be significant, the constitutive behaviour of the preform was implemented as a hypoelastic material model via a VUMAT material subroutine in Abaqus/Explicit. A local 2D orthogonal coordinate system in the Green-Naghdi (GN) work frame (i.e. the Abaqus VUMAT work frame in which the elemental strain input and stress output are expressed) is defined, and the fibre direction relative to this local coordinate system is set as a VUMAT input. During forming simulations, this coordinate system is subject to a rigid body rotation represented by a rotation tensor R, which is obtained from the polar decomposition of the deformation gradient, i.e. R = F·U⁻¹, where F is the deformation gradient and U is the right stretch tensor describing the internal straining of the material. A rotated local coordinate system can then be derived from its initial state via the rotation tensor R. Knowing the current deformation gradient, the deformed fibre direction can be calculated. Strain increments received by the VUMAT in the GN work frame are then converted to the fibre-direction work frame for the constitutive calculation, while stress increments output by the VUMAT in the fibre-direction work frame are converted back to the GN work frame for the next iteration. Further details of this material model are not elaborated here, as they can be found in the supplementary information and [21]. This VUMAT subroutine is made freely available by the Bristol Composites Institute (BCI) and can be requested by e-mailing bci-github@bristol.ac.uk.
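As a simplified numerical illustration of the frame handling described above (a 2D, plane-stress sketch, not the actual Fortran VUMAT), the following uses a polar decomposition of an invented deformation gradient to recover the rigid rotation and then rotates a strain increment from the GN frame into the current fibre frame.

```python
import numpy as np
from scipy.linalg import polar

F = np.array([[1.05, 0.10],
              [0.02, 0.98]])            # hypothetical in-plane deformation gradient
R, U = polar(F)                          # F = R @ U: R is the rigid rotation, U the right stretch

fibre0 = np.array([1.0, 0.0])            # initial fibre direction
fibre = F @ fibre0
fibre /= np.linalg.norm(fibre)           # current (deformed) fibre direction

gn_axis = R @ np.array([1.0, 0.0])       # first axis of the co-rotated GN frame
theta = np.arctan2(fibre[1], fibre[0]) - np.arctan2(gn_axis[1], gn_axis[0])
Q = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # rotates tensor components from GN to fibre frame

d_eps_gn = np.array([[1.0e-3, 2.0e-4],
                     [2.0e-4, -5.0e-4]])  # strain increment expressed in the GN frame
d_eps_fibre = Q @ d_eps_gn @ Q.T          # same increment expressed in the fibre frame
print(d_eps_fibre)
```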
Ogden's hyperelastic material model with representative parameters given in [17,21] and S4R elements, with element size of 4 mm, were assigned and used for modelling the diaphragms. The material parameters used are listed in Table 2.
Un-forming and forming simulations
As mentioned in Section 2.1, the un-forming process model was implemented by directly reversing the forming process. As un-forming cannot be physically realised with a separate experimental validation, the whole process (i.e. un-forming and forming simulations) was validated by comparing the predicted fibre orientation of the formed preform with that of the physical demonstrator in the as-designed state. Good agreement between these different cases proved the validity of the proposed method. As shown in Fig. 2, an initial forming simulation was first carried out. Displacement histories of the diaphragms' nodes were extracted and stored. When simulating the un-forming process, the nodal displacement histories obtained from the initial forming simulation were assigned to the diaphragms' nodes in the opposite direction and reversed time history. This allows both diaphragms to be deformed back to their original flat states.
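The reversal of the diaphragm displacement history can be illustrated with a short sketch (synthetic data and an assumed array layout; in the actual workflow the histories were exchanged as text files with Abaqus): the recorded forming displacements are re-expressed relative to the formed state and played back in reverse time order, so the diaphragms travel from the formed configuration back to flat.

```python
import numpy as np

# Synthetic forming history: n_frames x n_nodes x 3 displacements, growing from flat (zero) to formed.
n_frames, n_nodes = 20, 1000
rng = np.random.default_rng(1)
final = rng.normal(scale=5.0, size=(n_nodes, 3))   # formed-state nodal displacements (mm)
t = np.linspace(0.0, 1.0, n_frames)[:, None, None]
history = t * final                                 # forming: 0 -> final

# Un-forming boundary condition: displacements measured from the formed state, reversed in time,
# so they start at zero (formed configuration) and end at -final (back to flat).
unforming = (history - final)[::-1]

assert np.allclose(unforming[0], 0.0)
assert np.allclose(unforming[-1], -final)
```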
As shown in Fig. 4a, the initial forming simulation model consists of four parts, i.e. two diaphragms, the mould and a 'surrogate' preform. During the initial forming, contact interaction (including friction) between the diaphragms and the preform may affect the deformation pattern, as different materials respond differently when they are formed into the same shape. To take the tangential interactions between diaphragms and preform into account, Coulomb friction coefficients of 0.6 and 0.52 were assigned at the diaphragm-diaphragm and diaphragm-preform interfaces, respectively. These values were experimentally determined in [17] and used in similar simulation work in [17,28].
Table 1: Material input parameters for the elements in the thermoset prepreg model.

Table 2: Material parameters for the Ogden hyperelastic material model used for the diaphragm material, from [17,21]. Parameters μ1, α1 and μ2, α2 are material constants derived by curve fitting of uniaxial and biaxial tension tests.

The 'surrogate' preform model used in the initial forming simulation allows more accurate diaphragm movement to be obtained by considering diaphragm/diaphragm and diaphragm/preform interactions, as well as the effects of the bending behaviour of the preform on the diaphragms' deformation. A simplified elastic orthotropic material model with unidirectional fibre orientation aligned with the plate span was used. The assumption here is that the effect of steered fibres (i.e. preform bending) is small with respect to the diaphragm interactions. The small differences observed in the deformation of the diaphragms in forming simulations of preforms with straight and steered fibres respectively, presented later, show that this assumption is reasonable. The diaphragms' deformation and their interaction with the preform can be significantly more complex for multi-ply preform forming onto a complex mould, in which case it is anticipated that the benefits of using this surrogate preform will become even greater. It may be that in such a case an iterative forming-unforming process will be needed in order to reach the required preform design.
At the end of this initial forming simulation, the diaphragms and 'surrogate' preform fully conformed to the top of the mould. Their deformed shapes were then used as the initial configuration of the un-forming simulations (see Fig. 4b). In particular, an anisotropic material with the fibres' orientation set as in the as-designed preform model was placed between the diaphragms in the un-forming model. Hence, the as-designed 3D preform is deformed by the double diaphragms during un-forming from the twisted shape with element orientations along the as-designed fibre tow paths (as-designed preform in Fig. 4b) to a flat tailored preform with steered fibre paths.
Material properties used in the initial and un-forming simulations as well as the model validation experiment are listed in Table 1. However, the in-plane shear behaviour and transverse deformation of prepreg are strongly influenced by deformation rate (forming rate in this case). During the forming process of 3D complex moulds, different deformation rates will occur at different locations on the preform, which will be affected by the viscoelastic nature of the material [27,29]. The resin-dominated mechanical properties are also highly sensitive to temperature and can vary significantly [29]. Therefore, it is worth noting that the material properties listed in Table 1 and those used in the simulations were representative and limited to the case studied here, and they would require updating, based on material characterisation testing and the given testing facilities or manufacturing environment, for forming of other complex shapes with different materials.

Fig. 4: Illustration of (a) the initial forming model, (b) the un-forming model. Note that the diaphragms' position relative to the mould and the preform and the size of the preform are for presentation only.
As mentioned in Section 2.3.1, the shear modulus (G12), or its variation with shear angle, in the membrane elements can be specified in the hypoelastic VUMAT material model to capture the in-plane shear behaviour of the preform during the forming simulation [21,29]. In this work, the shear modulus of the membrane elements was given a notional value that is much lower than that tested for a similar material in [29], and this low shear modulus was expected to introduce more in-plane shear in the preform to test the robustness of the post-processing tools. However, for the given shape in this work, the numerical and experimental results showed that the material deformations, including transverse extension and in-plane shear, were insignificant to the overall result. In general, because of the orthotropic nature of prepreg, the fibre-direction and transverse-direction bending responses of the prepreg, captured by the shell elements in the hypoelastic VUMAT material model, were significantly different. Therefore, the flexural stiffnesses of the UD prepreg used in the study were also representative and would require cantilever bend characterisation tests, such as those presented in [21,30], to obtain precise properties for future analyses.
As a result of deriving the as-designed 3D part with elemental orientation aligned with the required fibre orientations from the initial structural mesh, each strip of chordwise elements does not terminate exactly at the edges of the original boundary of the twisted plate region. Elements beyond the boundary were trimmed, thus the overall size of the part was preserved. Fig. 5a shows the as-designed preform model in 3D and its mesh, processed from the initial structural mesh. After the un-forming simulation, to avoid any localised response (i.e. residual in-plane shear) in the un-formed tailored preform model (see Fig. 5b) and to make it manufacturable for fibre steering in the experiment, the un-formed preform was further processed and re-meshed by a custom-developed Python script so that spanwise strips of elements have a constant width, representing a CTS-manufacturable preform. It is worth noting that the numerical procedure described here would work in the exact same way for fibre steering in AFP. Finally, the fibre-steered preform was formed and compared with the original 3D as-designed preform, as shown in Fig. 5c and d, respectively. It is worth noting that the un-formed 2D fibre-steered preform model (see Fig. 5b), based on the as-designed preform model, was trimmed to fit the surface of the mould but could not precisely cover the entire surface of the mould due to its mesh size and shape; therefore, it was slightly smaller than the mould surface. The 3D fibre-steered preform model used in the forming simulations (see Figs. 5c and 2) was meshed based on a single fibre trajectory extracted from the un-formed preform, and its base span and chord were set to be equal to the maximum span and chord of the mould surface. This leads to the formed fibre-steered preform being slightly larger than the mould surface and, by extension, larger than the 3D as-designed preform, as shown in Fig. 5d.
All numerical models were processed with 16 CPUs, and the initial forming and tailored preform forming models were completed in under 30 min with appropriate mass scaling and damping to reduce dynamic effects. In the un-forming simulation, nodal displacement histories derived from the initial forming simulation were written out in the form of text files and taken as boundary conditions in the un-forming model. The time required for these text files to be read into memory (~30 min) was longer than the actual simulation time of the un-forming model (~15 min). A more efficient file exchange procedure may help to speed up the overall computational time even further.
Experiments
In this work, an experimental demonstrator of the proposed manufacturing technique was produced, and the results were used to validate the process models presented in Section 2.
A single fibre path was extracted from the mid-chord position of the simulated tailored preform design, as shown in Fig. 5b. A steered prepreg preform was then manufactured by the CTS technique using the extracted fibre path (see Fig. 6). The exceptionally small minimum steering radius of the CTS process allowed such small fibre-steered preforms to be produced without defects. Since the width of the designed part is about 140 mm, two 100 mm wide prepreg tapes (MTM49-3/T800, Solvay, BE) were deposited side by side along the preform width onto a flat surface in the CTS process. To enable the fibre path to be captured by image processing and to make a quantitative comparison between the preforms before and after forming, white lines parallel to the fibre directions were marked at 20 mm intervals on the prepreg tapes before they were fed into the CTS machine. The fibre path, exported in a 2D cartesian coordinate format from the FEA results, was converted to CTS machine code using a previously developed method from Kim et al. [31,32]. In addition, a double diaphragm forming test was carried out using a single UD prepreg sheet with a 0° fibre orientation aligned with the long edge of the plate. This highlighted the benefits of the proposed method compared with a more conventional manufacturing technique using straight-fibre preforms.
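The exact CTS machine-code format follows the previously developed method of Kim et al. [31,32] and is not described here. Purely as a rough illustration, the sketch below computes the local steering angle along the extracted 2D fibre path, the kind of intermediate quantity such a converter would work from; the sample trajectory and the CSV output format are assumptions.

```python
# Illustrative sketch only: local steering angle of a 2D fibre path relative to the tape feed axis.
import numpy as np

def steering_angles(path, feed_axis=np.array([1.0, 0.0])):
    """Return the signed angle (deg) between the local fibre tangent and the feed axis."""
    tangents = np.gradient(path, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    cos_t = np.clip(tangents @ feed_axis, -1.0, 1.0)
    return np.degrees(np.arccos(cos_t)) * np.sign(tangents[:, 1])

# Example: a mid-chord trajectory sampled from an un-formed preform (values are made up).
xy = np.column_stack((np.linspace(0, 280, 57), 8.0 * np.sin(np.linspace(0, np.pi, 57))))
np.savetxt("cts_path.csv", np.column_stack((xy, steering_angles(xy))),
           delimiter=",", header="x_mm,y_mm,angle_deg", comments="")
```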
A double diaphragm forming rig was designed and built to accommodate a small- to medium-sized tooling mould. Fig. 7a illustrates a cross-sectional view of the forming rig. During forming, the preform was placed between two diaphragms and held by a clamping frame on top of the upper diaphragm along the edges of the forming chamber. Prior to forming, 1 bar vacuum pressure was applied between the diaphragms and held throughout the forming process. After the preform was correctly positioned and secured between the diaphragms, the forming chamber was evacuated under 1 bar pressure for forming (see Fig. 7a). The mould, with a geometry identical to the numerical model used in the simulation, was manufactured by 3D printing (Fused Deposition Modelling) and was found to provide sufficient support. It was placed at the centre of the forming rig. The peripheral areas between the forming chamber and the mould were filled with solid high-density Styrofoam blocks, which were trimmed to shape to avoid sharp corners and ensure a smooth surface transition between the forming chamber and the mould.

Fig. 6. Overview of CTS machine (left) and wide-tape CTS machine steering a 100 mm wide unidirectional prepreg tape (right).

Fig. 7b shows an overview of the forming rig with a formed fibre-steered prepreg. Because of the twist of the mould, the position of the mould relative to the forming chamber was also one of the key factors influencing forming quality, as it affects how the contact area between the diaphragm and the mould changes during forming. For forming with a single preform in this study, owing to the low modulus of the preform in the transverse direction (in-plane, normal to the fibre orientation), tension applied by the diaphragm during forming can be detrimental to preserving the steered fibre directions. To minimise this tensioning effect on the preform, the mould was placed such that the forming sequence progresses along the fibre direction from the apex to the bottom corners of the mould, as shown by the numbers in Fig. 7b.
A DSLR (digital single-lens reflex) camera was placed on a supporting frame above the forming rig, focusing downward on the centre of the preform; images were taken after the preform was completely formed to the mould. These images were then post-processed for fibre path extraction.
Results and discussion
Numerical simulations of the initial forming, un-forming and forming of the tailored preform were carried out according to the modelling strategy described in Section 2. To reduce computational cost in the forming and un-forming simulations, the complete forming rig used in the experiments was not modelled; instead, the forming process was modelled by altering the pressure acting on the diaphragms. During the forming simulations, the same uniform pressure distribution was applied to both diaphragm models in opposite directions. Similarly to the modelling techniques found in [17,21], after the diaphragms and preform had stabilised, the pressure acting on the lower diaphragm was gradually reduced to 80% of its original value so that the double diaphragms and the preform model were simultaneously formed onto the mould.

Fig. 5b shows the un-formed tailored preform derived from the un-forming simulation, with the fibre trajectory highlighted. Because highly localised shear deformation was not observed, a single trajectory was sufficient to represent the fibre path for the entire preform; this fibre trajectory was extracted and used to manufacture a fibre-steered preform by CTS. Fig. 5d compares the 3D formed tailored preform with the 3D as-designed preform. The predicted fibre path (represented by strips of elements) in the formed tailored preform is in good agreement with the original as-designed preform.

Fig. 8a shows the flat preform deposited by the CTS machine. After the preform was manufactured, it was transferred to the forming rig and formed into shape. Fig. 8b shows the formed CTS preform. As can be seen from the figure, and as confirmed during the forming test, no splitting occurred between the two parallel 100 mm wide fibre tapes at their joined boundary, demonstrating the efficiency and robustness of the CTS technique in manufacturing defect-free preforms. Several trials were required before the preform could be formed to the shape with full coverage and good alignment to the periphery of the mould, because each position in the tailored preform corresponds uniquely to one position on the mould. A visual assisting tool, such as a projector above the forming rig, could help to align the preform to the mould in future work.

Slippage between the diaphragms and the preform during forming can also affect the forming quality; it is caused by relief of the compression acting on the preform, which leads to local relaxation of the preform. This slippage was expected to be associated with prepreg tack, the modulus mismatch of the two materials, the vacuum pressure between the diaphragms, and the boundary conditions of the diaphragms. To check for preform slippage against the diaphragms during forming, trials were carried out with the top diaphragm marked with grid lines; however, no slippage was observed experimentally. Friction between the diaphragms affects their relative movement in the tangential direction but does not significantly affect deformation in the regions in contact with the preform. Other factors also control the interaction between the diaphragms, such as the uniformity of the vacuum pressure, the type of diaphragm, the boundary conditions, and the ambient temperature (some forming tests may require elevated temperature). The level of characterisation performed here was deemed sufficient for the studied problem.
In other multi-ply forming cases, where the moulds are even more complex than that in the current study, a full numerical sensitivity study of the interfacial properties between the diaphragms, between diaphragm and preform, and between plies may be required. The importance of these interactions varies from case to case and requires in-situ observation and bespoke numerical characterisation/validation based on the given preform stack, diaphragm type, mould shape, boundary conditions, forming facility, test environment, etc.
As mentioned in Section 3, the same forming test was performed with a UD prepreg sheet. Fig. 8c shows the formed UD prepreg with grid lines drawn before the forming test to indicate the fibre orientation before and after forming. The boundary of the twisted plate is also highlighted in the figure. Comparison between Fig. 8b and c shows that, with the UD preform, a large amount of material needed to be trimmed to fit the required (highlighted) surface, leading to waste. The UD preform also has fewer continuous spanwise fibres running from the root to the tip of the shape. In the formed CTS preform, by contrast, the amount of continuous fibre was greater and trimming could be kept to a minimum. This demonstrates the advantages of the steered preform in forming complex shapes. It was also noted that the tailored, steered-fibre preform appeared to be easier to form. This suggests that, in the case of more aggressive mould surface profiles, the technique may, as intended, help reduce the occurrence of defects. More importantly, this novel approach allows variable angle tow designs to be manufactured in such small and highly complex parts without defects, which is not possible using the conventional AFP process. This will enable high-volume production of more structurally efficient, complex composite components at reduced cost and waste.
For a more quantitative validation of the process models and of the proposed manufacturing approach, the deviation of the fibre trajectories from the as-designed fibre paths in the FE forming simulations (see the final step in Fig. 1) was compared with that in the physical demonstrator. As it was difficult to extract the fibre paths experimentally "in situ" in 3D, top-view images of the as-designed preform obtained from the FE simulations were precisely overlaid on the corresponding images of the formed steered preform. Image-processing scripts were developed to extract lines from an image and compare the deviation of the fibre trajectories. The projected fibre orientations, i.e. the marked lines on the steered preform, were compared with the FEA mesh lines of the as-designed preform, as shown in Fig. 9a. A similar comparison was also made between the as-designed preform and the formed UD preform (see Fig. 9b). The fibre angle deviations observed over most of the area of the steered preform were below 5°, while higher values were found at the corner of the preform, caused by misalignment of the initial position of the steered preform before it was formed to the shape. For the purpose of a proof of concept, the current validation process was deemed sufficient as a foundation for future studies.

Fig. 9. Deviation of the experimental projected fibre trajectories between (a) the formed steered preform and the preform with the as-designed fibre trajectory derived in FEA; (b) the UD preform and the preform with the as-designed fibre trajectory derived in FEA.
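The image-processing scripts themselves are not given in the paper; the following is a hedged sketch of one possible approach, using a Canny edge detector and a probabilistic Hough transform to pick up the marked fibre lines and compare their angles between the photographed and simulated overlays. File names and detector thresholds are placeholders.

```python
# A minimal sketch (not the authors' scripts) of line extraction and angle comparison.
import cv2
import numpy as np

def line_angles(image_path, canny_low=50, canny_high=150):
    """Detect straight line segments in an image and return their in-plane angles (deg)."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(grey, canny_low, canny_high)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    if segments is None:
        return np.array([])
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in segments[:, 0]]
    return np.array(angles)

# Deviation between photographed (formed preform) and simulated (as-designed) line angles.
measured = line_angles("formed_steered_preform_topview.png")
reference = line_angles("as_designed_mesh_overlay.png")
deviation = np.abs(np.median(measured) - np.median(reference))
print(f"Median fibre angle deviation: {deviation:.1f} deg")
```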
Conclusions
In this work, a manufacturing process particularly well adapted to the production of complex 3D composite structures is proposed. Instead of depositing carbon fibres directly on the complex mould, a flat tailored preform made from steered-fibre prepreg tapes was created. The flat preform was subsequently formed into the 3D complex shape. The fibre path in the flat, tailored preform is derived from a new virtual un-forming process starting from the as-designed part. The fibre-steered preforms were created using the continuous tow shearing (CTS) technique and formed into shape using double diaphragm forming.
FEA process models and in-house scripts were developed for deriving as-designed fibre paths on arbitrary, yet complex, shapes, and the results were validated by forming a fibre-steered preform. It was shown that a flat, custom fibre-steered preform could be obtained for the as-designed fibre paths via a numerical un-forming simulation. It was also observed that the part manufactured by the proposed process was closer to the as-designed part and required less material than a part manufactured by forming a UD preform. The proposed process effectively decouples a complex, or even impossible, process (i.e. direct AFP on a small-to-medium complex surface) into two easily manageable processes: flat steered preforms are first created using CTS and then formed to the required 3D shape. The un-forming simulation allows a tailored, near net-shape preform to be derived. Compared with more traditional AFP deposition, the technique can help lower the number of trial-and-error iterations and reduce material waste.
In the longer term, the technique could be fully automated by integrating the CTS machine into a process chain in which steered-fibre preforms are manufactured and moved to a forming station by pick-and-place. DDF is also a highly scalable process that uses vacuum alone to generate the forming forces and is both fast and cost-effective. Either one-shot DDF multi-ply forming of the preforms or ply-by-ply sequential single-diaphragm forming is a viable option for building up the part thickness on the tool.
This proposed approach is believed to be suitable for manufacturing small to medium composite parts with complex shapes, which are not currently manufacturable using AFP processes directly. The un-forming simulation is applicable to all 3D shapes, but the numerical fibre-steered preforms generated may not always be manufacturable by current fibre-steering processes. If fibre-steering technology is inadequate to produce such preforms, information from the numerical fibre-steered preforms, and from intermediate steps during the un-forming simulation, can still be useful for creating optimised lay-up designs aimed at reducing manufacturing defects.
Furthermore, the proposed method allows novel variable angle tow designs to be applied to such complex shapes for more advanced performance, which could not previously be achieved for such small parts due to the limitations of current AFP technologies. However, applying this approach to even smaller and more complex moulds than that used in this study may lead to small fibre-steering radii during deposition, which requires further investigation in future studies based on the development of the CTS process. The method's extension to parts of increasing shape complexity and to multi-ply forming, including at elevated temperatures, will be studied in the future.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Aggressive Pyogenic Spondylitis Caused by S. constellatus: A Case Report
Streptococcus constellatus (SC) is a species of Streptococcus belonging to the Streptococcus anginosus group, along with Streptococcus anginosus and Streptococcus intermedius. Despite its commensal nature, underlying risk factors and medical conditions may allow this opportunistic pathogen to cause infections at various anatomic sites. Although SC infections have mostly been associated with bacteremia, some case reports of abscess and empyema formation have been documented. Herein, we report the case of a middle-aged female patient who initially presented with symptoms of radiculopathy. Subsequent neurologic imaging revealed a pyogenic abscess along the paravertebral muscles, which was found to be caused by SC. The patient was successfully treated with drainage of the abscess in the lumbar region and antibiotics, and the symptoms of radiculopathy completely resolved.
Streptococcus Anginosus Group
Streptococcus is a genus of Gram-positive, spherical bacteria belonging to the family Streptococcaceae. The majority of species in this genus are oxidase-negative and catalase-negative facultative anaerobes [1]. The total number of identified species in the genus Streptococcus already exceeds 100, and with technological advancements and the accessibility of next-generation sequencing technologies, this number is likely to increase further.
Streptococcus anginosus group (SAG) consists of three species of streptococci-Streptococcus anginosus (SA), Streptococcus intermedius, and Streptococcus constellatus (SC). This group of streptococci was formerly known as the milleri group though this nomenclature has generally been abandoned. These commensal organisms have been isolated from various anatomical sites and have been shown to cause pyogenic infections that lead to the formation of abscesses [2]. The SAG species are generally isolated from intra-oral samples and are known for their association with dental plaques and periodontal pathologies. Advanced sequencing tools, such as rRNA sequence analysis, have verified the phylogenetic relatedness of SA, S. intermedius, and SC [3].
The most widely studied SAG species is SA. Even though it is a part of the normal flora of the human body, this opportunistic pathogen can cause serious infections in immunocompromised patients. The most common infections it causes are brain, lung, and liver abscesses; however, other clinical presentations have also been described [4]. Bacteremia caused by SA is not as common, and an abscess is frequently the main source [5]. According to the results of that study [5], the most prevalent infections caused by SAG were infections of the skin or soft tissue, followed by intra-abdominal infections, among which liver abscesses were the most common. In contrast, infections of other tissues such as bone (osteomyelitis), the respiratory tract (lung abscesses or empyema), and nervous tissue (brain abscess) were much less frequent. Regarding treatment, infections caused by SAG species usually respond well to penicillin and ampicillin. Notably, many strains are resistant to macrolides and tetracyclines.
Streptococcus Constellatus
As previously stated, SC is a part of SAG and has the potential to spread to physiologically sterile anatomic sites and cause infections, which is predominantly observed in immunodeficient patients. It is a part of the normal mouth, gut, and urogenital tract flora. According to its anatomic distribution, SC may cause suppurative infections of multiple anatomic sites mostly in patients with immunosuppression or undergoing invasive procedures. A retrospective cohort study evaluating the pyogenic potential of species belonging to the SAG reported that SC was mostly associated with bacteremia, while the presence of abscesses or empyema was not as common [6]. Despite that, there have been some case reports of patients presenting with abscesses or empyema caused by SC.
Although infections caused by SC are rather uncommon, some risk factors have been identified, such as male gender, smoking, frequent alcohol consumption, oncological conditions, chronic diseases of the respiratory tract (e.g., chronic obstructive pulmonary disease), intra-oral diseases (especially periodontal disease), diabetes mellitus, hepatitis, and HIV infection [7]. Furthermore, there are several means by which SC can disseminate to the various anatomic sites where it causes infection: aspiration of intra-oral secretions, direct spread to adjacent tissues or spread due to trauma or surgical interventions, and hematogenous spread [8]. The most common microbiological tool for identifying SC is culture, although multiplex polymerase chain reaction (PCR) has also been used. The specific antibiotics used to treat SC infections, and the duration of treatment, differed between case reports; the most commonly administered antibiotics were β-lactam/β-lactamase inhibitor combinations, carbapenems, clindamycin, ceftriaxone, metronidazole, and ciprofloxacin [7].
Here, we present a rare case of a female patient presenting with pyogenic spondylitis caused by SC with concomitant infection along the right-sided paravertebral and paracostal musculature.
Case Report
A 43-year-old female patient was admitted to the Department of Neurosurgery, University Hospital Center Osijek, due to lumbar pain and L3 radiculopathy. The symptoms had first presented ten days before admission, and the patient received symptomatic therapy at a local hospital. Despite therapy, the symptoms persisted, and a computed tomography (CT) scan of the whole spine was obtained on the day of admission at another hospital. Right-sided gas inclusions were present along the cervical, thoracic, and lumbar paravertebral muscles, with the largest collection at the level of the L3 and L4 vertebrae. Moreover, purulent spondylitis of the L3 vertebra was noted, with concomitant purulent epidural content and gas inclusion (Figure 1). Additionally, a CT scan of the brain revealed pneumocephalus: multiple gas inclusions along the left-sided brain hemisphere and at the skull base (Figure 2). According to her medical history, she had had a previous COVID-19 infection, and she had not received the SARS-CoV-2 vaccine. Additionally, she had a history of unspecified cardiomyopathy, for which the patient did not provide an explanation, nor were there detailed medical records pertaining to this comorbidity. Physical examination revealed key muscle strength of 3/5 in the right lower limb and 4+/5 in the left lower limb. No other chronic or familial diseases were identified in the interview or the medical records. The patient was disoriented upon admission, and her Glasgow Coma Scale (GCS) score was 12 (eyes 3, verbal response 4, motor response 5). Based on the clinical findings and radiologic imaging, she underwent urgent surgery, which was performed under local anesthesia in the standard fashion. The anesthesiologist was reluctant to perform endotracheal anesthesia due to the patient's intraoral condition, carious teeth, and the anticipated duration of the surgery, which lasted up to ten minutes. Upon skin incision and soft tissue preparation, the fascia was incised, and the subfascial space was opened. A drain was placed along the right-sided paravertebral muscles at the level of the L3 and L4 vertebrae and left in situ for the next three weeks. Despite her clinical condition, the patient was not hospitalized in the intensive care unit (ICU).
Figure 2. Non-enhanced axial CT images on admission revealed multiple intracranial gas inclusions (arrows). Porencephaly of the right-sided frontal horn of the ventricular system was noted years prior (star).
During the early postoperative period, intravenous metronidazole, vancomycin, and meropenem were administered as empirical antibiotic treatment at doses of 500 mg every 6 h, 1 g every 12 h, and 2 g every 8 h, respectively. Intraoperatively, a sample of purulent content was obtained for microbiological testing, which later revealed the presence of SC, identified with a Bruker MALDI Biotyper®. Based on the microbiological antibiotic sensitivity testing, treatment was modified to intravenous metronidazole, fluconazole, ampicillin, and cefepime, and the previously administered empirical antibiotics were discontinued. The blood culture obtained was also tested microbiologically and found to be sterile. The patient underwent an echocardiogram with unremarkable findings: the left ventricle was of normal size with an ejection fraction of 61%, and no clear signs of pericardial effusion or endocarditis were detected.
Laboratory testing at admission showed a normal leukocyte count, an increased level of C-reactive protein (CRP; 126.5 mg/L), and mildly increased levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), and gamma-glutamyl transferase (GGT) (Table 1). Regular laboratory examinations and antibiotic administration were carried out and evaluated throughout the hospital stay at our department, which lasted a total of thirty days. Due to highly increased D-dimer values, low-molecular-weight heparin at a dose of 40 mg was administered twice per day. On the eighteenth postoperative day, laboratory examination revealed a mildly elevated leukocyte count (10.9), a decreased CRP value (29.8 mg/L), normal procalcitonin, and decreased D-dimer values. Based on the improvement in the patient's clinical status and laboratory tests, the only antibiotic treatment for the remainder of the stay was ampicillin at a dose of 3 g every 4 h. The patient underwent Color Doppler ultrasound of the lower limbs, which excluded thrombosis, and an ultrasound of the abdomen, which excluded any pathologic findings. On the day of discharge, laboratory testing revealed a normal leukocyte count, elevated liver enzymes, and a decreased CRP level (16.6 mg/L). The patient's GCS upon discharge was 15, she was independent, and the radiculopathy-related symptom, i.e., the right-sided leg pain, had completely resolved. A follow-up CT scan of the brain revealed complete resorption of the previously described gas inclusions. Follow-up magnetic resonance imaging (MRI) of the whole spine revealed encapsulated purulent content along the thoracic medulla and cauda equina and a hyperintense signal on the T2-weighted image of the right-sided pedicle of the L3 vertebra (Figure 3). A follow-up laboratory examination one month after discharge revealed completely normal findings.
Discussion
As concluded by Kobo et al. [6], SC infections usually present as bacteremia, whereas abscess or empyema formation is infrequent. Pyogenic infections caused by SC are mostly described as lung infections, i.e., empyema [4,7,9]. Recently, Jiang et al. [4] conducted a retrospective study which revealed that SC infection is most prevalent in middle-aged patients, with a male predominance of 2:1. It should be noted that our patient was female. Additionally, the same study identified S. anginosus as the SAG species most commonly causing symptomatic infections. In terms of patient age and the onset of infection, patients between 35 and 54 years of age are exposed to the highest risk of infection (around 30%), as was the case with our patient [4]. Regarding other identified risk factors, our patient suffered from periodontal disease, which has previously been documented to increase the risk of infection by SAG [7]. In addition, she was not a smoker and denied frequent alcohol consumption.
As previously stated, SC is found in various body compartments and is not considered a pathogen per se. Clinically relevant infections caused by SC are, in the majority of cases, described in patients with weakened immune systems (patients undergoing immunosuppressive therapy or immunocompromised due to other medical conditions). SC has therefore been reported as a causative organism in a series of diseases, such as liver abscesses, empyema, and mediastinitis [9]. However, there has been some debate about whether SC is the single causative pathogen in these infections; some researchers argue that they are of polymicrobial etiology.
To the best of our knowledge, and based on a review of the published literature, similar cases of pyogenic paracostal and paravertebral abscesses have not yet been described. There are only a handful of case reports of pyogenic spondylitis caused by SC [10][11][12][13][14]. The most similar case in the published literature is that of Jin and Yin, who presented a patient diagnosed with pyogenic spondylitis caused by SC [10]. That patient was of a similar age to the patient in our case report; however, he was male and obese, with a body mass index (BMI) of over 40 kg/m². Similar to the patient we described, the clinical presentation was characterized by paraspinal tenderness at the level of the L3-L5 vertebrae, as well as reduced muscle strength of the lower limbs. Subsequent radiological imaging (MRI and CT scans) clearly showed abnormal findings at the aforementioned lumbar levels of the spine. In contrast to our case, SC was initially isolated from the blood culture, after which the diagnosis of pyogenic spondylitis caused by SC was finally confirmed.
After the systemic administration of antibiotics, the patient recovered, but he still suffered from weakened muscle strength and hypoesthesia. Another case of pyogenic spondylitis was described by Gangone et al. [11]. This was an elderly patient (72 years of age) whose main complaint was lower back pain without radiation to the lower extremities. Paraspinal tenderness was also pronounced, though there was no sensory or motor deficit. A CT-guided biopsy at the level of the lower thoracic vertebrae Th10-Th11 was used to confirm the diagnosis of spondylodiscitis caused by SC. However, in this case, SC was not the only isolated bacterium; there was a synergistic infection with Streptococcus viridans. This patient was treated with intravenous antibiotics and made a full recovery. Lim et al. [12] reported the case of a 14-year-old male with a one-year history of progressively worsening lower back pain, accompanied by a weight loss of more than 10 kg and loss of appetite. However, no motor or sensory deficit was revealed during the physical examination. Moreover, the patient did not have any of the previously mentioned risk factors (dental disease, surgeries, trauma, etc.). In contrast to the previously mentioned case reports, this patient did not have pronounced paraspinal tenderness over the affected area. The diagnostic method used was also a CT-guided biopsy, and a pure culture of SC was grown from the sample. After 6 weeks of intravenous antibiotic treatment, the patient fully recovered. Furthermore, Wang et al. [13] reported a case of recurrent vertebral osteomyelitis and psoas muscle abscess in a 48-year-old male patient with an atrial septal defect. In this case, there were two causative agents, SC and Fusobacterium nucleatum. The patient was successfully treated conservatively with antibiotics, and the likely source of infection was dental disease. A case of pyogenic spondylodiscitis caused by SC and attributed to poor oral health was also described by Potsios et al. [14]. This was a 64-year-old immunocompromised male patient presenting with sciatica and back pain, who was also effectively treated with antibiotics.
Interestingly, Dai et al. [15] conducted a retrospective study describing a series of eleven patients treated for spinal epidural abscesses (SEAs). Of these, only one case was identified as an SC infection, with concomitant back pain, limb numbness, weakness, and fever. In this study, the authors emphasized the importance of thorough clinical and physical examination, as well as appropriate radiological imaging (CT and MRI scans), comprehensive laboratory testing, and microbiological analyses (blood culture and others). This is of crucial importance in patients who present without the classic clinical triad (focal spinal pain, neurologic deficit, and fever), which was less specific in the group of patients with SEA [15]. Therefore, a correct diagnosis and a proper approach to treatment in such patients should be ensured by considering clinical symptoms together with laboratory and radiological findings.
In cases of spondylitis due to SC infection, systemic antibiotic treatment should be considered the first-choice treatment, and intravenous antibiotics should be administered as soon as possible; however, in cases refractory to analgesic and steroid treatment, surgical treatment should be considered. Surgical treatment is considered in cases of severe pain or of motor or sensory deficits due to radicular or spinal cord compression caused by abscesses, as was the case in our patient.
The main goals of surgical treatment are decompression of the nervous tissue, spinal stabilization based on the radiological findings, and evacuation and drainage of the abscess, which allows proper microbiological assessment and evaluation for further antibiotic treatment [16]. When antibiotic treatment is indicated, it should be administered according to bacterial culture results, although it may be initiated empirically depending on the condition of the patient. Even though many patients report alleviation of symptoms in less than 6 weeks, antibiotics should be administered for up to eight weeks, according to previously conducted studies [16].
The initial course of our patient's preoperative symptoms pointed to possible lumbar radiculopathy caused by degenerative spine disease (spinal stenosis), for which she underwent unsuccessful analgesic treatment. Beyond the overall clinical status of our patient, her unfavorable health, family, and socio-economic circumstances were also evaluated and considered. We emphasize that, in similar conditions that do not respond to conservative treatment, radiological scanning should be performed, after which, depending on the findings, prompt surgical and antibiotic treatment should be strongly considered.
Conclusions
Lower back pain and radiculopathy that do not respond well to conservative treatment should be further assessed by a physician as soon as possible. The diagnostic work-up should include extensive radiologic evaluation and, if spondylitis is suspected, imaging-guided or open surgical sampling for microbiological analyses should be performed. Acquiring adequate samples is essential in the diagnosis of spondylitis caused by SC, as it enables microbiological analyses (cultures) and antibiotic sensitivity testing. Empirical antibiotic treatment prior to microbiological testing should be avoided; it should be reserved for patients with rapid clinical worsening. The surgical approach should be considered a second-tier treatment, except in cases with clearly presented and radiographically confirmed pyogenic content in surgically favorable anatomic areas and concomitant symptoms.
Temperament and longitudinal changes in physical activity – the Northern Finland Birth Cohort 1966 Study
Background Insufficient physical activity is a risk factor for morbidity and premature mortality. This population-based birth cohort study investigated the cross-sectional and longitudinal associations between self-reported temperament at age 31 and self-reported leisure-time moderate to vigorous physical activity (MVPA) levels and changes thereof from the age of 31 to the age of 46 years. Methods The study population comprised 3,084 subjects (1,359 male and 1,725 female) from the Northern Finland Birth Cohort 1966. MVPA was self-reported at ages 31 and 46 years. Novelty seeking, harm avoidance, reward dependence, and persistence and their subscales were measured using Cloninger's Temperament and Character Inventory at age 31. Four temperament clusters were used in the analyses: persistent, overactive, dependent, and passive. Logistic regression was used to evaluate the relationship between temperament and MVPA. Results The persistent and overactive temperament profiles at age 31 were positively associated with higher MVPA levels both in young adulthood and in midlife, while the passive and dependent temperament profiles were associated with lower MVPA levels. The overactive temperament profile was associated with a decrease in MVPA levels from young adulthood to midlife among males. Conclusion A passive temperament profile characterized by high harm avoidance is associated with a higher risk of low MVPA level than other temperament profiles over the life cycle in females. The results suggest that temperament may play a role in determining the level and sustainability of MVPA. Individual targeting and intervention tailoring for promoting physical activity should consider temperament traits. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-15303-9.
Background
Insufficient physical activity (PA) is one of the leading risk factors for non-communicable diseases such as cardiovascular diseases, cancer, and diabetes, and has a negative effect on mental health and quality of life. It is also the primary cause of premature death worldwide [1][2][3][4]. One-fourth of the world's population does not reach the minimum recommended levels of PA [5]. Several studies have confirmed the association between moderate to vigorous physical activity (MVPA) and health [6,7]. Improving PA in general is a public priority, but increasing individuals' PA levels is challenging. PA is a complex behavior that can change across the lifespan. Researchers have suggested that health behavior counselling should be tailored according to individual characteristics because the same intervention may not be effective for everyone [8]. Identifying the individual characteristics associated with adult PA could help develop tailored interventions for those who need it.
Human behavior is a multifaceted interplay of several components, one of which is personality. Temperament is a foundation of personality that develops early in life [9] and has been defined as the personality components that are inherited [10,11], emotionally based, and developmentally stable [12]. Temperament has been described in terms of habits and skills that are elicited by simple stimuli perceived by the physical senses; it reflects the way in which one approaches and reacts to the world and influences one's behavior and the way in which one interacts with others [11]. Understanding one's temperament can help to understand how one reacts and relates to the environment [10].
One way to assess temperament is the Temperament and Character Inventory (TCI), which is based on Cloninger's psychobiological model and divides temperament into four genetically independent traits: novelty seeking, harm avoidance, reward dependence, and persistence [13]. Novelty seeking is the tendency to seek excitement, which can manifest itself as imprudent decision-making, rapid loss of temper, extravagance, and active avoidance of frustration [13]. Harm avoidance is characterized by excessive worrying, pessimism, shyness, and being fearful, suspicious, and easily fatigued. Reward dependence is the tendency to respond intensely to reward signals, especially verbal signals of social approval and support. Persistence is the tendency to continue working on something notwithstanding fatigue or frustration [13]. Based on this model, the TCI has been widely used to measure individual differences in these four main temperament traits [13].
Previous studies using the TCI have found that low novelty seeking and high harm avoidance scores are associated with low MVPA levels and obesity [14], and that exercise dependence is positively associated with high harm avoidance and persistence scores [15]. Several studies have used the five-factor model (FFM) to investigate the association between personality and PA [16,17]. In line with studies using the TCI, studies using the FFM have shown that higher neuroticism scores are associated with lower PA levels and more sedentary lifestyles, whereas higher extraversion, openness, and conscientiousness scores are associated with higher PA levels [16][17][18].
To our knowledge, no large population-based longitudinal studies extending to midlife have combined information on temperament with prospective data on leisure-time MVPA. Most previous studies have used small sample sizes and cross-sectional data, which makes it difficult to draw conclusions about the association between temperament and the stability of PA levels. In addition, the growing evidence on temperament indicates that prediction and understanding can sometimes be improved by looking at combinations of temperament attributes rather than temperament traits in isolation [19]. Therefore, we aimed to evaluate the cross-sectional and longitudinal associations between temperament and leisure-time MVPA levels from young adulthood (age 31) to midlife (age 46) using Cloninger's temperament traits and clusters in a large population-based birth cohort. Based on the previous literature [14,[16][17][18]], we hypothesized that those with passive temperament profiles, especially those who score high in harm avoidance, would be physically less active than those with other temperament profiles.
Study design and participants
The study population was based on the Northern Finland Birth Cohort (NFBC) 1966 [20,21]. The NFBC1966 is an ongoing longitudinal birth cohort of individuals whose expected dates of birth fell in 1966. The cohort members have been carefully monitored from the prenatal period onwards with interviews, postal questionnaires, and clinical health measurements. The subjects and their parents provided written informed consent for the study. Personal identity information was encrypted and replaced with identification codes to ensure full anonymity. This follow-up study was approved by the Ethics Committee of the Northern Ostrobothnia Hospital District, Oulu, Finland (94/2011), and was conducted in accordance with the Declaration of Helsinki. For the cross-sectional analyses, we included data from 3,084 subjects who attended the clinical examination in the 31-year follow-up study [22] and completed the MVPA and temperament questionnaires, and from 2,985 subjects who had completed the baseline temperament and MVPA questionnaires in the 46-year follow-up study. For the longitudinal analysis, we included data from 2,963 participants who completed the baseline temperament and MVPA questionnaires at ages 31 and 46 years (Fig. 1).
Measures
Temperament
Temperament was measured using Cloninger's TCI, which includes four genetically independent temperament traits and their subscales: novelty seeking (exploratory excitability, impulsiveness, extravagance, and disorderliness), harm avoidance (worry/pessimism, fear of uncertainty, shyness, and fatigability), reward dependence (sentimentality, attachment, and dependence), and persistence. The character items of the TCI were not used in the surveys. The TCI (version IX) consists of 240 true/false items, including 107 temperament items (35 harm avoidance, 40 novelty seeking, 24 reward dependence, and 8 persistence items). Temperament trait summary scores were calculated for the four main temperament traits.
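As a simple illustration of how trait summary scores can be derived from dichotomous items, the sketch below sums the trait-keyed responses per scale. The item-to-scale key shown is purely hypothetical (the real TCI key assigns specific items to each trait and reverse-keys some of them), so this is not the official scoring procedure.

```python
# Hedged sketch: summing true/false item responses into temperament trait summary scores.
def tci_summary_scores(responses, scale_items):
    """responses: dict item_number -> bool; scale_items: dict scale -> list of item numbers."""
    # Note: real TCI scoring also reverse-keys some items; that detail is omitted here.
    return {scale: sum(bool(responses.get(item, False)) for item in items)
            for scale, items in scale_items.items()}

# Hypothetical key fragment, used only to demonstrate the call.
key = {"novelty_seeking": [1, 5, 9], "harm_avoidance": [2, 6],
       "reward_dependence": [3, 7], "persistence": [4, 8]}
answers = {1: True, 2: False, 3: True, 4: True, 5: False, 6: True, 7: False, 8: True, 9: True}
print(tci_summary_scores(answers, key))
```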
Self-reported leisure-time physical activity
At the ages of 31 and 46 years, the weekly frequency of MVPA was enquired about with the following question: "How often do you participate in MVPA during your leisure time?" [23]. MVPA was described as inducing at least some sweating and breathlessness. PA frequency had six response options for both intensity levels: (1) once a month or less often, (2) two to three times a month, (3) once a week, (4) two to three times a week, (5) four to six times a week, and (6) daily. The duration of MVPA was enquired about with the following question: "How long at a time do you engage in MVPA during leisure time?" [23]. PA duration at a time had the following response options for both intensity levels: (1) not at all, (2) less than 20 minutes, (3) 20-39 minutes, (4) 40-59 minutes, (5) 60-90 minutes, and (6) more than 90 minutes [23]. The weekly amount of MVPA was calculated by multiplying its frequency by its duration. The question on the frequency of MVPA has been shown to be associated with physical performance at the age of 31 [24].
The participants were divided into two groups according to weekly MVPA (minutes per week): (1) low MVPA (0-149 min per week; not meeting the PA recommendations) and (2) high MVPA (more than 150 min per week; meeting the PA recommendations) [7]. The participants were then divided into four groups: (1) stable low (low MVPA level at the ages of both 31 and 46 years), (2) increased (low MVPA level at 31 but high at 46), (3) decreased (high MVPA level at 31 but low at 46), and (4) stable high (high MVPA level at both 31 and 46).
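A minimal sketch of the MVPA derivation and grouping described above is given below. The numeric equivalents assigned to the frequency and duration response options (weekly frequencies and interval midpoints) are assumptions made for illustration; the published analysis defines its own mapping.

```python
# Illustrative mapping of questionnaire options to weekly MVPA minutes and trajectory groups.
FREQ_PER_WEEK = {1: 0.25, 2: 0.625, 3: 1, 4: 2.5, 5: 5, 6: 7}   # frequency options 1-6 (assumed)
DURATION_MIN = {1: 0, 2: 10, 3: 30, 4: 50, 5: 75, 6: 100}        # duration options 1-6 (assumed midpoints)

def weekly_mvpa_minutes(freq_option, duration_option):
    return FREQ_PER_WEEK[freq_option] * DURATION_MIN[duration_option]

def mvpa_level(minutes):
    # 150 min/week taken as the recommendation threshold; boundary handling is an assumption.
    return "high" if minutes >= 150 else "low"

def trajectory_group(minutes_31, minutes_46):
    levels = (mvpa_level(minutes_31), mvpa_level(minutes_46))
    return {("low", "low"): "stable low", ("low", "high"): "increased",
            ("high", "low"): "decreased", ("high", "high"): "stable high"}[levels]

# Example: 2-3 times a week (option 4) for 60-90 minutes (option 5) at both ages.
m31 = weekly_mvpa_minutes(4, 5)
m46 = weekly_mvpa_minutes(4, 5)
print(m31, trajectory_group(m31, m46))
```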
Other variables
The following variables were considered as potential confounders and included as covariates in the analyses of temperament and physical activity: Severity of anxiety and depression at the age of 31, perceived health at the age of 31, the level of education, smoking status, alcohol consumption, marital status, employment status, and perceived health at the age of 46.
The Hopkins Symptom Checklist (HSCL-25) was used to evaluate the presence and severity of anxiety and depression symptoms during the previous week. It includes 25 items, 10 of which are related to anxiety and 15 to depression [25]. The HSCL-25 score was calculated by dividing the total score (the sum of the item scores) by the number of items answered and ranged from 1 to 4. The total score was calculated only if no more than five of the 25 values were missing. A cut-off of 1.75 was considered to suggest depressive symptoms [26]. The HSCL-25 was further classified as (1) major symptoms and (2) no or minor symptoms.
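The HSCL-25 scoring rule described above can be summarised in a few lines of code; the sketch below follows the stated rule (mean of answered items, computed only when at most five items are missing, with the 1.75 cut-off), although the handling of the exact cut-off boundary is an assumption.

```python
# Minimal sketch of HSCL-25 scoring as described in the text.
def hscl25_score(item_scores):
    """item_scores: list of 25 values in 1..4, with None for missing answers."""
    answered = [s for s in item_scores if s is not None]
    if len(item_scores) - len(answered) > 5:
        return None                         # too many missing values: no total score
    return sum(answered) / len(answered)

def symptom_class(score, cutoff=1.75):
    if score is None:
        return None
    # Whether the cut-off itself counts as 'major symptoms' is an assumption.
    return "major symptoms" if score >= cutoff else "no or minor symptoms"

example = [1, 2, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1, 1, 2, 1, 1, None, 1, 2, 1, 1]
print(symptom_class(hscl25_score(example)))
```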
Education level at the age of 46 years was classified into the following categories: (1) no professional education, or vocational or college-level education, and (2) university or polytechnic degree.
Smoking status at the age of 46 was classified as (1) current smoker and (2) non-smoker, and employment status was classified as (1) other and (2) employed. Marital status was classified into the following categories: (1) other and (2) married or cohabiting. Alcohol consumption at the age of 46 was classified as (1) high-risk drinker and (2) abstainer or moderate drinker [27].
At the ages of 31 and 46 years, perceived health was assessed with the question "How would you describe your overall health at the moment?", with five response options (excellent, good, fair, poor, and very poor). The options were further categorized into two classes: (1) other (fair, poor, or very poor) and (2) good (excellent or good) [28].
Statistical analysis
The characteristics of the study population and physical activity at 31 and 46 years of age were expressed as frequency distributions, as means and standard deviations (SD) for normally distributed variables, and as medians and interquartile ranges for variables with skewed distributions. The independent t-test for continuous variables with a normal distribution, the Mann-Whitney U test for variables with skewed distributions, and the Chi-square test for categorical variables were used to examine differences between the genders. Cross-sectional associations between temperament clusters and the level of MVPA were evaluated using the Kruskal-Wallis test, and pairwise comparisons were made if the overall p-value was significant. For illustrative purposes, the means and standard deviations of MVPA minutes per week at the ages of 31 and 46 years were calculated.
Cluster analysis was performed to form groups of individuals who were as similar to each other as possible while being as different as possible from individuals in other groups in terms of TCI traits. The final cluster model included four distinct clusters according to the temperament trait scores. Cluster 1 individuals were characterized by high persistence (persistent), Cluster 2 individuals had high novelty seeking and low harm avoidance scores (overactive), Cluster 3 individuals had the highest attachment and dependence scores (dependent), and Cluster 4 individuals had high harm avoidance and low novelty seeking scores (passive). Clustering was performed separately for males and females. Star plots describing the clusters for males and females are shown in Fig. 2a and b. Each plot shows the average score of each cluster on each of the twelve TCI subscales, with lines closer to the middle of the plot representing lower scores and lines closer to the edge representing higher scores. The details of the clustering method used in this study have been described previously [29].
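The clustering itself follows the method described in [29], which is not reproduced here. Purely as an illustration of the general idea (grouping individuals by standardised TCI subscale scores, separately for males and females, into four clusters), the sketch below uses k-means; the actual algorithm, settings, and column names may differ from those used in the study.

```python
# Hedged illustration: four temperament clusters from standardised TCI subscales, by sex.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def temperament_clusters(df, subscale_cols, n_clusters=4, seed=0):
    """Return cluster labels fitted separately for males and females."""
    labels = pd.Series(index=df.index, dtype="Int64")
    for sex, group in df.groupby("sex"):
        z = StandardScaler().fit_transform(group[subscale_cols])
        km = KMeans(n_clusters=n_clusters, n_init=20, random_state=seed).fit(z)
        labels.loc[group.index] = km.labels_
    return labels

# Example with synthetic data for the twelve TCI subscales.
rng = np.random.default_rng(1)
cols = [f"subscale_{i}" for i in range(1, 13)]
demo = pd.DataFrame(rng.normal(size=(200, 12)), columns=cols)
demo["sex"] = np.where(rng.random(200) < 0.5, "male", "female")
demo["cluster"] = temperament_clusters(demo, cols)
print(demo["cluster"].value_counts())
```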
To evaluate how temperament clusters were associated with longitudinal changes in MVPA levels from young adulthood to midlife, multinomial logistic regression analyses were carried out. Crude and adjusted odds ratios (OR) were used to quantify the strength of the associations between these variables. When predicting longitudinal changes in MVPA levels, stable low was used as the reference. The presence and severity of anxiety and depression symptoms at the age of 31 years, perceived health at the age of 31 years, level of education, smoking status, alcohol consumption, marital status, and perceived health at the age of 46 were treated as covariates. To allow the results to be compared with other studies, we also performed the analyses using the scores of the four main TCI traits. First, the associations between TCI scores and MVPA variables were analyzed with Spearman's rank correlation test. To examine the predictive properties of TCI scores for the longitudinal stability of and changes in MVPA, the TCI scores were transformed to z-scores, and multinomial logistic regression analyses were conducted. Crude and adjusted ORs with 95% confidence intervals (CI) were calculated.
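A hedged sketch of the trait-score analysis described above is given below: TCI scores are z-scored, the MVPA trajectory groups are coded with "stable low" as the reference outcome, and odds ratios with confidence intervals are taken from a multinomial logistic model. The analysis in the paper was run in SPSS and included the listed covariates; here statsmodels is used, covariates are omitted, and the data are synthetic.

```python
# Illustrative multinomial logistic regression of MVPA trajectory groups on z-scored TCI traits.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def mvpa_trajectory_model(df, trait_cols):
    outcome_codes = {"stable low": 0, "increased": 1, "decreased": 2, "stable high": 3}
    y = df["mvpa_group"].map(outcome_codes)                        # 0 = reference category
    X = df[trait_cols].apply(lambda s: (s - s.mean()) / s.std())   # z-scores
    X = sm.add_constant(X)
    fit = sm.MNLogit(y, X).fit(disp=False)
    odds_ratios = np.exp(fit.params)      # one column per non-reference outcome
    ci = np.exp(fit.conf_int())
    return fit, odds_ratios, ci

# Synthetic demonstration data (column names are assumptions).
rng = np.random.default_rng(2)
demo = pd.DataFrame(rng.normal(size=(400, 4)),
                    columns=["novelty_seeking", "harm_avoidance", "reward_dependence", "persistence"])
demo["mvpa_group"] = rng.choice(["stable low", "increased", "decreased", "stable high"], size=400)
fit, ors, ci = mvpa_trajectory_model(demo, demo.columns[:4].tolist())
print(ors.round(2))
```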
All analyses were conducted separately for males and females since there were differences in physical activity levels and TCI-scores between genders. The level of statistical significance was set to p < .05, and all tests were two-tailed. All statistical analyses were performed using IBM SPSS Statistics version 24.
Results
The characteristics of the study population in midlife, and the means and standard deviations of the temperament traits (TCI scores) across the clusters at age 31 in males and females, are presented in Table 1 and Additional file 1. In this study, the participants who continued to the 46-year follow-up study and returned the PA questionnaires were more often female than male (56% vs. 44%; p < .046).
Males who continued to the 46-year follow up study perceived their health more often as very good or good than fair, poor or very poor (66% vs. 45%; p = .040). Females had higher mean (SD) scores than males in novelty seeking (20.7 [1.7]; p < .001). Males reported higher MVPA (minutes in a week) than females at both time points (Table 1). Both genders showed a slight increase in MVPA time between the two time points.
The self-reported MVPA levels among males and females in the four TCI clusters at the ages of 31 and 46 years are presented in Table 2. The MVPA levels differed between the TCI clusters, with the passive participants having the lowest PA level at both time points. Post hoc analysis revealed a significant difference in MVPA level between the overactive and passive clusters in both genders at both time points. Post hoc comparisons also showed that females in the persistent and overactive clusters reported higher MVPA levels than those in the dependent and passive clusters.
At the age of 31 years, the rates of engaging in more than 150 min of MVPA per week were 25%, 34%, 24%, and 22% among males (χ2 [3, N = 1359] = 12.60, p = .006) and 24%, 28%, 16%, and 15% among females (χ2 [3, N = 1725] = 29.29, p < .001) in the persistent, overactive, dependent, and passive clusters, respectively. At the age of 46 years, the rates of engaging in more than 150 min of MVPA per week were 32%, 31%, 28%, and 26% among males (χ2 [3, N = 1303] = 3.73, p = .292) and 37%, 33%, 25%, and 22% among females (χ2 [3, N = 1682] = 30.04, p < .001) in the persistent, overactive, dependent, and passive clusters, respectively.

Associations between temperament clusters and changes in leisure-time physical activity from age 31 to age 46

The frequencies of the TCI clusters and the longitudinal changes in MVPA levels from age 31 to age 46 are presented in Table 3. More than half of the participants engaged in less than 150 min of MVPA per week both in young adulthood and in midlife. Table 4 shows the associations between the TCI clusters and longitudinal changes in MVPA levels according to the logistic regression analysis. A stable high MVPA level from young adulthood to midlife was more common among males (OR = 1.76, 95% CI: 1.19-2.59, p = .004) and females (OR = 2.12, 95% CI: 1.50-3.01, p < .001) in the overactive cluster and among females (OR = 1.75, 95% CI: 1.23-2.49, p = .002) in the persistent cluster. It was less common among females (OR = 0.44, 95% CI: 0.28-0.68, p < .001) in the dependent cluster and among females (OR = 0.42, 95% CI: 0.26-0.68, p < .001) in the passive cluster. These associations remained statistically significant after adjusting for the presence and severity of anxiety and depressive symptoms at the age of 31 years, perceived health at the age of 31 years, level of education, smoking status, alcohol consumption, employment status, marital status, and perceived health at the age of 46.

A decreasing MVPA level from young adulthood to midlife was more common among males (OR = 1.76, 95% CI: 1.22-2.55, p = .003) in the overactive cluster. An increasing MVPA level was more common among females in the persistent cluster (OR = 1.64, 95% CI: 1.25-2.26, p < .001) and less common among females in the passive cluster (OR = 0.65, 95% CI: 0.47-0.89, p = .007). These associations remained statistically significant after adjusting for the presence and severity of anxiety and depressive symptoms at the age of 31 years, perceived health at the age of 31 years, level of education, smoking status, alcohol consumption, employment status, marital status, and perceived health at the age of 46 (Table 4). A decreasing MVPA level from young adulthood to midlife was less common among males in the passive cluster (OR = 0.64, 95% CI: 0.42-0.98, p = .039); however, this association did not remain significant after the model was adjusted for the presence and severity of anxiety and depressive symptoms at the age of 31 years, perceived health at the age of 31 years, level of education, smoking status, alcohol consumption, employment status, marital status, and perceived health at the age of 46.
Associations between temperament traits and leisure-time physical activity at ages 31 and 46
The correlations between baseline temperament traits (TCI scores) and MVPA levels at the ages of 31 and 46 years are shown in Additional file 2. Lower harm avoidance scores were associated with longer MVPA times in both genders at both time points. Higher reward dependence scores were associated with longer MVPA times among males at the ages of 31 (rho = 0.07; p = .014) and 46 (rho = 0.11; p < .001). Higher persistence scores were associated with longer MVPA times in both genders at the ages of 31 and 46. Higher novelty seeking scores were associated with longer MVPA times at the age of 31 only in females (rho = 0.07; p = .006). The associations between TCI scores and longitudinal changes in leisure-time MVPA level from the age of 31 to the age of 46 years according to the logistic regression analysis are presented in Additional file 3. Higher novelty seeking scores were associated with decreasing MVPA levels among males (OR = 1.25, 95% CI: 1.06 − 1.47, p = .009) and remained significant after adjusting for presence and severity of anxiety and depression at the age of 31 years, perceived health at the age of 31 years, level of education, smoking status, alcohol consumption, employment status, marital status, and perceived health at the age of 46.
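The coefficients above are reported as rho, which suggests Spearman rank correlations; the text does not state the estimator explicitly, so the following is given only as a reminder of the conventional definition, not as the study's documented method:

\[
\rho_s = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\,(n^{2} - 1)},
\]

where d_i is the difference between the ranks of participant i's TCI trait score and MVPA minutes and n is the number of participants; this closed form holds in the absence of ties, otherwise rho is computed as the Pearson correlation of the rank-transformed values.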
Lower harm avoidance scores were associated with decreasing MVPA levels in both genders, and these associations remained significant after adjusting for the above-mentioned covariates. Lower harm avoidance scores were also associated with stability of high MVPA levels in both genders, and this association remained significant after adjusting for covariates in females. Higher reward dependence scores were associated with increasing MVPA levels among males (OR = 1.25, 95% CI: 1.11 − 1.44, p = .007), and this association remained significant after adjusting for covariates. Higher persistence scores were associated with increasing MVPA levels among females (OR = 1.26, 95% CI: 1.11 − 1.44, p < .001), and with stability of high MVPA levels among males (OR = 1.38, 95% CI: 1.16 − 1.65, p < .001) and females (OR = 1.36, 95% CI: 1.15 − 1.61, p < .001), but also with decreasing MVPA levels among females (OR = 1.27, 95% CI: 1.08 − 1.50, p = .004). The mean TCI scores across the PA groups according to changes in leisure-time MVPA from the age of 31 to the age of 46 years are presented in Additional file Table 4.
Discussion
This population-based birth cohort study evaluated the cross-sectional and longitudinal associations between temperament and leisure-time MVPA levels from young adulthood (age 31) to midlife (age 46) using Cloninger's temperament traits and clusters and a large data set from the NFBC1966. The results suggest that temperament may play a role in determining the level and sustainability of MVPA.
As expected, those with a passive temperament profile (high in harm avoidance and low in novelty seeking) reported lower levels of leisure-time MVPA at both time points, especially when compared to those with an overactive profile (high in novelty seeking and low in harm avoidance). Stability of high MVPA levels was less common among males and females with a dependent temperament profile (highest attachment and dependence) and among females with a passive temperament profile. Moreover, as expected, lower harm avoidance scores were associated with longer MVPA times. Stability of a high MVPA level from young adulthood to midlife was more common among males and females with an overactive temperament profile (high in novelty seeking and low in harm avoidance) and among females with a persistent temperament profile. However, decreasing MVPA levels were associated with an overactive temperament profile (high novelty seeking, low harm avoidance) among males.
Several studies have investigated the association between PA and personality as measured by the FFM [17,18]. Cloninger's psychobiological model has been shown to have important similarities to the FFM [30]: Harm avoidance has been found to be positively related to neuroticism; novelty seeking has been shown to be positively related to extraversion; persistence has been shown to be positively related to conscientiousness; and reward dependence has been shown to be positively related to agreeableness and extraversion [31,32].
Our findings are consistent with previous studies suggesting that higher harm avoidance scores are associated with lower MVPA levels [14] and lower overall PA levels [17,18,33]. High harm avoidance has also been associated with anxiety [34], depression [35], pain responsiveness [36] and weight gain [37]. As a tendency to feel negative emotions, such as worry, insecurity, and shyness, in social situations [38], harm avoidance may inhibit the willingness to cope with unfamiliar situations and reduce exposure to opportunities to be physically active [17,39]. Approaches that consider past PA experiences and address individual barriers to PA might improve PA counselling outcomes.
Reward dependence is characterized by social attachment. In our study, higher reward dependence scores were associated with increasing MVPA levels among males but also with low MVPA levels in young adulthood and midlife in both genders. A previous study reported that individuals with a dependent temperament profile (highest attachment and dependence) scored low on psychosis proneness scales as well as on the hypomania personality scale, which is likely to reflect low energy levels [29]. It has been suggested that individuals who score high in reward dependence may benefit from group interventions, as they may be especially suited to using social support and to being motivated by social norms and group pressure [37].
Our findings are in line with a previous study indicating that a conscientious temperament (the tendency to be organized and disciplined), which is conceptually similar to the persistence temperament trait of the TCI, was positively associated with higher MVPA levels [18]. It has been suggested that for individuals who score high in this temperament trait, the motivation for PA tends to have internal rather than external sources [40]. Persistent people tend to be industrious and hard-working despite fatigue and frustration, and they intensify their efforts in response to anticipated rewards [13]. In our study, individuals with a persistent profile had the highest PA questionnaire response rates.
Our results are also in line with previous findings suggesting that extraversion and sensation seeking, traits similar to novelty seeking, are associated with higher vigorous PA levels [33,41]. In our study, individuals with an overactive temperament profile (high in novelty seeking and low in harm avoidance) had higher MVPA levels than individuals with a passive temperament profile at the ages of both 31 and 46 years. However, a decrease in MVPA from young adulthood to midlife was associated with novelty seeking at age 31 among males. The link between novelty seeking and MVPA may be due to a heightened tendency to seek strong sensation stimuli and excitement, which might be fulfilled through MVPA [11,17] and participation in high-risk sports [42]. Individuals who score higher than average in novelty seeking and average in the other traits have been characterized as impulsive, excitable, extravagant, and disorderly. They are readily engaged in new interests and activities, but they tend to neglect details and are quickly distracted and bored. They are also easily provoked to prepare for flight or fight [43]. It has also been reported that higher novelty seeking is associated with several unhealthy lifestyles and risky behaviors such as smoking [44], eating disorders [45], alcohol use [44,46], or even drug abuse [47]. Nevertheless, a previous study has reported that novelty seeking decreases with age [48]. Therefore, the fall in novelty seeking can also appear as a decrease in leisure-time MVPA.
The main strengths of this study are its population-based prospective birth cohort design with a large sample size, relatively high response rates, and a long follow-up period. The participants were born in the same geographic area and were of the same age. Certain limitations should also be acknowledged. The temperament identification was based on self-administered questionnaires, and no single questionnaire captures all aspects of temperament. Moreover, our results are limited to the TCI, which has, however, been widely used to measure individual differences in these four main temperament traits. The TCI clusters are based on the NFBC1966 data. However, advantages of using temperament clusters are that in many cases the clusters are more strongly associated with dependent variables than the individual temperament traits, and clusters comprised of multiple temperament traits capture more information about individual differences and risk profiles [29,49]. Clustering also allows for a simpler data structure and fewer statistical tests [29].
It is noteworthy that temperament may influence the discrepancy between self-reported and accelerometer-measured PA. A previous study reported that extraversion, a trait similar to novelty seeking, was associated with reporting a higher PA level compared to accelerometer-measured PA in college students [17]. It has also been reported that females with an overactive profile (high in novelty seeking and low in harm avoidance) may tend to underestimate their weight in postal questionnaires compared to measurements at the physician's office [29]. This suggests that caution should be exercised when interpreting self-reported positive lifestyles and health-related variables [29]. Finally, the information on the frequency and intensity of PA at the ages of 31 and 46 years was based on self-administered questionnaires and is thus susceptible to response bias [50]. Measuring physical activity with an accelerometer would have strengthened the validity of the study [51]. Unfortunately, we did not have accelerometer measurements available at the age of 31 years. Although PA questionnaires tend to overestimate true PA levels, they can provide essential information about PA behavior or perceived time spent in specific activities in adults and are used for grouping people into categories based on their physical activity [39,52].
Conclusion
This population-based birth cohort study provides new evidence on the relationship between temperament and longitudinal changes in leisure-time MVPA from young adulthood to middle age. A stable high MVPA level was less common among males and females with a dependent temperament profile (highest attachment and dependence) and among females with a passive temperament profile (high harm avoidance and low novelty seeking). A stable high MVPA level from young adulthood to midlife was more common among males and females with an overactive temperament profile (high in novelty seeking and low in harm avoidance) and among females with a persistent temperament profile. Identifying individuals with insufficient PA may help devise personalized interventions for promoting PA at the individual and population levels. Individuals high in novelty seeking could benefit from personalized life and PA counselling that takes their temperament into account, as well as from feedback strategies to help change their behavior. Information about temperament can also guide health care professionals in identifying individual barriers to and enablers of PA and in personalizing behavior counselling. Future therapeutic and PA approaches aiming to prevent physically inactive lifestyles should also consider temperament traits.
Data availability
The NFBC data is available from the University of Oulu, Infrastructure for Population Studies. Permission to use the data can be requested for research purposes via an electronic material request portal. In the use of the data, we follow the EU general data protection regulation (679/2016) and the Finnish Data Protection Act. The use of personal data is based on cohort participants' written informed consent at their latest follow-up study, which may cause data use limitations. Please contact the NFBC project center (nfbcprojectcenter@oulu.fi) and visit the cohort website (www.oulu.fi/nfbc) for more information.
Declarations

Ethics approval and consent to participate
The follow-up study was approved by the Ethical Committee of the Northern Ostrobothnia Hospital District, Oulu, Finland (94/2011), and it was conducted in accordance with the Declaration of Helsinki. The subjects and their parents provided written informed consent for the study. Personal identity information was encrypted and replaced with identification codes to ensure full anonymity.
Consent for publication
Not applicable.
Filaggrin and atopic march
There is a growing body of experimental, genetic and clinical evidence that atopic dermatitis is a pre-condition for the later development of other atopic diseases such as asthma, food allergy and allergic rhinitis. Atopic dermatitis is a heterogeneous, recurrent childhood disease that is also present in adulthood. It is increasingly recognised to have systemic features and is characterized by dysregulation of both the immune response and skin barrier integrity and function. Filaggrin, among other proteins, plays a significant role in maintaining the protective function of the skin barrier, in particular its pH, hydration and antimicrobial functions. Filaggrin is a multifunctional, histidine-rich, insoluble protein. The lack of filaggrin is associated with various cutaneous (e.g. ichthyosis vulgaris, allergic contact dermatitis) and non-cutaneous (e.g. diabetes, inflammatory conditions of the gastrointestinal tract) diseases and may result from genetic and immunological factors combined with environmental factors. In this review we summarise recent findings on the role of filaggrin in atopic dermatitis and in the other diseases that participate in the atopic march.
Introduction
The term atopy, which was first coined by Coca and Cooke in 1923, represents immunoglobulin (Ig) E-mediated type I hypersensitivity reactions (1). Childhood atopic diseases typically develop at barrier surfaces such as the skin and the respiratory and gastrointestinal systems, showing a high degree of comorbidity. The prevalence of atopic diseases, including atopic dermatitis (AD), asthma, allergic rhinitis (AR), and food allergies (FA), has increased in recent decades, and these diseases currently affect up to 20% of the population worldwide (2). They seem to be closely related. The process by which several atopic diseases are interconnected throughout life, in this case the "progression" of AD into asthma and AR, is an epidemiological phenomenon commonly referred to as the "atopic march" (3). The term refers to "time progression" from AD to asthma and AR during childhood, suggesting that AD is an "entry point" for subsequent atopic diseases (4-6) (Figure 1). However, there are opinions that AD is not a causal factor for the atopic march, and that the sequence of events does not always have to be the same (7). Belgrave et al., in addition to some other researchers, question the paradigm of the atopic march, considering it too simplified and in need of revision (8,9). They explain this by the fact that most of the studies of the atopic march have been based on cross-sectional statistical analysis of the general population, not taking into account the heterogeneity of the chronology of the development of symptoms. They hypothesized that children with AD who later develop asthma and AR may represent a specific phenotype. Although mentioned in the context of the atopic march, evidence relating AD to FA is unclear and deficient (10). According to Hill et al., atopic diseases have some common multiple genetic and environmental predisposing factors, share immune characteristics of one or more allergen-specific Th2 responses, and include type 2 effector phases in which specific IgE, granulocyte activation and other inherited factors occur (11). Most of the investigations of the causal nature of this "progression" are aimed at seeking evidence that supports the premise that AD of early childhood encourages the development of FA and respiratory allergies via systemic sensitization resulting from impaired skin barrier function. Thus, the hypothesis that the main cause of atopic diseases is a defect in epithelial barrier integrity is accepted (12,13). The most investigated causes of the epidermal skin barrier impairment in AD, as the initiator of the atopic march, are a lack of filaggrin (FLG) associated with a decrease in ceramide and significant activation of epidermal proteases (14).
In this review, we focus on the FLG molecule and its importance in maintaining normal skin barrier function and on current knowledge of the significance of FLG deficiency in diseases associated with the atopic march. An overview of recent knowledge of the nature of these conditions and their association with the role of FLG will be given. Appropriate scientific papers were selected in PubMed using the following headings and keywords, and their combinations: filaggrin, profilaggrin, filaggrin mutations, atopic dermatitis, eczema, epidermal dysfunction, allergic rhinitis, food allergies and asthma. The search included epidemiological, genetic and clinical examinations, mainly on patients of various ages, regardless of ethnic group.
Atopic march development
Causes of skin barrier dysfunction can be intrinsic and extrinsic. Of the intrinsic causes, apart from mutation of the gene encoding FLG, mutations of numerous other genes have been investigated, such as the SPINK5 gene encoding the serine protease inhibitor Kazal-type 5 and the CDSN gene encoding corneodesmosin (15,16). Extrinsic causes are scratching, environmental irritation, microbes, viruses, protein allergens, etc. The reactions that follow skin barrier impairment and precede Th2 inflammation and IgE synthesis involve various cytokines (e.g. interleukin (IL)-25, IL-33, IL-4, IL-13, IL-5, etc.) and various
cell types (e.g. basophilic and eosinophilic granulocytes, dendritic cells, mast cells and other types of cells). However, TSLP (thymic stromal lymphopoietin), a type I cytokine produced by keratinocytes through the protease-activated receptor-2 (PAR-2) mediated nuclear factor kappa B (PAR-2/NF-kB) pathway, has the greatest significance. This cytokine has an essential role in the initiation of allergic inflammation in the skin. It activates Langerhans cells that promote differentiation of naive T-cells into Th2 cells in lymph nodes (16,17). In addition, TSLP reduces FLG expression in human skin and acts as a sensory neuronal activator, resulting in itching (18,19). Some authors consider that TSLP in the circulation can be a marker of lung responses to allergens. TSLP could also be an important therapeutic target for reducing asthma and AR in children with AD (20).
Genetics plays a greater role than environmental factors in the progression from AD to AR and from AD to asthma (21). In addition, the relationship between AD and asthma, and between asthma and AR, does not depend on common environmental factors of early life. Therefore, various genes have been studied, such as genes responsible for epidermal integrity (FLG, CLD, etc.), genes responsible for the functioning of the immune system (e.g. IL-13, IL-33, etc.), genes participating in the nonspecific immune response (e.g. glutathione-S-transferase) and genes responsible for chronic tissue inflammation (e.g. the COL29A1 gene encoding collagen and the ADAM33 gene expressed in fibroblasts and smooth muscle cells) (5,10). Despite differing opinions on the paradigm of the atopic march, it is currently accepted that the primary disorder is the disturbed structure of the stratum corneum (SC) or barrier function. Thereafter, increased absorption of allergens (allergic sensitization) predisposes the patients to the development of other atopic diseases. Genetic or acquired loss of FLG with epidermal structure impairment results in significant changes in skin hydration and an increase in skin pH. Consequently, serine protease activity is increased. Serine proteases, via suitable mediators, recruit and activate innate cell types that release the cytokines IL-25 and IL-33, and particularly TSLP. TSLP promotes the activation of dendritic cells, which migrate into the lymph organs and activate naive T cells and B cells, finally resulting in a Th2 immune response. Increased penetration of allergens due to the disturbed structure of the skin barrier contributes to the Th2 immune response. TSLP and the proinflammatory cytokines enter the systemic circulation and consequently reach the airway and nasal mucosa, contributing to asthma and AR. On the other hand, Th2 cytokines such as IL-4, IL-13 and IL-25, as well as cytokines of other T-cell subtypes whose numbers increase in AD depending on the disease subtype, may suppress FLG expression in keratinocytes or aggravate the inborn barrier defect through a positive feedback loop. The pattern of atopic march development, including FLG deficiency and the currently known major factors, is illustrated in Figure 2.
What is filaggrin?
Filaggrin (from "FILament AGgregating pRoteIN) is a structural, S100 calcium-binding epidermal SC protein (22). It binds the intermediate filament of keratin producing micro-fibrils and is responsible for the normal SC function. It is a product of the proteolytic activity of the serine protease group enzymes to a large (molecular mass (Mm) > 400 kDa), insoluble and functionally inactive precursor molecule -profilaggrin. Profilaggrin is a complex, highly phosphorylated molecule rich in histidine and is the main constituent of the F-type keratohyaline granules of the stratum granulosum (SG). The profilaggrin expression is under the control of transcription factors such as the Activator protein 1 (AP-1) transcription factor family members (23). Profilaggrin encodes the FLG gene, one of the genes that encode proteins with epidermis formation function. It is localized on the short arm of chromosome 1 and belongs to the group of socalled epidermal differentiation complex (EDC), located in region 1q21. During the epidermal differentiation process of keratinocytes in the boundary between SG and SC, the profilaggrin molecule is dephosphorylated under the influence of phosphatase and becomes more soluble (24). Further action of different proteases results in cleavage of the profilaggrin molecule in several stages, sepa-rating the N-and C-terminal domains of the molecule, splitting the central part into trimers and dimers, and finally into 10-12 functional monomersthe FLG molecules. Each FLG molecule consists of 324 amino acids, with an Mm of 37 kDa. The resulting FLG monomers aggregate keratinic filaments along with the catalytic activity of the transglutaminase-1 enzyme, resulting in the marked morphological and cytostructural changes of keratinocytes (become flattened corneocytes). In addition to FLG, many other proteins, such as loricrin, involucrin, small proline-rich proteins, are involved in the construction of the cornified envelope (25). In the further course of epidermal differentiation, separation of the part of FLG from the structure of the cornified envelope occurs resulting in post-translational conversion of arginine residues into the citrulline residues. The transformed FLG molecule is subject to the action of caspase-14 and is degraded. The FLG degradation products include a mixture of hygroscopic amino acids such as glutamine, histidine, alanine and their derivatives such as trans-urocanic acid (tUCA), degradable product of histidine and pyrrolidone carboxylic acid (PCA), degradable glutamine and glutamic acid product (26). These compounds along with some other such as hyaluronic acid, lactate, sodium, potassium, magnesium, phosphate, calcium, and citrate are the main constituents of so-called natural moisturizing factor (NMF) of the skin (27). The main characteristics of profilaggrin, FLG and its degradation products are presented in Table 1. In addition, inflammatory and immune changes may result in filaggrin deficiency. Therefore, there is a positive feedback loop. Skin barrier impairment caused by deficiency of filaggrin; reduced filaggrin function leads to increased activity of TSLP, which through dendritic cells acts to promote both Th2 cell adaptive immune responses and Th2 innate immune cell response (marked as 1). Allergen uptake and presentation to dendritic cells in a Th2 context (marked as 2). Th2 cell expansion and activation (marked as 3). B cells activation and IgE switch (marked as 4). Neutrophil recruitment and activation (marked as 5). 
Eosinophil recruitment and mediators' release (marked as 6). Immunoglobulin E binding to the high-affinity IgE receptor (FcεRI), on mast cell and basophil receptors (marked as 7). Th -T helper. IL -interleukine. IgE -immunoglobulin E. TSLP -thymic stromal lymphopoietin. To date, approximately 60 loss-of-function FLG mutations have been identified, with a difference in the spectra of mutations between different populations (28). The frequency of FLG mutation in the general population is 8-10%. The most common mutations in the European population are the R501X and 2282del4 mutations, resulting in premature transcription or inability to appropriately process profilaggrin in FLG. Mutations R244x and S3247X were found with significantly lower frequency (29). Lack of functional FLG due to the FLG gene mutation and to inflammatory and proinflammatory mediators that may affect FLG expression (acquired deficiency), results in a dysfunction of processes necessary for the corresponding protective SC role.
Filaggrin and skin barrier
The skin barrier function is largely dependent on the SC. The interior of the corneocytes consists mainly of keratin filaments aggregated by FLG, which is one of the components that provide a scaffold for the extracellular lipid matrix. Filaggrin degradation products maintain, in part, both the water-holding capacity and the acidic pH of the SC (25). Natural moisturizing factor components are essential for skin barrier integrity, i.e. for skin hydration, skin pH modulation, immunosuppressive properties, antimicrobial defence, photosensitivity and skin elasticity (27). Water loss leads to loss of skin elasticity. Inadequate hydration of the skin can also stimulate the production of proinflammatory mediators. The skin pH, normally ranging from 4.5 to 5.5, is increased. Endogenous causes of the skin pH increase are amino acid deficiency, disrupted decomposition of amino acids into tUCA and PCA, degradation of fatty acids from epidermal phospholipids, and changes of Na+/H+ transmembrane transport. Increased pH results in increased activity of serine proteases, which can activate some cytokines and cause inflammatory reactions. The exogenous causes of pH increase include microbial skin flora metabolites, the activity of sweat glands, and sebum-derived compounds. A shift in pH towards neutral or alkaline values favours the development of various pathogens on the skin surface, most commonly Staphylococcus aureus and Candida albicans. All these changes result in reduced SC cohesion, enabling penetration of allergens and other pathogens into the skin.
Filaggrin mutations, i.e. FLG deficiency, are also associated with various skin diseases (ichthyosis vulgaris, eczema herpeticum, atopic dermatitis, recurrent infection with Staphylococcus aureus, allergy to nickel, allergic contact dermatitis in combination with AD, eczema, etc.) but also with non-skin diseases (e.g. asthma in combination with AD, peanut allergy, allergic rhinitis, diabetes) (30).
Therapeutic approaches
New therapeutic approaches directed at epidermal barrier defects are being attempted (31). Primary intervention in the treatment of diseases involving FLG deficiency is directed at improving profilaggrin processing into FLG, recombinant FLG supplementation, and NMF supplementation. It is believed that such a therapeutic approach could have a significant effect on a broader range of diseases in carriers of FLG gene mutations and in persons with FLG deficiency.
Atopic dermatitis and filaggrin
Atopic dermatitis is a common multifactorial skin disease of childhood, and it is postulated to be the first manifestation of the atopic march (32). The prevalence of AD has increased in recent decades; it currently affects up to 20-30% of children and up to 3% of adults worldwide (3). It shares some common features with FA, asthma and AR, including abnormal skin barrier function, allergen sensitization and a type 2 immune response (33). Clinical manifestations of AD are the result of the synergistic action of genetic and immune mechanisms in combination with environmental factors (climate, nutrition, obesity, etc.). The disease is characterized by inflammation, immune dysregulation, skin barrier dysfunction, and, in most but not all AD patients (10-40% lack it), IgE-mediated sensitization to food and environmental allergens. Major symptoms include xerosis, eczema lesions, dry itchy skin, and chronic or relapsing dermatitis. Consequently, the continuous itch-scratch cycle results in trans-epidermal water loss and secondary skin infections (34) (Figure 3).
The form of AD with a normal serum IgE concentration is known as non-atopic or intrinsic AD. It is not related to other atopic diseases, but the clinical picture does not differ from atopic or extrinsic AD (36). In addition to the skin, systemic sensitization is also associated with AD. According to Brunner et al., various comorbidities are being investigated, such as associations with cardiovascular, neuropsychiatric and malignant diseases, but proof of this connection requires further longitudinal studies (37). Traditionally, dysfunction of the adaptive immune system has been considered the primary pathogenic mechanism that initiates AD, specifically a Th2/Th1 imbalance and inflammation, which secondarily result in skin barrier impairment, regarded as an epiphenomenon. In a number of studies that followed the discovery of FLG gene mutations, epidermal barrier impairment including FLG deficiency has been considered the primary pathogenic initiating mechanism of AD (38).
According to Thyssen et al. the important observations that put FLG into the centre of pathogenesis of AD are: impaired FLG values in skin with or without lesions in patients with AD, the fact that approximately 50% of patients with moderate or severe AD have at least one FLG mutation, that a decrease in the number of copies of FLG increases the risk of AD, that the decreased values of FLG were demonstrated in skin lesions despite the absence of FLG mutations, and that some of the existing therapies can repair FLG values (39). The development of AD, along with genetic and immune mechanisms, is also strongly influenced by environmental factors such as industrialization, stress exposure, obesity, excessive use of antibiotics, sedentary lifestyle, etc (40).
Specifically, the damaged skin-barrier function allows, on the one hand, penetration of various allergens/haptens, environmental pollutants, and toxins resulting from bacterial colonization and, on the other hand, sensitization to allergens. For example, it has been shown that phthalate metabolites, common ingredients of cosmetic products, increase the risk of AD, and that children with the FLG P478S mutation absorb phthalates (41). The pathogenesis of AD is also linked to mutations of the FLG-2 gene, which encodes filaggrin-2, a protein similar to FLG in localization, amino-acid composition and biochemical properties (42). Other genes may also play a role in the pathogenesis of AD, e.g. the HRNR gene encoding hornerin, the RPTN gene encoding repetin, the SPINK5 gene encoding the serine protease inhibitor Kazal-type 5, the LOR gene encoding loricrin, and other genes located close to the epidermal differentiation complex (43-45). In addition, mutations of genes involved in the immune response (e.g. IL-4, IL-5, IL-13, IL-18, IL-31, IL-4RA) and of genes encoding junctional proteins, e.g. claudin-1 (CLDN1) and desmoglein 1 (DSG1), have been investigated (46,47). The investigation of the pathogenesis of AD and the atopic march also focuses on various epigenetic modifications, such as methylation of DNA, FLG and TSLP, and on variants of enzymes involved in profilaggrin synthesis and processing (48). Most studies have shown that the lack of structural and functional epidermal molecules not only disturbs the structure of the barrier, but is also a mediator of immune and allergic processes. There is a reciprocal relationship between skin barrier dysfunction and the immune response in the pathogenesis of AD. Therefore, AD can still be considered both a disease of skin barrier impairment and an immune disease (49). Mechanisms of AD progression into other atopic diseases are not quite clear. One of the participants in this process is platelet-activating factor (PAF), an endogenous phospholipid inflammatory mediator released by various types of cells, e.g. alveolar macrophages, in allergic and inflammatory conditions (50). Release of PAF is associated with increased vascular permeability, eosinophilic granulocyte attraction, and bronchoconstriction.
Filaggrin, asthma, and allergic rhinitis
Data on FLG expression in the human digestive tract and respiratory system are contradictory and more often negative (51,52). Moreover, the mechanisms linking FLG variants with asthma, AR or FA are not entirely understood. However, based on some animal models, it is considered that skin sensitization (including the Th17 cell subpopulation), facilitated by acquired FLG defects or mutations, can indirectly result in local but also systemic inflammation in distant organs (53,54).
Asthma
An estimated 300 million people worldwide suffer from asthma (55,56). Asthma is caused by genetic predisposition and various external stimuli (e.g. airborne allergens), which result in partially or fully reversible bronchoconstriction. It often presents differently in children and adults, and there are different phenotypes such as intermittent, persistent, aspirin-sensitive and severe forms. Symptoms mainly involve difficulty breathing, chest tightness, dyspnoea and cough, especially at night and in the morning (55). Asthma is the result of a disturbed interaction of the respiratory epithelium, the innate and adaptive immune system and environmental factors. In this regard, a large number of genes have been investigated, particularly genes responsible for Th2 cytokine expression in Th2 cells (IL-4, IL-13, IL-5), other T-cell subtypes, dendritic cells, eosinophil and neutrophil granulocytes, mast cells, macrophages and IgE (57). In the pathogenesis of asthma, the role of the ADAM33 protein, responsible for bronchial hyperresponsiveness, stimulation of airway smooth muscle, fibroblast proliferation and cytokine production, has been studied (58). Moreover, the nitric oxide vasodilator system, various growth factors (e.g. transforming growth factor-beta, granulocyte-macrophage colony-stimulating factor), inflammatory lipid mediators such as leukotrienes and prostaglandin D2, TSLP, which induces key changes in dendritic cells, and other molecules have also been investigated (59). Palmer and colleagues first described the relationship between the R501X and 2282del4 variants of the FLG gene and asthma in persons with AD (60). The case-control, family and population studies of these and/or other FLG mutations that followed indicate more clearly the significance of a defective epidermal barrier in the pathogenesis primarily of AD, and then of other atopic disorders, AR and asthma (61,62). Generally, the results of investigating this association in groups of subjects of different geographic origin show that patients with an FLG mutation have a moderately higher risk of developing asthma than those without the mutation, regardless of AD status (63). The risk is significantly higher (3- to 6-fold) in patients with AD who are carriers of an FLG mutation (64). It appears that the presence of a particular FLG mutation also has implications for some clinical features of asthma, such as the severity of the disease, the number of acute exacerbations, the degree of airway obstruction, and the quantity of medications needed to control the disease (65).
Allergic rhinitis
Rhinitis is one of the most common chronic conditions in children, but also in adults. It is an inflam-mation of nasal mucosa, characterized by nasal congestion (rhinorrhoea), itching, sneezing, and nasal blockage. Causes of rhinitis may be allergies, infections, drugs, hormones, various irritants, and may also be of idiopathic origin (66). Allergic rhinitis is characterized by a Th2 immune response including mediators such as IL-4, IL-5, IL-13, whose activity results in the formation of IgE and recruitment of eosinophilic and basophilic granulocytes, mast cells, which further release different mediators, such as histamine and cysteinyl leukotriene, as a vasoactive substance (67). About 40% of AR patients had asthma, and up to 80% of asthmatic patients reported AR (68). These data are not surprising since, besides the anatomical connection, the inhaled air is heated, moistened, and filtered in the nose to maintain homeostasis of the respiratory system (69). In the literature, therefore, more work is devoted to the connection of AR and asthma. Of the mechanisms considered in this connection, it seems that the most acceptable one is the systemic inflammatory response, as cited by Chawes (70). The same author found a significant association of FLG mutations with AR, but not with non-allergic rhinitis. Other authors also report about such association (71,72), which supports the hypothesis of the primary role of epidermal barrier impairment in this disease.
Open questions
Most of the scientific articles included in this review consider FLG deficiency, in particular caused by the FLG mutation, as one of the causes, the key risk factor, the AD modifier, and consequently the possible causes of atopic march.
The disruption of the skin barrier function associated with the lack of FLG in AD is, along with other known pathogenic factors, the basis for the allergic sensitization to food allergens and aeroallergens, and the consequent development of asthma and AR. The key questions in this context are 1) how to predict AD and when to prevent the occurrence of AD; 2) whether timely repair of skin barrier could prevent progression of AD and thus the possible development of asthma and AR; and 3) what an appropriate therapeutic strategy should include? To get the correct answers to these questions, it is necessary to define the best biochemical, preferably non-invasive markers for the detection and monitoring of the disease.
The ideal marker would be the one that would make it easier to set up a diagnosis, help assess the severity of the disease, identify the phenotypes AD, and predict the individual response to the therapy. However, the ideal marker does not exist.
Moreover, the laboratory assays currently used in practice mainly focus on monitoring immune or inflammatory changes (e.g. total and specific IgE, eosinophilic granulocyte counts, cytokine release assays) and less on the assessment of skin-barrier function impairment, including FLG deficiency. Examples of the latter are the determination of trans-epidermal water loss (TEWL) and of skin pH (73,74).
For the last few decades, scientists have been trying to establish methods for the detection of FLG deficiency, FLG degradation products, or products of profilaggrin processing, as well as of other barrier proteins. The analytical samples include human and animal skin biopsy samples, human skin equivalents and reconstructed skin samples, samples collected by the tape-stripping technique, and serum (75-80). Non-invasive in vivo methods for the investigation of skin components are implemented in clinical practice. Among novel methods, Raman spectroscopy, an optical biopsy method, offers the possibility of real-time characterization of skin components, including FLG (81). The concentration of NMF ingredients, for example amino acids and their derivatives, is determined using HPLC (76). The activities of enzymes involved in the processing of profilaggrin into FLG are also determined, and FLG is detected in paraffin-embedded samples by immunohistochemical staining (82,83). Particularly important in these studies are also methods for FLG genotyping, as well as proteomic analysis of FLG deficiency.
Of the non-invasive markers, the determination of TSLP expression, using a tape-stripping approach to sampling and mass spectrometry, as well as the determination of serum TSLP concentration by enzyme-linked immunosorbent assay (ELISA), appears to have significant potential for monitoring the clinical course of AD and the therapeutic effect (84,85). Also, thymus and activation-regulated chemokine (TARC), a high-affinity ligand for CC-chemokine receptor 4 (CCR4), in serum or plasma of patients has been proposed as a marker of disease activity (86,87).
A significant number of biomarkers include direct or indirect evaluation of FLG deficiency. However, taking into account the multifactorial nature of AD, it is necessary to define more clearly the most reliable biomarker for a single phenotype AD (12). Accordingly, it will also be possible to apply appropriate therapy with the aim of repairing the skin barrier.
In addition to appropriate preventive measures, an effective therapeutic approach to maintain adequate skin barrier function and inflammatory control is also needed. Within primary non-pharmacological prevention, some authors advise avoiding various types of allergens, while some promote tolerance. Early sensitization in the first year of life, especially to food allergens such as peanuts and milk, increases the risk of asthma by the third year of a child's life up to 7-fold (88). Other authors state that sensitization to food allergens can develop before food is consumed, and that it occurs through inflamed skin (allergens can be found in home dust and in creams and oils used for the care of new-born babies) (89). It is therefore considered that early oral consumption of such food can lead to tolerance, and that transcutaneous exposure to food allergens can lead to food sensitization. Based on the described causal link between early sensitization through a damaged skin barrier and the later development of atopic diseases such as asthma and AR, Johanson and Hershey promote immune tolerance as a "therapeutic" strategy, i.e. the early introduction of "allergenic" food (90). It has been described that the permeability of new-born skin is increased due to the insufficient presence of ceramide, which is associated with an increased risk of later development of AD (91). A review by Lowe et al., focused on the skin as a target for the prevention of the atopic march, includes the concept of preventive skin treatment with emollients from the first weeks of life to 6 or 8 months of age, i.e. prevention before the onset of symptoms of AD (92). A few well-designed larger trials are currently underway to demonstrate the long-term effects of moisturizing on the incidence of AD and on the incidence of FA and allergic airway disease. The authors discuss the advantages and disadvantages of such a strategy, as well as the choice of emollients and their safety and acceptability (92).
As a part of the primary therapeutic prevention of further AD development, as well as prevention of progression in asthma and AR, methods for compensation of reduced FLG level or its degradation products (NMF) are also developed.
According to Irvine, the group of candidates for FLG replacement therapy would include patients with FLG mutations, patients with variation in FLG repeat numbers or intragenic copy number variation, patients with secondarily reduced FLG (acquired deficiency), patients with a severe and persistent type of AD who have allergic sensitization and could develop asthma in the future, and patients with genodermatoses (93). Currently, about 80 clinical trials focused on topical AD treatment are ongoing, including drugs whose ultimate effect would be an increase in FLG (94). Animal and human investigations of drugs targeting mutant alleles, drugs increasing FLG expression by promoting the healthy allele, and direct FLG substitution drugs, i.e. recombinant FLG, are currently underway (95). Other therapeutic measures are the renewal of extracellular skin lipid profiles (various emollients with a mixture of ceramides, cholesterol, and fatty acids), regeneration of pH, and skin hydration. As immune dysregulation remains the key problem in AD and shares pathophysiological features with other atopic disorders, it is very important to control inflammation. Various biological drugs are under investigation, such as dupilumab (a monoclonal antibody directed against the IL-4 receptor α subunit that blocks signaling of both IL-4 and IL-13), nemolizumab (targeting the IL-31 receptor), ustekinumab (targeting IL-12/IL-23) and lebrikizumab (targeting IL-13) (31,96,97). However, further investigations are needed to determine the timing of the initiation of preventive measures, the effectiveness of various forms of therapy, safety, the mode of administration/dosage and the duration of the effect.
Conclusion
It is indisputable that the interactions between an impaired epidermal barrier and dysregulation of the innate and adaptive immune system, in association with environmental risk factors, are involved in the pathogenesis of AD. Atopic dermatitis is the most commonly occurring disease in early life and is associated with an increased risk of asthma and AR. The scientific literature is dominated by works in which AD of hereditary or other origin is considered an initiating event in the atopic march, and the lack of FLG, a key component of the epidermal differentiation complex, is considered one of the first participants in the pathogenesis of AD. In this context, if the skin barrier function is impaired, or if it is not repaired when a pathogenic process already exists, allergens will be able to enter continuously and the immune system will continuously produce antibodies. It is therefore expected that, along with the various preventive measures recommended for maintaining the barrier function, therapy-oriented barrier restoration, in this context FLG replacement and AD prevention, will minimize the risk of allergic immunization as well as the development of asthma and AR. To monitor the effect of such therapy, and of the therapy already in use, it is also necessary to find suitable laboratory markers. The complete pathogenesis of AD and the pathway explaining the atopic march, despite numerous new findings, including the significance of FLG deficiency, are still not quite clear due to their complexity and remain in the focus of numerous researchers.
Accordingly, researchers should continue to work on the identification of molecular basis and environmental triggers of AD, on the association of the immune system with FLG deficiency, but also with other proteins of the epidermal differentiation complex. That will help to establish appropriate prevention and therapeutic algorithms to prevent or at least slow the atopic march.
Potential conflict of interest
None declared.
Small bowel obstruction caused by secondary jejunal tumor from renal cell carcinoma: A case report
BACKGROUND Secondary jejunal tumor from renal cell carcinoma (RCC) is extremely rare in clinical practice and is easily missed and misdiagnosed because of the low incidence and atypical symptoms. CASE SUMMARY A 38-year-old male patient was diagnosed pathologically with left RCC after radical nephrectomy in 2012. The patient then suffered multiple lung metastases 2 years later and was treated with oral sorafenib without progression for 6 years. In 2020, an emergency intestinal segmental resection due to intestinal obstruction was required, and postoperative pathology confirmed a jejunal secondary tumor from RCC. The patient had a smooth recovery following surgery. Three months after surgery, the patient was diagnosed with left adrenal metastasis, and subsequent sintilimab therapy has stabilized his condition. CONCLUSION This report is written to remind urologists and pathologists of the potential for small intestinal secondary tumors when a patient with a history of RCC seeks treatment for digestive symptoms. Enteroscopy and abdominal contrast-enhanced computed tomography are essential means of examination, but severe cases require immediate surgical intervention despite the lack of a preoperative examination to distinguish tumor attributes.
INTRODUCTION
Renal tumors are among the most common tumors of the urinary system. Approximately 85% of renal tumors are renal cell carcinoma (RCC), and approximately 70% of these have clear cell histology [1]. Although the diagnosis and treatment of RCC are progressing rapidly, approximately 15% of RCC patients are diagnosed with metastases at the first visit [2], and approximately 20%-30% of patients with localized RCC will experience recurrence after radical or partial nephrectomy [3]. Small intestinal secondary tumors from RCC occur infrequently. The most common presentations are obstruction, anemia, pain, nausea, vomiting, weight loss, and other nonspecific malignancy symptoms [4], which are easily misdiagnosed. Additionally, postoperative surveillance of RCC rarely focuses on intestinal metastases, which are therefore easy to overlook. Herein, we present a case of small bowel obstruction caused by a jejunal secondary tumor from RCC and review the characteristics of its diagnosis, treatment, and prognosis.
Chief complaints
On March 8, 2020, a 38-year-old male patient presented to the emergency department with a 20-d history of reduced aerofluxus and defecation, accompanied by intermittent vomiting and abdominal colic.
History of present illness
The patient visited the hospital on July 30, 2012, presenting with a left renal tumor found via abdominal ultrasonography and computed tomography (CT). Laparoscopic radical nephrectomy was performed on August 4, 2012, and the postoperative pathology suggested a renal clear cell carcinoma measuring 6.0 cm × 6.0 cm × 5.5 cm (PT1b). The patient was diagnosed with pulmonary metastasis 2 years later through positron emission tomography-CT (PET-CT) and subsequently treated with oral sorafenib for 6 years without metastatic progression.
Personal and family history
The patient had no special personal and family history.
Physical examination
Physical examination revealed a surgical scar on the left waist, abdominal distension, and severe abdominal tenderness around the umbilicus with mild rebound tenderness. The liver and spleen were impalpable.
Imaging examinations
Abdominal CT revealed small intestinal obstruction caused by a large irregular mass in the jejunum lumen ( Figure 1A). There was no time for further examinations such as enteroscopy and abdominal contrast-enhanced CT.
FINAL DIAGNOSIS
The diagnosis was confirmed as small bowel obstruction due to a jejunal secondary tumor from RCC.
TREATMENT
Emergency segmental resection and end-to-end anastomosis of the small intestine were performed. The postoperative specimen was a cauliflower-like broad-based crunchy lesion measuring 6.5 cm × 5 cm × 5 cm ( Figure 1B). The results of the histologic examination were consistent with metastatic renal clear cell carcinoma. The lesion presented as masses of clear tumor cells with small nuclei and abundant cytoplasm surrounded by a net-like fibrovascular stroma (Figure 2A and B). The patient had a smooth recovery following surgery.
OUTCOME AND FOLLOW-UP
Three months after the small intestine surgery, PET-CT showed left adrenal metastasis from RCC ( Figure 3). Then sintilimab (200 mg, every 21 d) was given to control the disease progression. The therapy was continued for eight cycles, and the patient was in a stable condition by January 2021.
DISCUSSION
Metachronous metastasis of RCC to the small intestine is a distinctly rare occurrence and is complicated by concomitant lung metastases and subsequent adrenal metastasis [5]. In a retrospective analysis of 44 small intestinal secondary tumors, colon cancer was the most common primary tumor (43.2%), followed by pancreatic and ovarian cancer (11.4% each). RCC accounted for only two in the study (4.5%) [6]. Furthermore, an autopsy report showed that in all 687 cases confirmed as RCC by histology, only two cases were found to have small intestinal metastases, accounting for 0.5% of the 392 metastatic cases [7].
In 15 patients with small intestinal metastases secondary to renal cancer, as summarized by Mueller et al [8], 5 patients presented with gastrointestinal bleeding, 5 with intussusception, 2 with small bowel obstruction, 1 with biliary obstruction, and 1 with both gastrointestinal bleeding and bowel obstruction. Therefore, for patients with a history of renal cancer, the possibility of small intestinal secondary tumors should be considered if the above symptoms are present. Either misdiagnosis or delayed treatment may endanger the health of the patients.
When seeing a patient with similar symptoms and a history of cancer, the vital signs and severity of the disease should be assessed initially to determine the appropriate examination and treatment planning. Routine blood tests, liver function tests, bilirubin tests, stool occult blood tests, abdominal X-ray and, especially, enteroscopy and abdominal contrast-enhanced CT are practical accessory examinations for diagnosing small bowel tumors and evaluating accompanying symptoms. Serum creatinine tests and other renal function evaluations are also indispensable in patients with a history of surgery for a primary renal tumor [9]. If the situation permits, a full-body imaging examination should be performed to exclude other possible metastatic lesions. In this case, the preoperative diagnostic work-up, which is essential for small bowel tumors, was not complete because of the difficulty of applying enteroscopy and abdominal contrast-enhanced CT under emergency conditions.
Metastatic small bowel tumors should be differentiated from other malignant lesions such as adenocarcinoma, carcinoid, and lymphoma, the three most common cancers in the small intestine [10]. Valuable information for differential diagnosis can be obtained by examining the morphological features of the lesion via imaging. Small intestinal adenocarcinoma classically presents with a short annular stricture. Carcinoid tends to infiltrate the mesentery, causing angulation, kinking, rigidity, and separation of small bowel loops. Lymphoma always presents as a non-obstructive lesion with wall thickening combined with lymphadenopathy and systemic symptoms [11,12]. Metastatic cancers in the small bowel are usually large [13] and should be considered when the patient has a history of malignancy.
Since renal carcinoma is insensitive to chemotherapy and radiotherapy, resection of metastatic lesions is feasible and effective in patients with a previous nephrectomy [14]. Alt et al [15] reported a retrospective study of 887 patients with confirmed RCC who underwent radical nephrectomies, where 125 patients who underwent complete surgical removal of all metastases had a longer cancer-specific median survival (4.8 years vs 1.3 years; P < 0.001). Palliative interventions, including enterostomy, bypass surgery, or urgent selective arterial embolization, may benefit the patients with unresectable small intestinal metastatic lesions accompanied by acute intestinal obstruction or bleeding [16,17]. For patients with multiple metastases who are not suitable for surgery, targeted therapy and immunotherapy have become first-line treatments with symptom relief and improved prognosis [18].
CONCLUSION
This case study has been presented to remind urologists and pathologists of the possibility of small intestinal secondary tumors when a patient with a history of RCC presents with digestive symptoms. Enteroscopy and abdominal contrast-enhanced CT are essential examinations, but severe cases may require immediate surgical intervention even when preoperative studies to characterize the tumor cannot be completed.
Mucin histochemical profile in a variety of human colonic non-neoplastic and neoplastic lesions
Introduction: Mucin deregulation occurs in gastrointestinal lesions progressing to cancer. Evidence supports the utility of the mucin profile in predicting malignant transformation of preneoplastic colonic lesions; however, the data are controversial. Objectives: This clinicopathological descriptive study identifies the histochemical profiles of acidic and neutral mucins in human colonic tissues including inflammatory, non-neoplastic and neoplastic lesions. Mucin alteration in different histological grades of adenocarcinoma is also evaluated. Materials and Methods: After collection of relevant data, tissue samples from a cohort of 88 patients recruited from King Abdul Aziz Specialized Hospital, Taif, Saudi Arabia were examined. The dual Alcian Blue (AB; pH 2.5)/Diastase (D) Periodic Acid Schiff (PAS) technique was used to demonstrate acidic and neutral mucins, respectively. Statistical analysis including descriptive statistics, Pearson's chi-square test and linear regression was performed using SPSS. Results: Acidic and neutral mucins were positive in 67.1 and 46.6% of lesions, respectively. All ulcerative colitis tissues were positive for acidic mucin and 75% for neutral mucin. All non-neoplastic polyps were positive for both mucins. Adenomatous polyps demonstrated reduced mucin content, but neutral mucin (59.3%) was more frequent than acidic mucin (44.4%). About 86% of adenocarcinomas revealed acidic mucin and 22.2% showed neutral mucin, with trends for acidic mucin to decrease with grade and for neutral mucin to increase with grade. Conclusion: Alterations in both acidic and neutral mucin profiles occur in colonic lesions of different pathologies. As colonic adenocarcinoma progresses from grade 1 to grade 3, acidic mucin declines with a corresponding increase in neutral mucin. The mucin profile may be of value in evaluating preneoplastic colorectal lesions.
Introduction
Epithelia are protected from the external environment by a mucus barrier composed largely of high-molecular-weight glycoproteins, the mucins. Mucins occur in either secreted or transmembrane forms at the epithelial lining of the gastrointestinal and respiratory tracts and the duct linings of the pancreas, kidney, liver and mammary gland. Secreted mucins act as a physical barrier, whereas transmembrane mucins provide this physical barrier with ectodomains made up of O-glycosylated tandem repeats. 1,2 For the colonic epithelium, mucus forms an adherent gelatinous sheet that protects the cells from injurious luminal contents including proteases, bile, ingested toxins and a substantial number of commensal bacteria. [1][2][3] The colon is now known to harbor a two-layered mucin system. 2 Through histochemical techniques, colonic mucins are categorized as highly glycosylated neutral mucins and less glycosylated acidic mucins, which are, in turn, subdivided into sialomucins and sulfomucins. 4,5 Normal colonic tissue mucins are predominately acidic with scanty neutral mucin, 6 and both can be sharply differentiated using Alcian blue (AB) and Periodic acid Schiff (PAS) special stains. 7 Deregulation of mucin has been reported in a variety of gastrointestinal tract lesions progressing to cancer. 9 Carcinomas of the esophagus, gastro-esophageal junction, breast, prostate, lung and pancreas commonly overexpress transmembrane mucins to exploit their role in promoting growth and survival. 1,10 In this respect, an altered mucin profile has been linked to changes in cell polarity and cell-stromal interaction. 11 Recent evidence supports the utility of mucin expression to predict malignant transformation of preneoplastic colonic lesions. 12 However, only a few studies have examined the mucin profile in premalignant lesions of the colon, such as adenomas and dysplasia in ulcerative colitis (UC), as well as the mucin profile in colorectal cancer (CRC). 13 Moreover, the data yielded by these studies have been contradictory. 8,14 In the Middle East, 13 and specifically in the Kingdom of Saudi Arabia (KSA), the mucin expression profile of CRC is not well known. To the best of our knowledge, there has been only one report that studied the impact of mucin production on the prognosis of patients with CRC. 15 Therefore, this study was conducted with the aim of identifying the histochemical profiles of acidic and neutral mucins in a series of human colonic tissue samples including inflammatory lesions, non-neoplastic polyps, adenomatous polyps and adenocarcinomas.
We also investigated mucin changes in relation to different grades of colonic adenocarcinoma.
Materials and Methods
This clinicopathological descriptive cohort study included 88 patients recruited retrospectively from King Abdul Aziz Specialized Hospital in Taif city, KSA during the period between December 2017 and February 2018. Patients' clinicopathological data, archived microscopic slides and tissue paraffin blocks were obtained from Pathology Department at the same hospital. Staining and study procedures were conducted at the Clinical Laboratory Sciences Department at College of Applied Medical Science, Taif city, KSA.
Inclusion and exclusion criteria
This study included biopsies from patients diagnosed with a variety of colonic and appendiceal lesions including inflammatory lesions (follicular appendicitis, suppurative appendicitis and ulcerative colitis); colonic non-neoplastic and adenomatous polyps and colonic adenocarcinomas. Patients who underwent endoscopic colonic biopsy, polypectomy, appendectomy or colectomy during the study period were enrolled in the study provided that their clinicopathological data and archival material required for histopathological examination were available for further study.
Exclusion criteria were cases who received preoperative chemotherapy or radiotherapy, incomplete records and unavailability of histopathological slides or paraffin tissue blocks.
Histochemical staining
Tissue sections cut at 3 to 4 µm were prepared from each FFPE tissue block. The sections were deparaffinized, rehydrated through a graded series of alcohol and then submitted for dual staining with Alcian Blue (AB) at pH 2.5 and diastase (D)-pretreated Periodic Acid Schiff (PAS) to demonstrate acidic and neutral mucins, respectively. The AB pH 2.5/DPAS technique was implemented using standard protocols. 7,14 Histologically normal tissue adjacent to the tumor margins served as control tissue.
Slide examination and evaluation of staining
The routinely prepared H&E-stained slides obtained from archival material were reviewed and re-evaluated by two pathologists to confirm the diagnosis of each. Colorectal adenocarcinomas were classified as grade 1, grade 2 and grade 3 using the WHO criteria. Slides stained with AB; 2.5/DPAS were independently assessed by two pathologists. The nature of mucin was analyzed by evaluating 5 randomly selected fields at 100x magnification. The neutral mucins stained magenta, while the acidic mucins stained blue.
Data analysis
After coding and tabulation of the collected data, statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) program, version 18.0. Descriptive statistics were calculated and data were expressed as numbers and percentages. Pearson's chi-squared test (χ²) was used to assess differences between groups. Linear regression analysis at the 95% confidence interval was performed to determine the trend of acidic and neutral mucins in relation to the histological grades of colonic adenocarcinoma. A p value of 0.05 or less was considered statistically significant.
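As an illustration of the statistical approach described above, the following minimal sketch reproduces a chi-square test of association and a linear trend analysis in Python rather than SPSS. All counts and percentages in the snippet are placeholders chosen for illustration, not the study data, and the grade 2 value in particular is hypothetical.

```python
# Minimal sketch of the analysis described above: a chi-square test of association
# and a linear trend of mucin positivity across tumour grades.
# All numbers are illustrative placeholders, not the study data.
import numpy as np
from scipy.stats import chi2_contingency, linregress

# Hypothetical 2x2 table: rows = lesion types, columns = mucin positive / negative.
table = np.array([[30, 6],
                  [12, 15]])
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.4f}")

# Trend of percent positivity across histological grades 1-3
# (grade 1 and 3 values follow the text; the grade 2 value is hypothetical).
grades = [1, 2, 3]
acidic_positive = [94.7, 85.0, 71.4]
slope, intercept, r, p_trend, se = linregress(grades, acidic_positive)
print(f"slope = {slope:.2f}, R-squared = {r**2:.4f}, p = {p_trend:.4f}")
```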
Ethical considerations
This study was conducted retrospectively and did not influence any biopsy or operative decision. The study was approved by the local ethics committee at the College of Applied Medical Science, Taif city, KSA.
After being oriented about the study purpose, informed consents were obtained from the patients or their relatives to access their data and use their archived microscopic slides and tissue specimens. All procedures were followed in accordance with the current revision of Helsinki Declaration.
Results
As detailed in Table 1, the study included 88 patients ranging in age from 5 to 100 years. The largest proportion of patients (29.5%) was in the 41-to-60-year age group. About 61% of the patients were males and the remainder were females. The histopathological diagnoses of the study cases comprised acute suppurative and follicular appendicitis (11.4% and 8%, respectively); ulcerative colitis (4.5%); non-neoplastic and adenomatous polyps of the colon (4.5% and 30.7%, respectively) and colonic adenocarcinoma (40.9%). Tissue samples were obtained by a variety of procedures including appendectomy, endoscopic biopsy, polypectomy and colectomy. There was a statistically significant difference when either age group or tissue sampling procedure was compared with the histopathological diagnosis (p values of 0.002 and ≤0.0001, respectively).
The histochemical profile of acidic and neutral mucins for the 88 studied appendiceal and colonic lesions is demonstrated in Table 2, based on the histopathological diagnosis (Figure 1). Acidic mucin was positive in 67.1% of lesions, while neutral mucin positivity was demonstrated in 46.6% of the examined tissues. Appendiceal lesions revealed a positivity of 57.1% for both acidic and neutral mucins in follicular appendicitis and a slightly higher neutral mucin positivity (60%) in appendices with suppurative pathology. In ulcerative colitis, all cases demonstrated acidic mucin and 3 out of 4 cases revealed positivity for neutral mucin. All non-neoplastic polyps were positive for both acidic and neutral mucins. Overall, adenomatous polyps demonstrated reduced mucin content, but neutral mucin was more frequently positive (59.3% of cases) than acidic mucin (44.4% of cases). On the contrary, adenocarcinomas (Figure 2) were overall predominately positive for acidic mucin (86.1%) and less frequently positive for neutral mucin (22.2%).
When the acidic and neutral mucin profiles were compared across the histological grades of colonic adenocarcinoma (Table 3), 94.7% of grade 1 adenocarcinomas demonstrated positive staining for acidic mucin, with a trend for the staining frequency to decrease with progression from grade 1 to grade 3 (71.4%) (Figure 3; left panel). Conversely, neutral mucins were demonstrated in a distinctly lower proportion of grade 1 adenocarcinomas (10.5%), with a trend for the staining frequency to increase up to 42.8% in grade 3 adenocarcinomas (Figure 3; right panel). Nevertheless, linear regression analysis at the 95% confidence interval yielded non-significant p values for both acidic and neutral mucins, although the data fit the regression lines closely (R-squared values of 0.9777 and 0.9859, respectively).
Discussion
Colorectal cancer (CRC) is the third most common cancer worldwide among both males and females. It constitutes 6.1% of the overall cancer incidence and 9.2% of cancer mortality. 16 In Saudi Arabia, CRC accounts for 11.9% of diagnosed cancers and is the second most commonly occurring cancer among Saudi populations, with the main burden on the older population (median age of 60 years for males and 56 years for females). 2,17,18 The traditionally known colonic precancerous pathological lesions include dysplastic ulcerative colitis, aberrant crypt foci (ACF) and colonic adenomas including the serrated ones. 12,19,20 Lately, the concept that hyperplastic polyps and mixed polyps are entirely innocuous lesions has been revised. 9 Moreover, mucin-depleted foci (MDF) have been recognized to serve as preneoplastic lesions of CRC that can be used as a biomarker in colon carcinogenesis. 20,21 Hence, recent evidence supports the utility of mucin expression to predict malignant transformation of colonic preneoplastic lesions. 12 Based on this assumption, the current study was designed to assess acidic and neutral mucin profiles in a variety of human colonic non-neoplastic and neoplastic lesions, including those of the appendix, using mucin histochemical staining. In addition, the mucin profile was compared across different grades of colonic adenocarcinoma. To achieve this, tissues from 88 patients diagnosed with different colonic and appendiceal lesions were enrolled in the study. Overall, the AB pH 2.5/DPAS technique revealed that acidic mucin is more frequently positive in colonic lesions than neutral mucin. Inflammatory lesions of the appendix as well as UC revealed increased neutral mucin and reduced acidic mucin compared with normal colon. Neutral mucins were identified in 75% of UC tissues, which was considerably higher than the neutral mucin positivity in other colonic lesions such as adenomatous polyps and adenocarcinomas. Previous studies have demonstrated a change in mucin secretion in chronic inflammatory conditions of the colon. In inflamed colonic mucosa, there seem to be differences in mucin structure as a result of reactive oxygen species-mediated injury and the associated process of mucosal repair dominated by increased epithelial turnover. 19,22 Thus, evidence suggests that the thickness and effectiveness of the mucus barrier are altered in these conditions. 19 The question remains, however, as to whether a primary defect in epithelial mucin could be a factor in the etiology of UC. Altered glycosylation of mucin side-chains and changed expression of core proteins at transcriptional, translational or post-translational levels have also been assumed to occur in inflammatory bowel diseases. 9 In the present study, both mucin types were detected in all cases of non-adenomatous polyps. While conventional knowledge claims that hyperplastic polyps and mixed polyps show a reduction in the ratio of goblet cells to columnar cells, recent evidence indicates that the so-called columnar cells constitute the source of excess mucin due to mucin gene up-regulation in this subset of cells. 9 Adenomatous polyps showed a reversed mucin profile when compared with normal colon, as neutral mucin was more frequently detected (59.3%) than acidic mucin. In adenomas, increased neutral mucin could be explained by the arrested maturation of crypt differentiation to recapitulate the crypt base. 9 It is well known that goblet cells with neutral mucins are present in the lower two-thirds of the crypts in the caecum and the colon. 23
A preceding study considered this change in mucin production as a marker to identify precursor lesions in the adenoma-carcinoma sequence of colorectal carcinogenesis. 12 Additionally, changes in mucin glycosylation and distribution were found to play an important role in the same sequence. 13 On the contrary, adenocarcinomas in this study recapitulated normal colonic tissue, which demonstrates a predominance of acidic mucin (80%) and scantiness of neutral mucin (20%). 14 Similarly, an earlier study performed to assess the histochemical composition of mucin in CRC indicated the presence of mixed acidic and neutral mucin secretions with a peculiar prevalence of acidic ones (more than 80%). 24 A former study including patients from Saudi Arabia demonstrated the role of colonic mucin alteration in the progression of CRC. 15 Nevertheless, an important finding of the current study is the trend of acidic and neutral mucins to change with the grade of colonic adenocarcinoma. Acidic mucin revealed a trend to be high in grade 1 lesions (94.7%) and to decrease in frequency with increasing tumor grade. Conversely, neutral mucin was detected in 10.5% of grade 1 adenocarcinomas but demonstrated a trend to increase in higher adenocarcinoma grades. This mucin profile in CRC could be related to midgut epithelial metaplasia affecting neoplasms of the distal large bowel. 9 A recent comparable study also revealed a general decline in acidic mucin as adenocarcinomas of the colon progress from well-differentiated (73.3%) through moderately differentiated to poorly differentiated adenocarcinomas (63.7%), with a corresponding upsurge in the positivity of neutral mucins (26.3% and 36.3%, respectively), although the authors reported an opposing pattern in a small subpopulation of high-grade colonic cancers. 14 In this context, acidic mucins have been suggested to inhibit tumor growth 25 and to be involved in the control of cell division. Therefore, downregulation of acidic mucin expression is suggested to drive the colonic tissue toward malignancy. 14 In contrast to our findings, Jain et al. 8 reported increased secretion of mostly acidic mucin and also neutral mucin in CRC, and Ali et al. 7 demonstrated positivity for acidic mucin and negativity for neutral mucin in all cases of CRC. Differences may be attributed to the various pathways of neoplastic progression that contribute to the molecular and biological heterogeneity exhibited by CRCs. 12 The on/off switching of gastrointestinal mucin genes could be a biomarker for genetic instability, thereby explaining the associations with gastrointestinal neoplasia. 9
Conclusion
In this study, altered acidic and neutral mucin profiles have been demonstrated in a variety of non-neoplastic and neoplastic lesions of the colon. A decline in acidic mucin from grade 1 to grade 3 colonic adenocarcinoma, with a corresponding increase in neutral mucin, was documented in this work. Determination of the mucin profile may be of value in evaluating preneoplastic colorectal lesions. Future efforts are warranted to determine the utility of the mucin profile in risk stratification of different colonic lesions to enhance early-stage management of this malignancy.
Acknowledgments
The authors' institutions are acknowledged.
Ethical Statement
The study was conducted following the aforementioned ethical considerations.
Conflict of Interest
The authors declare that they have no conflict of interest.
Sea Anemone Membrane Attack Complex/Perforin Superfamily Demonstrates an Evolutionary Transitional State between Venomous and Developmental Functions
Abstract Gene duplication is a major force driving evolutionary innovation. A classic example is generating new animal toxins via duplication of physiological protein-encoding genes and recruitment into venom. While this process drives the innovation of many animal venoms, reverse recruitment of toxins into nonvenomous cells remains unresolved. Using comparative genomics, we find members of the Membrane Attack Complex and Perforin Family (MAC) have been recruited into venom-injecting cells (cnidocytes), in soft and stony corals and sea anemones, suggesting that the ancestral MAC was a cnidocyte expressed toxin. Further investigation into the model sea anemone Nematostella vectensis reveals that three members have undergone Nematostella-specific duplications leading to their reverse recruitment into endomesodermal cells. Furthermore, simultaneous knockdown of all three endomesodermally expressed MACs leads to mis-development, supporting that these paralogs have nonvenomous function. By resolving the evolutionary history and function of MACs in Nematostella, we provide the first proof for reverse recruitment from venom to organismal development.
Introduction
In recent years, the starlet sea anemone Nematostella vectensis has been developed into a model system in the study of evolutionary developmental biology (Hand and Uhlinger 1992).This is due to Nematostella being a member of the ancient phylum Cnidaria (sea anemones, jellyfish, corals, and hydroids) that diverged from Bilateria more than 600 million years ago, yet sharing a striking level of conservation in genomic features to bilaterians, such as synteny and gene content (Cartwright et al. 2007;Putnam et al. 2007;Technau et al. 2015;Rentzsch and Technau 2016;Zimmermann et al. 2023).In addition, Nematostella has been established as a model to unravel the evolution and function of genes due to its ability to complete a full life cycle in the lab, its fully sequenced chromosomal-level genome, and the development of new tools for its genetic manipulation, such as transgenesis methods, gene knockdowns, and knockouts (Layden et al. 2016;Al-Shaer et al. 2021).While this has been essential in unraveling characteristics of the cnidarian-bilaterian common ancestor, it has also revealed key insights into the evolution of novel innovations, in particular the evolution of venom (Moran et al. 2015;Surm and Moran 2021;Babonis et al. 2022;McCulloch et al. 2023).
Members of Cnidaria are venomous and employ specialized organelles called nematocysts as miniature venom injectors (Kass-Simon and Scappaticci 2002).These highly complex intracellular biological structures are composed of various structural polymers that are packed in stinging cells called nematocytes; then, upon specific signals, the nematocysts discharge at remarkable speed and puncture their target (Kass-Simon and Scappaticci 2002;David et al. 2008).Among the list of cnidarian model systems, which is continuously expanding, Nematostella's venom system is arguably the most well-studied (reviewed in Surm and Moran 2021).These works have revealed key insights into the molecular mechanisms underlying the nematocyte developmental origin (Steger et al. 2022;Tournière et al. 2022), such as nematocytes originating from a common neurosecretory progenitor, to their diversity, such as connecting single transcription factor (NvSox2) which can switch between two alternative stinging cell fates (Babonis et al. 2023).Genetic manipulation techniques have also been crucial in characterizing toxin genes and their novel cell types (Columbus-Shenkar et al. 2018;Sunagar et al. 2018;Sachkova et al. 2019) as well as resolving longstanding evolutionary theories related to venom biology such as resolving their impact on fitness (Surm et al. 2024).
In addition, these genetic tools have also been valuable in tracking the genesis of toxins from genes with nonvenom functions, for example, the recruitment of conserved sea anemone neuropeptide into nematocyte following a Nematostella-specific gene duplication event (Sachkova et al. 2020a).This process is known as "recruitment," and it is assumed that many toxins originate from gene duplication of proteins that carry physiological roles, with their new paralogs undergoing neofunctionalization to become a toxin (Fry et al. 2009).Originally, recruitment into venom was described as a one-way process, where physiological genes transform into toxins; however, a previous study in squamate reptiles hinted that this may indeed be a twoway process where "reverse recruitment," i.e. the transformation of a venom protein back into a physiological nonvenom protein, can also occur (Casewell et al. 2012).However, reverse recruitment has yet to be proven experimentally.
In this work, we studied members of the membrane attack complex and perforin family (MACPF) in Nematostella (we refer to these as MAC).This family includes β-pore-forming toxins (PFTs) that are found in a wide variety of organisms from bacteria to mammals and are mostly employed for lysing cells by generating pores in their membranes (Anderluh et al. 2014).MACs were found to be toxins in the nematocysts of two sea anemones, Actineria villosa and Phyllodiscus semoni (Nagai et al. 2002;Oshiro et al. 2004;Satoh et al. 2007;Uechi et al. 2011).Using comparative genomics and phylogenetics, as well as interrogating the publicly available cell atlases, we find that members of MAC were recruited into the nematocytes of the last common ancestor of Anthozoa (soft corals, stony corals, and sea anemones).We find that following two rounds of lineage-specific duplications in Nematostella, three additional MAC paralogs were recruited into endomesodermal cells strongly suggesting that they carry nonvenomous functions.This is further supported by evidence that depleting these three endomesodermal MACs interferes with normal development in Nematostella.Additionally, two of these endomesodermal MAC paralogs still retain some weak expression in stinging cells and represent a "transitional form" between a toxin and a nonvenom protein-encoding gene.These findings are the first experimental proof of the reverse recruitment of venom and highlight the power of gene duplication in the rapid evolution of molecular innovation.
Evolutionary History of the MAC Family across Anthozoa
Using a phylogenetic framework, we investigated the distribution of MAC genes across Anthozoa.To do this, we used a combination of predicted proteomes generated from anthozoans with sequenced genomes, as well as additional transcriptomes from sea anemones (supplementary table S1, Supplementary Material online).Specifically, we find in the Nematostella (from the Edwardsioidea superfamily) genome seven sequences that encode for proteins composed of a signal peptide and a single MACPF domain, which we named NveMAC1 through 7 (Fig. 1).Similarly, we find that the MAC gene family has undergone numerous amplification events in other Hexacorallia genomes, resulting in the presence of four copies in the stony corals Stylophora pistillata and Acropora millepora each.Among sea anemone genomes, we find considerable variation in their copy number, with six copies in the other edwardsioidean: Scolanthus callimorphus, 14 copies in metridioidean: Exaiptasia diaphana, and two and three copies in the actinioideans: Actinia tenebrosa and Actinia equina, respectively.
Next, we generated a maximum-likelihood tree which reveals the presence of two distinct clades, one clade including only sea anemones and the other including soft corals, stony coral, and sea anemones (Fig. 1).Within both clades, the broad clustering is consistent with anthozoan phylogeny, with Octocorallia (soft corals) being the most diverged compared with Hexacorallia (which includes stony corals and sea anemones).Within the sea anemone-specific clade, the superfamilies Metridioidea and Actinioidea cluster together and species from Edwardsioidea are the most diverged.This topology is also consistent with actiniarian phylogeny.Beyond this broad clustering, we see repeated evidence of species-specific clustering, in Hexacorallia, suggesting lineage-specific duplications are underlying much of the evolution of this gene family.For example, we find that Nematostella (Nve) MACs 1 to 4 cluster together, in E. diaphana MACs 5 to 14 cluster together, and all S. pistillata and A. millepora sequences cluster in a species-specific manner.We also find evidence of gene loss events which is likely contributing to the patchy distribution of sea anemone sequences in the anthozoan clade which includes sequences from the genome of S. callimorphus but also sequences from the transcriptomes of species coming from all three actiniarian superfamilies.This suggests that gene loss events happened independently in both Nematostella and E. diaphana, suggesting that the evolution of the MAC gene family is highly dynamic.
While we report lineage-specific duplications among Nematostella MACs 1 to 4, we also find that some Nematostella and S. callimorphus MACs are orthologous, cluster together phylogenetically, as well as share chromosomal macrosynteny (Fig. 2, supplementary table S2, Supplementary Material online).Specifically, we find that NveMACs 5 and 6 are orthologous with S. callimorphus (Sca) MACs 4 and 3, respectively.They also share macrosynteny, with NveMAC5 and 6 being found on chromosome 3 in Nematostella which is homologous to chromosome 8 in S. callimorphus where ScaMAC 3 and 4 are found.We also find that NveMAC7 is orthologous with ScaMAC1 and ScaMAC2, clustering together as well as sharing macrosynteny (Fig. 1).This strongly supports that NveMACs 5 to 7 are likely the ancestral sequences, with MACs 1 to 4 evolving via Nematostella-specific duplications.
MACs from the three sea anemone superfamilies investigated can be found in both MAC clades.The clade containing only sea anemone sequences, however, includes proteins known to be a component of sea anemone venom profiles.Specifically, the sequences from P. semoni (PsTX60A and PsTX60B) and A. villosa (AvTX60B) have been isolated from nematocytes and shown to be toxic to shrimp and hemolytic toward sheep red blood cells (Nagai et al. 2002;Oshiro et al. 2004;Satoh et al. 2007;Uechi et al. 2011).Using AlphaFold, analysis of the predicted structure of NveMACs 1 to 4 with the predicted structure of PsTX60A and PsTX60B showed broad overlap among all sequences, suggesting that potentially NveMACs 1 to 4 might also have hemolytic activity (supplementary fig.S1, Supplementary Material online).This hinted to us that these sequences may therefore also play a role in the venom composition of other sea anemones including Nematostella.
Spatiotemporal Expression of MAC in Nematostella
The recent advancements in single-cell sequencing in nonmodel organisms have allowed the establishment of cell atlases of multiple different anthozoans including Xenia sp., S. pistillata, E. diaphana, and Nematostella (Hu et al. 2020; Levy et al. 2021; Steger et al. 2022; Cui et al. 2023). Strikingly, we find at least one MAC is expressed in nematocytes from each of the anthozoan cell atlases (supplementary fig. S1, Supplementary Material online). In Nematostella, which arguably has the most comprehensive cell atlas, generated using a detailed scRNA-seq data set across development, we find that NveMACs 1 to 3 have expression in nematocytes. To complement this, we investigated bulk RNA-seq of the Nematostella transgenic line expressing NvNcol3::memOrange2, a nematocyte marker (Sunagar et al. 2018; Gahan et al. 2022). We find that all Nematostella MACs are upregulated in NvNcol3::memOrange2-positive cells, further supporting that Nematostella MACs are expressed in nematocytes, including the more ancestral sequences NveMACs 5, 6, and 7. Finally, a recent endomesodermal-enriched scRNA-seq data set from planula was generated to construct a 3D spatial gene expression atlas of Nematostella. From this data set, we find that two MACs, MACs 1 and 4, are expressed at relatively high levels (>10) compared with the other NveMACs, as well as all other Nematostella toxins, which all had low to no expression in the Endoatlas (supplementary tables S1 and S3, Supplementary Material online). This finding suggests that these two NveMACs are expressed in the endomesoderm in addition to nematocytes. This is further supported by RNA-seq data from previous experiments where loss-of-function mutations in HOX genes (ANthox1a, ANthox6a, ANthox8), which are important regulators of endomesodermal segmentation, were introduced to Nematostella embryos (He et al. 2023). The sequencing revealed that those mutations resulted in significant downregulation of NveMAC1 and NveMAC4 (supplementary table S8, Supplementary Material online). No other NveMACs or Nematostella toxins were found to be dysregulated following the knockout of the HOX genes. However, a similar pattern is observed for other genes known to be expressed in the endomesoderm (He et al. 2023), further supporting that NveMAC1 and NveMAC4 are expressed in the endomesoderm.
Due to the conflicting results coming from different Nematostella single-cell and RNA-seq data sets, which revealed that NveMAC1 and 4 are likely expressed in both endomesodermal cells and nematocytes, we aimed to characterize the expression of NveMACs experimentally using in situ hybridization (ISH). Interestingly, of the four genes that are expressed at levels detectable by ISH, only NveMAC2 is expressed exclusively in nematocytes, whereas we find that NveMAC1 is expressed exclusively in endomesodermal cells (Fig. 3a). To verify the specificity of the obtained ISH patterns, we microinjected short-hairpin RNA (shRNA) against NveMAC1 into Nematostella zygotes and showed that the staining is greatly reduced in comparison with larvae of the same age that developed from zygotes injected with control shRNA (supplementary fig. S2, Supplementary Material online): NveMAC1 knockdown, 3/189 stained embryos; control shRNA, 145/146 stained embryos. In parallel, we explored the temporal expression of NveMACs using NanoString nCounter data, revealing that NveMAC1 expression is restricted to a very short period, peaking at the planula stage (Fig. 3b and supplementary fig. S3 and tables S4 and S5, Supplementary Material online). In contrast, NveMAC2 expression is relatively stable throughout the life of Nematostella. To support this, we extracted RNA-seq count data across early development (0 to 240 hpf) from the NvERTx database (Helm et al. 2013; Fischer et al. 2014; Warner et al. 2018) and performed principal component analysis (PCA) to help quantify and visualize the temporal expression of NveMACs. The PCA confirms that NveMAC1 and NveMAC2 have distinct temporal expression patterns, evident with each grouping separately (Fig. 3c). Taken altogether, NveMAC1 and NveMAC2 have distinct spatiotemporal expression patterns, with NveMAC2 likely playing a role in Nematostella venom, evident by its expression in nematocytes and stable expression across the life history of Nematostella, whereas the relatively short time window of NveMAC1 expression and its expression in endomesodermal cells raises the unexpected possibility that it might be involved in development.
Exploring the spatiotemporal expression of NveMAC3 and 4 reveals they exhibit an "intermediate" expression pattern between NveMACs 1 and 2. In the early planula, they are weakly expressed in nematocytes, similar to NveMAC2, and strongly expressed in endomesodermal cells, resembling the same cells that express NveMAC1 (Fig. 3a).Furthermore, we find that NveMAC4 temporal expression pattern mirrors the expression pattern of NveMAC1, evident by its grouping with NveMAC1 in our PCA (Fig. 3c).Interestingly, NveMAC3 temporal expression groups with NveMACs 1, 2, and 4 on PC1 axis
MAC Depletion Interferes with Nematostella Development
To investigate the functional role of NveMACs 1, 3, and 4, we depleted their expression in embryos by injecting shRNA targeting NveMACs specifically.Knockdown efficiency was confirmed in 4-day-old planula using qPCR, revealing that all shRNA used resulted in significant knockdown efficiency (supplementary table S6, Supplementary Material online; >50% knockdown and P-value < 0.05) of the target NveMAC compared with animals injected with control shRNA.After confirming the knockdown efficiency, additional animals were tracked until 10 d postfertilization (dpf).The control shRNA-injected embryos developed normally, undergoing metamorphosis and progressing into primary polyps.shRNA-injected embryos targeting specific NveMACs (1, 3, and 4) also resulted in normal developments, with animals undergoing normal metamorphosis (supplementary table S7, Supplementary Material online).
Given the overlap of RNA expression for NveMACs 1, 3, and 4 in the endomesoderm, we suspected that some compensation might be occurring.To test this, we coinjected shRNAs to target NveMACs 1, 3, and 4 simultaneously.We confirmed that this approach still results in a significant knockdown (Fig. 4a to c, >50%; P-value < 0.05) of all three NveMACs (1, 3, and 4) compared with planula injected with an equal concentration of control shRNA.Strikingly, embryos injected with a combination of shRNAs targeting MACs 1, 3, and 4 have developmental defects, with only ∼65% of 10 dpf of these animals developing into primary polyps compared with control (Fig. 4d to f, P-value = 0.0008), in which 85% of animals underwent metamorphosis into primary polyps by the same time point.Developmental defects in NveMACs 1, 3, and 4-depleted animals suggest these members of the MAC family are essential for proper Nematostella development.
Discussion
In this work, we unraveled the evolution of a gene family of proteins containing the MACPF domain in Nematostella as its role transitions from venom to development. Broadly, proteins that contain the MACPF domain are part of a superfamily of PFTs that have a large distribution, covering various groups of the tree of life, from bacteria to mammals (Anderluh et al. 2014; Moreno-Hagelsieb et al. 2017). The structure of MACs enables their function in lysing cells by generating pores in their membranes (Anderluh et al. 2014; Moreno-Hagelsieb et al. 2017). This ability to cause pores in cell membranes is useful for a variety of roles in eukaryotes ranging from immunity to development (Anderluh and Lakey 2008). Two prominent examples are Astrotactins from mammals and Torso-like from flies that carry important developmental roles (Berkowicz et al. 2017; Johnson et al. 2017).
Notably, some proteins that contain a MACPF domain, such as Astrotactin-2, can carry functions even if they are unable to lyse cells (Ni et al. 2016; Moreno-Hagelsieb et al. 2017).
The MACPF domain-containing proteins have undergone considerable variation at the sequence level, yet their predicted 3D structure is relatively well conserved.Notably, the sequences from P. semoni, which are known to lyse blood cells, share this high structural similarity (Nagai et al. 2002;Satoh et al. 2007).Given this conservation in structure, we suspect that all seven Nematostella MACs maintain the capacity to lyse cells, especially copies 1 to 4 which share the greatest similarity to P. semoni MACs.This ability to disrupt cell membranes can serve different cellular functions, including embryonic development (Anderluh and Lakey 2008).We find that development is disrupted when MAC1, 3, and 4 are depleted simultaneously.A potential explanation for this finding is that these genes are redundant and the expression of even one of them is enough to allow normal development.The recruitment of several MACs into developmental processes is a striking example for neofunctionalization.
Reconstructing the evolutionary steps that have led to the generation of a gene family is a complex and challenging process, largely due to the various potential trajectories that can occur following gene duplication. For the NveMAC gene family, we have predicted two possible evolutionary scenarios which rely on different molecular processes: reverse recruitment and subfunctionalization (Fig. 5a and b). However, to distinguish between these two scenarios, we first need to reconstruct the phylogenetic history of the NveMAC copies present in the Nematostella genome. From our macrosynteny and phylogenetic analysis, we find the NveMACs 5 to 7 are likely the ancestral sequences due to them being orthologous to other sequences from the Edwardsioidea superfamily. Furthermore, NveMAC6 is found to neighbor NveMAC5, suggesting that NveMAC5 evolved from a tandem duplication of NveMAC6. Understanding this evolutionary history of NveMACs allows us to then reconstruct the scenarios that led to the observed function and expression domains that we find in the different NveMACs.
For the first scenario, the most parsimonious explanation is that the ancestral sequence of NveMACs 1, 3, and 4 likely was expressed in both nematocytes and endomesodermal cells.For this to occur, the ancestral MAC (AncMAC) found in the last common ancestor of all anthozoans was recruited into the nematocytes first (Fig. 5a).Multiple rounds of gene duplications eventually led to the birth of the AncMAC1/3/4 gene which was reverse-recruited out of nematocytes and venom into endomesodermal cells to function in development.Following duplication leading to the birth of MAC1 and MAC4, we suspect that the residual expression of NveMAC1 in nematocytes was nearly lost as now it is so low that it cannot be detected by ISH.This supports that MAC1 is undergoing specialization in the endomesodermal cells.Taken together with evidence that MAC1 and MAC4 share very similar temporal expression patterns, it suggests that the specialization we see in NveMAC1 occurred after the duplication and generating MACs 1 and 4. We also suspect that specialization of MAC3 and MAC4 is also occurring with the nematocyte expression being lost; however, this loss is occurring via a more gradual process.
Alternatively, subfunctionalization may also explain the evolutionary trajectory of NveMACs. Although a relatively rare event following gene duplication, subfunctionalization is a process where the gene duplicates become specialized via partitioning of the ancestral functions (Lynch and Force 2000). In the case of NveMACs, a process of subfunctionalization would hypothesize that in the last common anthozoan ancestor, the AncMAC was recruited into both endomesodermal cells and nematocytes (Fig. 5b). Having expression in both the endomesoderm and nematocytes was then maintained following repeated duplication events until NveMAC1, and NveMACs 2, 6, and 7 underwent specialization to be expressed almost exclusively in the endomesoderm and nematocytes, respectively. This would also suggest the combined expression we find in NveMACs 3 and 4 is reflecting the AncMAC expression. For this scenario to be true, it would require the ancestral expression in the endoderm and nematocyte to be retained following four duplication events until the birth of NveMAC1, with all previous copies becoming specialized to be expressed in nematocytes after the duplication event. This is plausible only if the increased expression of NveMACs in the endomesoderm is deleterious, which would cause strong selection pressure to lose the endomesodermal expression following the birth of a new NveMAC. If indeed the ancestral NveMAC had expression in both endomesoderm and nematocytes, it would also assume that orthologs of AncMAC would have expression in both cell types. This scenario would therefore assume that a MAC found in the last common ancestor of Anthozoa had expression in both endomesoderm and nematocytes and became specialized to nematocytes independently four times. However, we found no evidence that members of this MAC gene family had expression in any other cell type. Our alternative hypothesis of reverse recruitment is more parsimonious, in which AncMAC was expressed in nematocytes and gained dual expression in NveMAC1, which later specialized to just the endomesoderm, and this dual expression has also been retained in NveMAC3 and 4. Furthermore, the reverse recruitment scenario does not require any assumptions regarding deleterious effects of MAC expression in the anthozoan endomesoderm, making it more plausible.
It should be noted that while we see no evidence of nematocyte expression of NveMAC1 in our ISH results, the scRNA-seq analysis suggests that this gene still exhibits some expression in nematocytes.This residual expression is caught by ISH for NveMAC3 and NveMAC4, suggesting it is more profound for these genes.The very weak expression of NveMACs 1, 3, and 4 in cnidocytes compared with their endomesodermal expression domain suggests that this is "vestigial expression" that might becoming nonfunctional.Furthermore, the expression of NveMAC3 and NveMAC4 both in nematocytes and in endomesodermal cells suggests that they may be a molecular transitional state between the ancestral expression domain in nematocytes to the derived expression domain in the endomesoderm.
Plausibly, maintaining the ancestral nematocyte expression of NveMAC3 and NveMAC4 may not be deleterious and therefore is kept due to neutrality.Our phylogenetic analysis suggests that these genes arise from more recent duplication events in which not enough time has occurred for them to lose this ancestral nematocyte expression via drift.This is consistent with previous systematic studies investigating gene duplicates in fungi: In this work, the authors show that genes involved in complex interactions, such as those essential for cell growth, are sensitive to increased gene expression noise associated with increased copy number and tend to not evolve via gene duplication (Wapinski et al. 2007).Contrastingly, genes that are responsive to stress and have dynamic gene expression levels tend to evolve via gene duplication.This is consistent with the function of venom-related genes, which are highly responsive to stress in Nematostella and have undergone significant copy number variation across populations to meet their ecological requirements (Sachkova et al. 2020b;Smith et al. 2023;Surm et al. 2024).
Overall, our findings have uncovered the striking evolutionary history of this gene family containing a MACPF domain in Anthozoa.We reveal that gene duplication is driving the recruitment of different members of this gene family into different cell types ranging from nematocytes, neurons, and endomesodermal cells.We have also discovered that members that have undergone more recent duplication events are currently going through a transitional state, where they have gained expression in a new cell type while still maintaining residual expression from their ancestral copy.This transitional state is an exciting discovery as piecing together the exact evolutionary process that leads to genes deriving their function is extremely difficult.The mechanistic basis for the expression of these genes in multiple spatial domains remains to be discovered.
Comparative Genomics and Phylogenetics
We analyzed transcriptomes from ten sea anemone species, spanning three of the five actiniarian superfamilies (Actinioidea, Edwardsioidea, and Metridioidea). These transcriptomes, which were sampled from either multiple tissues or tentacles, were downloaded from the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) using fastq-dump in the SRA toolkit. The retrieved raw reads were assessed for quality and trimmed using Trimmomatic (Bolger et al. 2014). Trinity was used to assemble transcriptomes de novo from the filtered raw reads (Grabherr et al. 2011; Haas et al. 2013). BUSCO (v4) was used to validate the quality and completeness of the transcriptomes (Manni et al. 2021). We predicted open-reading frames from each transcriptome using ORFfinder (https://www.ncbi.nlm.nih.gov/orffinder/) and performed BLASTp (E-value 1e-05) using sea anemone MAC toxins as queries. Retained sequences were then examined for the presence of a signal peptide using SignalP (v5.0) as well as a single MACPF Pfam domain (PF01823.22). A similar pipeline was also performed to identify MACs across Anthozoa using genomes from Xenia sp., A. millepora, S. pistillata, N. vectensis, S. callimorphus, A. tenebrosa, A. equina and E. diaphana (CC7). Sequences were then manually curated, and those including large insertions or deletions were removed.
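The final filtering step of this pipeline, retaining only sequences with a predicted signal peptide and a single MACPF domain, can be expressed compactly. The sketch below is a minimal illustration that assumes the SignalP and Pfam results have already been exported to simple two-column tab-separated tables; the file names and column layout are assumptions, not the exact outputs used in this study.

```python
# Sketch of the candidate-filtering logic described above: keep sequences that have
# a predicted signal peptide and exactly one MACPF (PF01823) Pfam domain.
# File names and column layouts are assumptions made for illustration.
import csv
from collections import Counter

# Assumed table: sequence_id <tab> signal-peptide call ("SP" or "OTHER").
has_signal_peptide = {}
with open("signalp_summary.tsv") as handle:
    for seq_id, call in csv.reader(handle, delimiter="\t"):
        has_signal_peptide[seq_id] = (call == "SP")

# Assumed table: sequence_id <tab> Pfam accession, one row per domain hit.
macpf_hits = Counter()
with open("pfam_hits.tsv") as handle:
    for seq_id, pfam_acc in csv.reader(handle, delimiter="\t"):
        if pfam_acc.startswith("PF01823"):
            macpf_hits[seq_id] += 1

# Retain candidates with a signal peptide and a single MACPF domain.
mac_candidates = sorted(seq_id for seq_id, n in macpf_hits.items()
                        if n == 1 and has_signal_peptide.get(seq_id, False))
print(f"{len(mac_candidates)} putative MAC sequences retained")
```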
The refined list of full-length MACs was used for phylogenetic analyses to reconstruct the evolutionary history of the family across Anthozoa. Protein sequences were aligned using MUSCLE in MEGA 11 (Tamura et al. 2021). Protein alignments were imported into IQ-TREE, and the best-fitting model of protein evolution was determined using ModelFinder (Nguyen et al. 2015). Using the Bayesian information criterion, a WAG+I+G4 model was selected as the best-fit model of protein evolution. Phylogenetic trees were generated from the alignments using 1,000 ultrafast bootstrap iterations and the SH-aLRT test (Guindon et al. 2010). The tree was visualized using Interactive Tree Of Life (Letunic and Bork 2016).
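The alignment and tree-building steps can also be scripted; the wrapper below is only a sketch of the workflow described above. The command-line options assume MUSCLE v3 and IQ-TREE 1.x conventions (newer releases use different option names) and should be checked against the installed versions.

```python
# Sketch of the alignment and tree-building workflow described above.
# Option names assume MUSCLE v3 and IQ-TREE 1.x; newer releases use different flags.
import subprocess

# Align the curated full-length MAC protein sequences.
subprocess.run(["muscle", "-in", "mac_proteins.fasta", "-out", "mac_alignment.fasta"],
               check=True)

# Maximum-likelihood tree with automatic model selection (ModelFinder, "-m MFP"),
# 1,000 ultrafast bootstrap replicates ("-bb") and SH-aLRT branch tests ("-alrt").
subprocess.run(["iqtree", "-s", "mac_alignment.fasta", "-m", "MFP",
                "-bb", "1000", "-alrt", "1000"],
               check=True)
```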
Homologous chromosomes were identified among the genomes to determine whether anthozoan MACs show any evidence of macrosynteny. This was achieved first by identifying 1,103 single-copy orthologs using OrthoFinder (Emms and Kelly 2019) with the predicted proteomes annotated from anthozoan genomes that were generated using long-read sequencing technology. This included A. millepora, Nematostella, S. callimorphus, and A. tenebrosa. The chromosomal locations of the single-copy orthologs were then compared to generate a macrosynteny map of chromosomes among the genomes. For the remaining genomes, we extracted all predicted proteins from the scaffolds containing MACs, performed BLASTp (E-value 1e-5) against Nematostella proteins, and counted the chromosomal locations of the hits in the Nematostella genome. For E. diaphana, scaffold50 was found to contain all MACs used in this study and was downloaded from Reef Genomics (http://aiptasiav2.reefgenomics.org/). Proteins were then predicted using the FGENESH (Solovyev et al. 2006) online server (http://www.softberry.com/berry.phtml).
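For the non-chromosome-level genomes, the chromosome-assignment step amounts to a simple tally of best BLASTp hits. The sketch below assumes the BLASTp results were saved in the standard tabular format and that a lookup table from Nematostella protein to chromosome is available; both file layouts are assumptions made for illustration.

```python
# Sketch of the macrosynteny tally described above: take the best BLASTp hit of each
# scaffold protein against Nematostella and count the chromosome carrying that hit.
# File names and the protein-to-chromosome lookup are assumptions for illustration.
import csv
from collections import Counter

# Assumed lookup: Nematostella protein ID <tab> chromosome name.
protein_to_chrom = {}
with open("nvectensis_protein_chromosomes.tsv") as handle:
    for protein_id, chromosome in csv.reader(handle, delimiter="\t"):
        protein_to_chrom[protein_id] = chromosome

# Standard BLAST tabular output; keep only the first (best) hit per query protein.
best_hit = {}
with open("scaffold_vs_nvectensis.blastp.tsv") as handle:
    for row in csv.reader(handle, delimiter="\t"):
        query, subject = row[0], row[1]
        best_hit.setdefault(query, subject)

chromosome_tally = Counter(protein_to_chrom[hit] for hit in best_hit.values()
                           if hit in protein_to_chrom)
for chromosome, count in chromosome_tally.most_common():
    print(chromosome, count)
```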
Sea Anemone Culture
Nematostella embryos, larvae, and juveniles were grown in 16‰ sea salt water at 22 °C. Adults were grown at the same salinity but at 17 °C. Polyps were fed with Artemia salina nauplii three times a week. Induction of gamete spawning was performed according to a published protocol (Genikhovich and Technau 2009a). The gelatinous egg sac was removed using 3% L-cysteine (Merck Millipore, Burlington, MA), followed by microinjection of shRNAs. All Nematostella individuals used in this study belonged to the common lab strain originating from Rhode River, MD (Hand and Uhlinger 1992).
nCounter Analysis
Total RNA from different developmental stages of Nematostella was extracted as previously described (Columbus-Shenkar et al. 2018). Briefly, RNA was extracted using Tri-Reagent (Sigma-Aldrich, St. Louis, MO) according to the manufacturer's protocol, treated with TURBO DNase (Thermo Fisher Scientific, Waltham, MA) and then re-extracted with Tri-Reagent. RNA quality was assessed on a Bioanalyzer Nanochip (Agilent, Santa Clara, CA). Each sample was prepared from hundreds of specimens in order to normalize for any individual variation. Gene expression of MACs was analyzed using the nCounter platform (NanoString Technologies, Seattle, WA; performed by Agentek Ltd., Israel) as previously described (Columbus-Shenkar et al. 2018), using technical triplicates, each made from a different batch of RNA sample. Two probes were used for each MAC transcript tested. Normalization was performed using the geometric mean of the expression levels of five reference genes with stable expression across development (Columbus-Shenkar et al. 2018).
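The reference-gene normalization described above can be reproduced in a few lines; in the sketch below the gene names and counts are placeholders, not actual nCounter output, and serve only to show the geometric-mean calculation.

```python
# Sketch of the reference-gene normalization described above: each raw count is
# divided by the geometric mean of five stably expressed reference genes measured
# in the same sample. Gene names and counts are illustrative placeholders.
import math

raw_counts = {
    "NveMAC1": 850.0, "NveMAC2": 1200.0,           # genes of interest (placeholders)
    "ref1": 400.0, "ref2": 520.0, "ref3": 610.0,   # reference genes (placeholders)
    "ref4": 350.0, "ref5": 480.0,
}
reference_genes = ["ref1", "ref2", "ref3", "ref4", "ref5"]

# Geometric mean of the reference-gene counts for this sample.
geometric_mean = math.exp(sum(math.log(raw_counts[g]) for g in reference_genes)
                          / len(reference_genes))

normalized = {gene: count / geometric_mean for gene, count in raw_counts.items()}
print({gene: round(value, 3) for gene, value in normalized.items()
       if gene.startswith("NveMAC")})
```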
shRNA Generation and KD Experiments
Two shRNA precursors for each MAC gene were designed as previously described (Karabulut et al. 2019; Lewandowska et al. 2021). The reverse-complement sequences of the shRNA precursors were synthesized as DNA ultramer oligos by Integrated DNA Technologies (Coralville, IA), mixed with T7 promoter primer at a 1:1 ratio at a final concentration of 25 µM, denatured at 98 °C for 5 min, and cooled to 24 °C. shRNAs were synthesized with the AmpliScribe T7-Flash Transcription Kit (Epicentre, Charlotte, NC) for 15 h followed by a 15 min treatment with 1 µL of DNase I. The in vitro transcribed products were purified using the Quick-RNA Miniprep Kit (Zymo Research, Irvine, CA). shRNAs were used for microinjection at concentrations ranging from 400 to 1,200 ng/µL. Approximately 100 injected planulae (4 dpf) were flash frozen in liquid nitrogen, stored at −80 °C, and used for downstream qPCR analysis. MACs 1 to 4 were first targeted individually, and then MACs 1, 3, and 4 were targeted simultaneously by combining three validated shRNAs, each targeting MAC1, 3, or 4 specifically.
Reverse-Transcription Quantitative PCR
To quantify the knockdown efficiency of our shRNAs, we analyzed the expression levels of MACs using reverse-transcription quantitative PCR (RT-qPCR). A minimum of three biological replicates was used for each shRNA or combination of shRNAs. First, RNA was extracted from injected embryos following the same protocol as previously described (Lewandowska et al. 2021). Five hundred ng of RNA was converted into cDNA in a 20 μL reaction. cDNA was constructed using the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA) according to the manufacturer's protocol. Real-time PCR was prepared with Fast SYBR Green Master Mix (Thermo Fisher Scientific) on the StepOnePlus Real-Time PCR System v2.2 (ABI, Thermo Fisher Scientific). The expression levels of the tested genes were normalized to a previously validated housekeeping gene (Columbus-Shenkar et al. 2018), and relative gene expression was calculated using the 2^-ΔΔCt method. Significance was assessed by a two-tailed Student's t-test applied to the ΔCt values for each pairwise comparison to control shRNA.
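The 2^-ΔΔCt calculation and the t-test on ΔCt values can be illustrated with a short script; the Ct values below are invented placeholders used only to show the arithmetic, not measurements from this study.

```python
# Sketch of the relative-expression calculation described above (2^-ΔΔCt) and the
# two-tailed t-test applied to ΔCt values. All Ct values are invented placeholders.
import numpy as np
from scipy.stats import ttest_ind

# ΔCt = Ct(target gene) - Ct(housekeeping gene), one value per biological replicate.
dct_control   = np.array([24.1, 23.8, 24.3]) - np.array([18.0, 17.9, 18.1])
dct_knockdown = np.array([26.0, 26.4, 25.9]) - np.array([18.1, 18.0, 18.2])

# ΔΔCt relative to the mean of the control group; relative expression = 2^-ΔΔCt.
ddct = dct_knockdown - dct_control.mean()
relative_expression = 2.0 ** (-ddct)
print("mean relative expression vs control:", relative_expression.mean())

# Two-tailed Student's t-test on the ΔCt values, as in the Methods.
t_statistic, p_value = ttest_ind(dct_control, dct_knockdown)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```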
Assessment of Phenotype Following KD of MACs
We injected either a specific shRNA or a combination of shRNAs. Control shRNA was injected in parallel at an equal concentration, and morphology was tracked for injected animals until 10 dpf. Each experiment consisted of at least three biological replicates. For the combinations of shRNAs targeting MAC1, 3, and 4, 400 ng/µL of each specific shRNA was injected, whereas 1,200 ng/µL of control shRNA was injected in parallel. After 10 dpf, animal morphology was visualized under an SMZ18 stereomicroscope equipped with a DS-Qi2 camera (Nikon, Tokyo, Japan).
ISH
ISH was performed as previously described (Genikhovich and Technau 2009b). Embryos older than 4 d were treated with 2 U/µL T1 RNase (Thermo Fisher Scientific) after probe washing in order to reduce background. Stained embryos and larvae were visualized with an Eclipse Ni-U microscope equipped with a DS-Ri2 camera and Elements BR software (Nikon). For each gene, at least 20 individuals from each developmental stage were tested. The specificity of the NveMAC1 probe was confirmed by performing ISH on 4 dpf animals that were injected with either control shRNA or shRNA to knock down MAC1. This was repeated and the ratio of stained animals was compared.
Meta-analysis of Bulk RNA-seq and scRNA-seq Data
We performed a comparative analysis using previously published RNA-seq data from two reporter lines and three mutant lines. Data from both reporter lines were generated from Nematostella primary polyps that express a fluorescent transgene under either the promoter of NvNcol3::memOrange2, a nematocyte marker (Sunagar et al. 2018; Gahan et al. 2022), or NvElav1::memOrange, a neuronal marker (Nakanishi et al. 2012; Tournière et al. 2020). The data from mutant lines consisted of HOX homozygote knockout lines (ANthox1a, ANthox6a, ANthox8), which play an important role in regulating endomesodermal segmentation in Nematostella (He et al. 2023). Raw reads were downloaded from the SRA (NvNcol3::memOrange2: PRJEB40304, NvElav1::memOrange: PRJEB36771, HOX mutants: PRJNA727015). Raw reads were trimmed and quality filtered by Trimmomatic. Reads were mapped to a modified Nematostella genome. Mapping was performed using STAR and the gene counts were quantified using RSEM. Differential expression analyses were performed using scripts from Trinity with both DESeq2 and edgeR. Gene models used in all downstream analyses were from previously published annotations (Schwaiger et al. 2014). Differentially expressed genes were defined by false discovery rate (FDR) < 0.05 and fold change ≥ 2; genes identified by both methods were considered differentially expressed. Biological replicates were quality-checked for batch effects using sample correlation and PCA.
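The consensus filter described above, in which a gene is called differentially expressed only when both methods agree at FDR < 0.05 and fold change ≥ 2, can be sketched as follows; the file names are placeholders, and the column names reflect the default DESeq2 and edgeR output conventions rather than the exact tables produced by the Trinity scripts.

```python
# Sketch of the consensus filter described above: a gene is called differentially
# expressed only if both methods report FDR < 0.05 and an absolute fold change >= 2.
# File names are placeholders; column names follow DESeq2/edgeR default output.
import numpy as np
import pandas as pd

def significant_genes(path, fdr_col, lfc_col, fdr_cutoff=0.05, min_fold_change=2.0):
    """Return the set of gene IDs passing the FDR and fold-change thresholds."""
    table = pd.read_csv(path, sep="\t", index_col=0)
    passing = (table[fdr_col] < fdr_cutoff) & \
              (table[lfc_col].abs() >= np.log2(min_fold_change))
    return set(table.index[passing])

deseq2_hits = significant_genes("deseq2_results.tsv", fdr_col="padj", lfc_col="log2FoldChange")
edger_hits = significant_genes("edger_results.tsv", fdr_col="FDR", lfc_col="logFC")

consensus = deseq2_hits & edger_hits
print(f"{len(consensus)} genes called differentially expressed by both methods")
```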
Structural Predictions
AlphaFold2 was used to model the structures of P. semoni MACs and NveMACs 1 to 7 (Jumper et al. 2021). The top-ranked AlphaFold2 model for each MAC was used as a query to search the AlphaFold/UniProt50 and Protein Data Bank (PDB) databases using the Foldseek webserver in TM-align mode (van Kempen et al. 2023). Predicted structure figures were generated using PyMOL version 2.4.0 (Schrödinger, LLC).
Fig. 1. Phylogeny of MAC genes across Anthozoa. Maximum-likelihood tree of the MACs in anthozoans. The sequences found in nematocytes appear in bold with a magenta/elliptical nematocyte cartoon. Nematocyte-specific expression is characterized using cell atlases from different anthozoans including Xenia sp., S. pistillata, E. diaphana, and Nematostella (Hu et al. 2020; Levy et al. 2021; Steger et al. 2022; Cui et al. 2023). Additional NveMACs are found to have nematocyte-specific expression using bulk RNA-seq of the Nematostella transgenic line expressing NvNcol3::memOrange2, a nematocyte marker (Sunagar et al. 2018; Gahan et al. 2022). Sequences found to have high levels of expression in the Endo-atlas of Nematostella (He et al. 2023) also carry a gray/circular cartoon representing the endomesodermal segments. MACs found in the venom and isolated from nematocytes of anthozoans are highlighted with an asterisk as well as in bold with a magenta nematocyte cartoon. Ultrafast bootstrap values and SH-like approximate likelihood ratio test values above 85 are indicated. Taxon identifiers can be found in supplementary table S1, Supplementary Material online. Silhouettes were made with BioRender.com.
Fig. 2. Schematic representation depicting the MAC gene cluster and gene synteny among anthozoans. Dashed lines between N. vectensis and S. callimorphus represent a cluster that shares macrosynteny. Created with BioRender.com.
Fig. 4. Knockdown of MACs in Nematostella. qRT-PCR for NveMACs at the late planula stage after shRNA injection. The graph shows the relative fold change in expression between control shRNA and the combined knockdown of a) NveMAC1, b) NveMAC3, and c) NveMAC4, using sequence-specific shRNA. The values for the individual replicates are shown as circles. The mean difference is depicted as a dot; the 95% confidence interval is indicated by the ends of the vertical error bar. d) Quantification of normal polyp development following the injection of control shRNA or simultaneous knockdown of NveMACs 1, 3, and 4 in 10 dpf polyps. e) Control shRNA. f) NveMACs 1, 3, and 4 shRNA. NS, not significant; *P-value < 0.05; **P-value < 0.01; ***P-value < 0.001. Scale bar represents 200 μm.
Fig. 5. Reconstruction of the evolutionary history and possible scenarios for the origin of NveMACs, as either reverse recruitment (a) or subfunctionalization (b). Horizontal arrows represent duplication events. Nematocyte expression is represented by magenta boxes and a nematocyte cartoon; endomesodermal expression is represented by gray boxes and a cartoon of the endomesodermal segments. Expression in both cell types is represented by boxes containing both pink and gray and by cartoons depicting both a nematocyte and the endomesodermal segments.
Kinetic Analysis of Yersinia pestis DNA Adenine Methyltransferase Activity Using a Hemimethylated Molecular Break Light Oligonucleotide
Background: DNA adenine methylation plays an important role in several critical bacterial processes including mismatch repair, the timing of DNA replication and the transcriptional control of gene expression. The dependence of bacterial virulence on DNA adenine methyltransferase (Dam) has led to the proposal that selective Dam inhibitors might function as broad spectrum antibiotics. Methodology/Principal Findings: Herein we report the expression and purification of Yersinia pestis Dam and the development of a continuous fluorescence based assay for DNA adenine methyltransferase activity that is suitable for determining the kinetic parameters of the enzyme and for high throughput screening against potential Dam inhibitors. The assay utilised a hemimethylated break light oligonucleotide substrate containing a GATC methylation site. When this substrate was fully methylated by Dam, it became a substrate for the restriction enzyme DpnI, resulting in separation of fluorophore (fluorescein) and quencher (dabcyl) and therefore an increase in fluorescence. The assays were monitored in real time using a fluorescence microplate reader in 96 well format and were used for the kinetic characterisation of Yersinia pestis Dam, its substrates and the known Dam inhibitor, S-adenosylhomocysteine. The assay has been validated for high throughput screening, giving a Z-factor of 0.71±0.07, indicating that it is a sensitive assay for the identification of inhibitors. Conclusions/Significance: The assay is therefore suitable for high throughput screening for inhibitors of DNA adenine methyltransferases and the kinetic characterisation of the inhibition.
INTRODUCTION
N6-methylation of adenine in DNA requires S-adenosylmethionine (AdoMet) as the methyl group donor and is catalysed by a family of DNA adenine methyltransferases (Dam) found in bacteria and viruses [1,2]. Dam is a member of this family that selectively modifies the adenine base in the sequence GATC and is responsible for most of the DNA methylation in Escherichia coli [3]. Unlike many other DNA adenine methyltransferases, Dam does not form part of a restriction modification system, but it has a pervasive influence in regulating many cellular processes including mismatch repair [4] and the timing of DNA replication [5]. Dam has also been implicated as a global regulator of gene expression: for example the control of the expression of several important genes in E. coli [6,7], the Yersinia outer proteins [8] and at least 20 genes in Salmonella typhimurium [9].

Manipulating the levels of Dam in bacteria has been shown to reduce bacterial virulence. Dam-deficient mutants of Salmonella typhimurium [9], Yersinia pseudotuberculosis [10] and Yersinia pestis [11] show reductions in virulence of 10,000-fold, 1 million-fold and >2,300-fold, respectively. It has also been observed that inactivation of the dam gene is lethal to some bacteria, for example Y. pseudotuberculosis YPIII and Vibrio cholerae [8].

The degree of attenuation afforded by manipulation of the dam gene has led to the proposal that Dam inhibitors could find application as broad spectrum antibiotics [9,12,13]. To identify compounds with an inhibitory effect, a reliable assay for Dam activity is required, ideally one readily adapted to high throughput screening. Perhaps the most reliable reported method for measurement of Dam activity involved monitoring the incorporation of a 3H-labelled methyl group (transferred from 3H-labelled AdoMet) into an adenine residue within a DNA substrate, typically either calf thymus DNA [14] or a short double stranded oligonucleotide [15]. This method has allowed the kinetic analysis of a number of DNA adenine methyltransferases, including those from E. coli [16,17], bacteriophage T2 [18] and bacteriophage T4 [19], but using this type of discontinuous assay for detailed kinetic studies is labour intensive.

Recently two assays have been reported that couple methylation sensitive restriction enzymes to a fluorescence signal. Both of these assays utilise 'molecular break light' oligonucleotides [20]. Based on the molecular beacon design [21], they are single stranded, self complementary oligonucleotides which adopt a hairpin loop structure with a fluorophore at the 5′ terminus and a quenching dye at the 3′ terminus. Initially the stem of the oligonucleotide keeps the fluorophore and quencher in close proximity, promoting efficient quenching. Cleavage of the stem leads to separation of the fluorophore and quencher, resulting in an increase in fluorescence which is directly related to the quantity of cleaved DNA. The break light assay has been applied to several examples of DNA modification and cleavage, such as measuring the activity of restriction enzymes [12], nucleases [22,23] and DNA cleavage by enediynes [20].
By combining the use of methylation sensitive restriction enzymes with a molecular break light oligonucleotide, two alternative assay formats have been developed by Mashhoon et al. [12] and Li et al. [24]. Mashhoon et al. used a break light oligonucleotide in a protection assay. In the first stage, Dam was allowed to methylate the oligonucleotide, and potential Dam inhibitors could be added at this step. In a second step, the oligonucleotide was subjected to cleavage by the restriction endonuclease DpnII (which only restricted the unmethylated DNA). By measuring the resultant fluorescence, the proportion of DNA protected by Dam methylation could be estimated. The assay was suitable for high throughput screening, allowing the relative potency of a library of Dam inhibitors to be compared to a standard. However, the assay is discontinuous and did not permit the rate of methylation to the symmetrical product to be determined.

The principal limitation of this assay is that it is discontinuous, making detailed kinetic analysis rather time consuming. The next logical development was a continuous assay for Dam activity in a format suitable for high throughput analysis. In a recent publication, Li et al. [24] described a continuous fluorescence Dam activity assay. This monitored the methylation of an unmethylated break light oligonucleotide in a coupled assay. Two methylation steps are required to make the fully methylated oligonucleotide, which can then be restricted by DpnI, resulting in a proportional increase in fluorescence. Unfortunately, the kinetics of the first methylation to the intermediate hemimethylated oligonucleotide result in a significant lag phase in the observed fluorescence signal, so that the assay is not convenient for determining initial rates of reaction, and Li et al. do not report kinetic parameters such as K_M or V_max for Dam substrates, or the K_i for potential Dam inhibitors.

To kinetically characterise Dam and its substrates or inhibitors, it was essential to develop an assay that reported the initial rate of reaction. To do this, we have used a hemimethylated break light oligonucleotide which upon turnover yields the fully methylated product. This is cleaved in situ by the restriction enzyme DpnI, resulting in an increase in fluorescence. Having established a direct relationship between the observed fluorescence increase and the methylation activity of Dam with this hemimethylated substrate, we have used the assay to determine the K_M and V_max of Y. pestis Dam for S-adenosylmethionine and the oligonucleotide substrate. To demonstrate the convenience of this assay, we have used it to determine the K_i of the known Dam inhibitor, S-adenosylhomocysteine, with Y. pestis Dam.
RESULTS
Expression and purification of His6-tagged Y. pestis Dam

Initial attempts at expression of Y. pestis Dam in E. coli BL21(DE3) resulted in low protein yields and the attenuation of cell growth upon induction with arabinose. As E. coli was observed to be very sensitive to the intracellular Dam concentration, Dam was expressed in the dam-3 and end-1 strain GM215 [25], which did not show the attenuation of cell growth to the same extent observed in BL21(DE3). Large scale cell growth followed by induction at 37 °C and cell harvest two hours after induction was found to result in the highest yield of active protein. During the optimisation of purification of Y. pestis Dam, a rapid thermal inactivation was observed. It was therefore necessary to minimise the time taken to purify the protein, which was shortened to a single nickel affinity purification step. The highest yield and purity of protein was achieved at pH 9.0. To ensure the purified Dam had the highest possible specific activity, the period of dialysis was kept as short as possible and the majority of the imidazole removed by two short dialysis steps (2 × 30 min) at 4 °C, after which aliquots of protein were flash frozen on dry ice and stored at −80 °C. The yield of Dam was consistently 1.5 mg purified protein per litre of cell culture.
Continuous break light assay for Y. pestis Dam activity
To allow continuous monitoring of Dam methylation, hemimethylated oligonucleotide 1 (Fig. 1) was used in the coupled enzyme assay. The kinetics of the reaction were greatly simplified by the use of this hemimethylated substrate oligonucleotide, which allowed a single methylation step (Fig. 1, 1→2) to be directly coupled to the DpnI cleavage. The hemimethylated substrate has the added advantage of being the natural substrate for the enzyme during DNA replication [26]. Under the conditions of the assay, DpnI preferentially cleaves the doubly methylated oligonucleotide, leading to a direct relationship between each single methylation event and the fluorescence increase. Example assays are shown in Fig. 2, where the ability of Dam to methylate oligonucleotide 1 was followed by measuring the fluorescence increase in assays containing 0, 0.31 and 0.61 nM Dam and a large excess of DpnI. During the timecourse, the fluorescence intensity increased in the assays containing Dam. Rates of fluorescence increase were highest immediately after initiation, but then slowly decreased. The initial rates of fluorescence increase (up to 180 s) were proportional to the concentration of Dam: for example, at a Dam concentration of 0.31 nM, the rate of fluorescence change was 17.1 ± 2.12 arbitrary units/s, but when the concentration of Dam was doubled to 0.61 nM, the rate of fluorescence increase also doubled, to 34.7 ± 1.82 arbitrary units/s. A background increase in fluorescence was observed in negative control assays (lacking Dam), with an initial rate of fluorescence change of 4.68 ± 1.72 arbitrary units/s. This slow background cleavage is probably due to DpnI cleavage of hemimethylated oligonucleotide 1 [27]. The addition of sodium chloride to the assays has been demonstrated to increase the specificity of DpnI for the doubly-methylated substrate [28,29]. The sodium chloride concentration was therefore optimised to 20 mM, minimising background cleavage by DpnI in the assay.

Calibration of oligonucleotide fluorescence

To convert fluorescein emission into a concentration of cleaved oligonucleotide 1, a calibration curve of DpnI-cleaved oligonucleotide 2, a doubly methylated analogue of oligonucleotide 1, was plotted. Concentrations of oligonucleotide used were 0-3.5 nM and the reaction endpoint fluorescence was used for the calibration. The variation of fluorescence intensity with concentration of oligonucleotide was fitted to a linear function (R^2 = 0.998), and the measured gradient of oligonucleotide 2 fluorescence increase was (9.3 ± 0.3) × 10^3 arbitrary units/nM.

Effect of substrates on Dam activity

Y. pestis Dam was observed to be sensitive towards inactivation, and it was of interest to determine the kinetics of this process and to investigate the ability of substrates to stabilise the enzyme. To better understand this inactivation process, Dam was incubated for 0-500 s at 30 °C, without substrates or in the presence of either 30 nM oligonucleotide 1 or 120 µM AdoMet. Aliquots of Dam were withdrawn throughout the time course and then assayed for activity. The first order rate constants (k_inact) for enzyme inactivation were obtained by fitting the results to an exponential function (Fig. 3) and half lives calculated from t_1/2 = 0.69/k_inact.

When incubated alone, Dam was very unstable, with a half life of 42.5 ± 7.5 seconds. Both substrates had a protective effect on the methyltransferase activity, increasing the half life of the enzyme by 3-50 fold. Incubation with oligonucleotide 1 had the greatest protective effect, increasing the enzyme half life to >2,300 seconds. At least one other example of methyltransferase instability has been reported, for the Caulobacter crescentus cell cycle regulated adenine N6 methyltransferase (CcrM), for which a k_inact of 2.3 × 10^-3 s^-1 at 30 °C was observed in the absence of substrates [30]. CcrM was also stabilized by the addition of a saturating concentration of DNA.
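The following Python sketch illustrates how a first-order inactivation constant and half-life of this kind can be obtained by fitting residual activities to an exponential; it is not the authors' analysis, and the activity values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_decay(t, a0, k_inact):
    """Residual Dam activity after preincubation time t, assuming first-order inactivation."""
    return a0 * np.exp(-k_inact * t)

# Hypothetical residual activities (fraction of the initial rate) after preincubation
t = np.array([0, 60, 120, 180, 300, 500])            # seconds
activity = np.array([1.00, 0.38, 0.15, 0.05, 0.01, 0.00])

(a0, k_inact), _ = curve_fit(exponential_decay, t, activity, p0=(1.0, 0.01))
half_life = 0.69 / k_inact                            # t1/2 = ln(2)/k_inact, as in the text
print(f"k_inact = {k_inact:.4f} s^-1, half-life = {half_life:.0f} s")
```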
Kinetic analysis of Y. pestis Dam using a break light assay
Our break light assay was used to determine K_M values for AdoMet and for the DNA substrate, oligonucleotide 1. The dependence of Dam activity on AdoMet concentration was fitted to a hyperbola (Fig. 4A). The measured K_M for AdoMet was 11.3 ± 0.63 µM, which is two times higher than that reported for E. coli Dam [17] (Table 1).

For the DNA substrate, data were also fitted to a hyperbola (Fig. 4B) to give a K_M for DNA of 3.43 ± 1.68 nM. The Y. pestis enzyme shows a considerably lower K_M for DNA than that reported for the E. coli protein [17]. The observed k_cat for Y. pestis Dam is 0.55 ± 0.01 min^-1, a similar rate to that reported for both the E. coli and T4 DNA adenine methyltransferases [31]. The errors in the K_M for AdoMet and k_cat were comparable to those obtained using the tritium labelled filter binding assay. The observed instability of Y.
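A minimal sketch of the hyperbolic (Michaelis-Menten) fit used to extract K_M and V_max is shown below; the rates and concentrations are hypothetical and merely chosen to resemble the reported K_M for AdoMet.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Hyperbolic dependence of the initial rate on substrate concentration."""
    return vmax * s / (km + s)

# Hypothetical initial rates (nM methylated per min) at varying AdoMet concentrations (uM),
# with the oligonucleotide held at a fixed, near-saturating concentration
adomet = np.array([1, 2.5, 5, 10, 20, 50, 120, 235])
rate = np.array([0.08, 0.17, 0.28, 0.44, 0.60, 0.76, 0.85, 0.89])

(vmax, km), _ = curve_fit(michaelis_menten, adomet, rate, p0=(1.0, 10.0))
print(f"Vmax = {vmax:.2f} (rate units), KM(AdoMet) = {km:.1f} uM")
```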
Measurement of K i of S-adenosylhomocysteine for Y. pestis Dam
The effect of a known Dam inhibitor, S-adenosylhomocysteine, on Y. pestis Dam was investigated using the break light assay. From the data shown in Fig. 5, the K_i was calculated to be 6.93 ± 2.01 µM, which is lower than the reported value for the E. coli enzyme (K_i = 41.6 ± 10.4 µM). These experiments demonstrate the application of the assay to the detailed kinetic analysis of a single inhibitor.
High throughput screening assay validation
The use of the hemimethylated oligonucleotide substrate for high throughput screening was validated using the method of Zhang [32]. Data were collected from three 96 well plates containing positive controls (normal assay conditions, no inhibitor) and negative controls (assays containing no Dam) to assess the reproducibility and distribution of the rate of fluorescence increase at the two activity extremes. From the distribution of the rates of fluorescence increase (Fig. 6), the 99.7% confidence limits (3 standard deviations from the mean) for the control assays show that the assay has a high separation band, indicating that it is a highly sensitive assay for inhibitor identification. The data give an average screening window coefficient (Z-factor) of 0.71 ± 0.07 over three 96 well plates, implying that it is an excellent assay for hit detection.
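The screening window coefficient of Zhang et al. [32] is straightforward to compute from the positive- and negative-control rates; the sketch below, with simulated well data, is given only to make the definition concrete.

```python
import numpy as np

def z_factor(positive, negative):
    """Screening window coefficient of Zhang et al.:
    Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (np.std(positive, ddof=1) + np.std(negative, ddof=1)) \
                 / abs(np.mean(positive) - np.mean(negative))

# Hypothetical rates of fluorescence increase (arbitrary units/s) from one plate:
# wells 1-48 = positive controls (uninhibited Dam), wells 49-96 = no-Dam controls
rng = np.random.default_rng(0)
positive_wells = rng.normal(loc=30.0, scale=1.5, size=48)
negative_wells = rng.normal(loc=4.5, scale=0.8, size=48)

print(f"Z-factor = {z_factor(positive_wells, negative_wells):.2f}")
```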
DISCUSSION
Our initial investigations into the use of a DpnI cleavage assay to monitor Dam methylation began with a non-methylated break light oligonucleotide using a similar substrate to that of Li et al. [24].We found that the methylation kinetics of this substrate were too complicated to be simply resolved to allow a detailed kinetic analysis of the Dam methylation reaction.To kinetically characterise Dam and its substrates or inhibitors, it was essential to develop an assay that reported the initial rate of reaction.To do this, we have used a hemimethylated break light oligonucleotide which upon turnover yields the fully methylated product.This is cleaved in situ by the restriction enzyme DpnI, resulting in an increase in fluorescence.Having established a direct relationship between the observed fluorescence increase and the methylation activity of Dam with this hemimethylated substrate, we have used the assay to determine the K M and V max of Y. pestis Dam for AdoMet and the oligonucleotide substrate.To demonstrate the convenience of this assay, we have used it to determine the K i of the known Dam inhibitor, S-adenosylhomocysteine with Y. pestis Dam.This new approach has distinct practical advantages: the assay is simple to use, highly reproducible and gives real time continuous data which correlates directly to each methylation event giving the potential to allow more accurate determination of initial rates over shorter time scales.It permits the rapid determination of kinetic parameters and the K i for potential inhibitors, providing a safer alternative to the radioactive filter binding assay.
The assay has been validated for high throughput screening, giving a Z-factor of 0.71 ± 0.07, indicating that it is a highly sensitive and reproducible assay for hit detection and is sufficiently flexible to permit its application to screening large compound libraries for Dam inhibitors.
Materials
Expression vector pBAD/HisA and E. coli TOP10 competent cells were obtained from Invitrogen (Groningen, NL). DpnI was obtained from New England Biolabs (Herts., UK). Y. pestis was manipulated using suitable biological safety precautions and containment facilities. Y. pestis genomic DNA was isolated from Y. pestis strain GB using a PureGene DNA isolation kit from Gentra Systems (Minneapolis, USA). Other DNA was isolated from cultures or agarose gels using commercially available kits following the manufacturer's instructions. E. coli GM215 was obtained from the Yale E. coli Genetic Stock Center (CGSC# 6645). PCR primers were purchased from Sigma-Genosys (Haverhill, UK). For fluorescence measurements, oligonucleotides 1 and 2 were purchased from ATDBio (Southampton, UK), flat bottomed black 96 well polypropylene and half area flat bottomed 96 well polystyrene microplates were purchased from Greiner Bio-One Ltd (Stonehouse, UK), and measurements were taken using a Tecan Safire 2 microplate reader (Reading, UK). Assays for high throughput screening were prepared using a Beckman Coulter Biomek 3000 liquid handling system equipped with a 200 µl single channel and a 20 µl 8 channel pipette head. Mineral oil (sterile filtered, mouse embryo tested, light oil) and S-adenosylmethionine chloride were obtained from Sigma Aldrich (Poole, UK). Bovine serum albumin (BSA) was purchased from Advanced Protein Products Ltd (Brierley Hill, UK).
Construction of pRJW4213/07 for the expression of Y. pestis Dam
DNA manipulations were carried out using standard protocols [33]. The dam gene was amplified by PCR from Y. pestis GB genomic DNA using Pfu Turbo polymerase and the following oligonucleotide primers: pfdam 5′ GGCGGCCCATGGGCCACCACCACCACCACCACAAGAAAAACCGCGCTTTTTTAAAATGG and prdam 5′ GCCGCCAAGCTTTCAGCTATAGAGCGCCAAAAG. The dam gene was amplified adding an NcoI site (italics) and DNA encoding a His6 tag (bold) to the 5′ end of the amplified product, and a HindIII site (italics) at the 3′ end. The PCR amplification product was purified and the NcoI-HindIII fragment containing the modified dam gene inserted by ligation between unique NcoI and HindIII sites of pBAD/HisA to yield plasmid pRJW4213/07.
Expression and purification of Y. pestis Dam
All purification steps were carried out at 4 °C and samples centrifuged in a Beckman JA-14 rotor unless otherwise stated. All cell culture media contained 100 µg/ml ampicillin. Cell pastes and purified proteins were stored at −80 °C. Protein purity was judged by SDS-PAGE with Coomassie staining.

2YT medium (100 ml) was inoculated from stored strains (pRJW4213/07 in E. coli strain GM215) and grown overnight in a shaking incubator at 37 °C and 180 rpm. This overnight culture was used as a 1% inoculum into 4 × 1250 ml fresh medium and grown until the OD600 reached 0.6. The cultures were induced by addition of 10 ml/l of a filter sterilised 20% w/v arabinose solution and growth continued at 37 °C for two hours. Cells were harvested by centrifugation at 8000 rpm, 4 °C, for 12 minutes and the cell paste, typically 20 g, was stored at −80 °C until required.

Dam was purified from 10 g of cell paste resuspended in 30 ml of buffer A [50 mM Tris/HCl (pH 9.0), 50 mM imidazole, 300 mM NaCl, 0.05% v/v Triton X-100, 10% w/v glycerol, 10 mM 2-mercaptoethanol] and 0.3 ml of a 10 mg/ml lysozyme solution added. The suspension was stirred for 20 minutes at 4 °C and then sonicated 25 times in 5 second bursts. The lysate was cleared by centrifugation at 12,000 rpm and 4 °C for 30 min. The supernatant was applied to a nickel-charged chelating sepharose FF column (5 ml bed volume) previously equilibrated in buffer A. The column was washed with 200 ml of buffer A and the proteins eluted with a 30 ml gradient to 100% buffer B (buffer A plus 500 mM imidazole). The purest fractions (8 ml) were pooled and dialysed twice for 30 min at 4 °C against 500 ml buffer C [50 mM Tris/HCl (pH 7.5), 200 mM NaCl, 0.2 mM EDTA, 20% w/v glycerol, 2 mM dithiothreitol]. The purification yielded 1.5 mg Dam from 10 g cell paste, and aliquots of protein solution (100 µl) were immediately frozen at −80 °C.
Dam activity assay
Fluorescence changes were recorded in a Tecan Safire 2 microplate reader using 10 readings per well (each measurement), 0.5 s between each movement and reading, and 1 second of shaking between data collections with 7 seconds of settle time. The following instrument settings were used: the excitation wavelength was 486 nm, the emission wavelength was 518 nm, the bandwidth was 5 nm, the gain was 200, the Z-position was 9300 µm and the integration time was 40 µs. Break light oligonucleotide sequences used in the assay were: oligonucleotide 1, 5′ C(F)CGGA(m)TCCAGTTTTCTGGATCCGG(D) 3′; oligonucleotide 2, 5′ C(F)CGGA(m)TCCAGTTTTCTGGA(m)TCCGG(D) 3′, where the Dam recognition sequences are shown in bold, A(m) denotes N6-methyladenine, (F) represents fluorescein and (D) represents a dabcyl quencher. The activity of Dam was measured in triplicate in Greiner flat bottomed black 96 well polypropylene microplates, with a total assay volume of 200 µl, maintained at 37 °C. Three buffers were required: buffer D [22.4 mM Tris-acetate (pH 7.9), 23.5 mM sodium chloride, 56 mM potassium acetate, 11.2 mM magnesium acetate, 1.12 mM dithiothreitol, 0.12 mg/ml BSA], which could be modified by the addition of between 0 and 235 µM AdoMet and between 0 and 35 nM oligonucleotide 1; buffer E [20 mM Tris-acetate (pH 7.9), 50 mM potassium acetate, 10 mM magnesium acetate and 1 mM dithiothreitol]; and buffer F [48 mM Tris/HCl (pH 7.4), 9.6 mM EDTA, 4.8 mM 2-mercaptoethanol, 0.4 mg/ml BSA and 30 nM oligonucleotide 1]. Dam solution was prepared by defrosting an aliquot of Dam stock (as purified, 0.17 mg/ml) on ice for 10 minutes and diluting it 510-fold into buffer F. DpnI solution was prepared by diluting the stock solution to 1 U/µl in buffer E. 170 µl of buffer D was added to each well of the plate and overlaid with 3 drops of mineral oil. The plate was then equilibrated at 37 °C for 15 minutes and the reaction initiated by the addition of 10 µl of DpnI solution and 20 µl of Dam solution to each well. Fluorescein emission was then monitored over time. The rate of reaction was calculated by taking the initial rate of fluorescence change over 180 seconds unless otherwise stated. Background changes in fluorescence were accounted for by subtracting a negative control (lacking Dam) when appropriate. The rate of change in fluorescence was converted to a rate of reaction using a fluorescence calibration curve. Data were fitted with the program SigmaPlot.
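As a worked illustration of this final conversion step (the values are illustrative, not measured data), an observed rate of fluorescence increase can be turned into a methylation rate by background subtraction and division by the calibration gradient:

```python
# Convert an observed rate of fluorescence increase into a methylation rate
# using the gradient of the cleaved-oligonucleotide calibration curve.
# The numerical values below are illustrative only.
calibration_gradient = 9.3e3   # arbitrary fluorescence units per nM cleaved oligonucleotide
background_rate = 4.7          # arbitrary units/s, no-Dam control
observed_rate = 34.7           # arbitrary units/s, assay containing Dam

methylation_rate_nM_per_s = (observed_rate - background_rate) / calibration_gradient
print(f"initial rate = {methylation_rate_nM_per_s * 1e3:.2f} pM methylated oligonucleotide per second")
```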
Inactivation of Y. pestis Dam
The inactivation was monitored at 30 °C under three conditions: in the absence of substrates, with 30 nM oligonucleotide 1, or with 120 µM AdoMet. The Dam solution was prepared by diluting Dam (as purified) 213-fold in buffer F lacking DNA. To 200 µl of Dam solution was added 100 µl of DpnI solution and the resultant mixture aliquoted into PCR tubes (35 µl in each). The PCR tubes were maintained at 30 °C in a PCR machine and aliquots withdrawn at the required time points. Aliquots were then rapidly cooled in an ice bath and the activities assayed at the end of the timecourse.
Inhibition of Dam by S-adenosylhomocysteine
Reactions contained 1 nM Dam and 33 nM oligonucleotide 1, with varying AdoMet and S-adenosylhomocysteine concentrations: S-adenosylhomocysteine at 0, 5, 10, 15, 20, 30 and 40 µM, and AdoMet at 20, 25, 30, 40, 70, 140 and 200 µM. The K_i for S-adenosylhomocysteine was estimated as follows: a double reciprocal plot (Fig. 5A) of 1/rate of reaction against 1/concentration of AdoMet has a slope of K_M,app/V_max, yielding a series of values for the apparent K_M (K_M,app) at different inhibitor concentrations. A plot of K_M,app against the concentration of S-adenosylhomocysteine (Fig. 5B) has intercepts of K_M on the K_M,app axis and -K_i on the S-adenosylhomocysteine concentration axis.
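For a competitive inhibitor, K_M,app increases linearly with inhibitor concentration (K_M,app = K_M(1 + [I]/K_i)), so K_i follows directly from a linear fit of K_M,app against [S-adenosylhomocysteine]. The sketch below illustrates this with hypothetical K_M,app values:

```python
import numpy as np

# Apparent KM values (uM) obtained from the double-reciprocal plots at each
# S-adenosylhomocysteine concentration (uM); the numbers here are hypothetical.
sah = np.array([0, 5, 10, 15, 20, 30, 40])
km_app = np.array([11.3, 19.5, 27.6, 35.8, 44.0, 60.3, 76.6])

# KM.app = KM * (1 + [I]/Ki) is a straight line in [I]
slope, intercept = np.polyfit(sah, km_app, 1)
km = intercept            # KM.app at [I] = 0
ki = intercept / slope    # the x-intercept of the line lies at [I] = -Ki
print(f"KM = {km:.1f} uM, Ki(SAH) = {ki:.1f} uM")
```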
High throughput screening validation conditions
The activity of Dam was measured in half area Greiner flat bottomed black 96 well polystyrene microplates, with a total assay volume of 100 µl, maintained at 30 °C. Four buffers were required: buffer G [buffer D containing 5.9 µM AdoMet and 35 nM oligonucleotide 1 (final concentrations of 5 µM and 33 nM respectively in the assay)]; buffer H [buffer E containing 1 U/µl DpnI (final concentration 5 U per assay)]; buffer I [buffer F containing 20 nM Dam (for a final concentration of 2 nM)]. Per 96 well plate, 1160 µl of buffer I was mixed with 580 µl of buffer H to make buffer J.

Using a Biomek 3000 liquid handling system, 85 µl of buffer G was added to each well. The plate was then equilibrated at 30 °C for 20 minutes and the reaction initiated by the addition of 15 µl of buffer J per well. The plate was then immediately transferred to a Tecan Safire 2 microplate reader and fluorescein emission monitored over time. The following instrument settings were used: 10 readings per well (each measurement), 0 s between each movement and reading, and no shaking between data collections. The excitation wavelength was 486 nm with a bandwidth of 9 nm, the emission wavelength was 518 nm with a bandwidth of 20 nm, the gain was 130, the Z-position was 9300 µm and the integration time was 40 µs. The rate of reaction was calculated by taking the initial rate of fluorescence change over the first 675 seconds. The rate of fluorescence increase was estimated by fitting the data to a linear trend line with the program Excel.
Figure 1. Break light Dam activity assay. The fluorescence of the hemimethylated substrate oligonucleotide 1 is internally quenched by the dabcyl group. It is a substrate for Dam and yields the fully methylated product 2, which is rapidly cleaved by DpnI, thus forming fluorescent oligonucleotide 3. doi:10.1371/journal.pone.0000801.g001
Figure 6. High throughput assay validation. The rate of fluorescence increase was monitored over 96 wells (wells 1-48 contained positive control assays, wells 49-96 contained negative control assays with no Dam). The solid trend lines indicate the mean rate of fluorescence increase for the positive/negative control datasets; the broken lines indicate ±3 standard deviations from the mean, which is the 99.7% confidence limit. doi:10.1371/journal.pone.0000801.g006
The Effect of Perceived Overqualification on Creative Performance: Person-Organization Fit Perspective
In today’s business world, the phenomenon of overqualification is widespread. Organizations need to consider – how to motivate the overqualified employees to utilize their qualifications, for example, promoting creative performance. Based on person-organization (P-O) fit theory, this study explored when and how employees, who feel overqualified can engage in creative performance. Data were collected from 170 supervisor-subordinate dyads of 41 groups in 10 manufacturing companies across two timepoints in China. Results revealed that (1) perceived overqualification is positively related to organizational identification when employee development-oriented organizational culture is strong but negatively related to organizational identification when employee development-oriented organizational culture is weak, (2) organizational identification is positively related to creative performance, (3) the indirect relationship between perceived overqualification and creative performance via organizational identification is moderated by employee development-oriented organizational culture. We provide several theoretical contributions to the overqualification literature and make some suggestions to motivate the overqualified employees to use their surplus qualifications within the organizations.
INTRODUCTION
In global organizations, a large number of employees report that their skills, experiences, or education exceed their work requirements (Feldman, 1996; Quintini, 2011; Hu et al., 2015; Erdogan et al., 2018b). The phenomenon of employees feeling overqualified seems to be increasingly prevalent around the world. It is estimated that 20-25% of employees in the United States feel overqualified (Feldman and Turnley, 1995), while the proportion in Britain and Canada is higher (Battu and Sloane, 2002). Some scholars believe the proportion of overqualified employees is likely to increase in the future and call for attention to the impact of perceived overqualification on performance (Liu and Wang, 2012). The literature to date on the outcomes of perceived overqualification has yielded mixed findings. For instance, some scholars pointed out that perceived overqualification is related to poor performance, low organizational commitment, low job satisfaction, and high turnover intentions (Kristof, 1996; Bolino and Feldman, 2000; Erdogan and Bauer, 2009; Maynard and Parfyonova, 2013). However, others alleged that overqualified employees might perform better than just-qualified employees because they possess higher levels of work ability and work experience (Holtom et al., 2002; Fine, 2007; Erdogan and Bauer, 2009); the latter argued that perceived overqualification is positively related to performance. Given this mixed evidence, most of the literature suggests that the inconsistent results of perceived overqualification likely depend on contextual factors (Erdogan et al., 2018a). Therefore, it is essential to explore how organizations can encourage overqualified employees to use their surplus qualifications constructively (i.e., engage in creative performance). Creative performance is one of the most important factors affecting the sustainable development of a company (Mone et al., 1998), and organizations constantly seek ways to improve creative performance and maintain a sustained competitive advantage in today's marketplace (Carmeli et al., 2013). Moreover, the improvement of creative performance requires employees to have extra ability and skills to engage in creative activities besides their core duties; that is, overqualified employees are more likely to produce creative performance. However, the relationship between perceived overqualification and creative performance has been largely ignored. Thus, we attempted to explore the internal mechanism and contextual conditions of the perceived overqualification-creative performance link.
Perceived overqualification can be considered as a type of person-job (P-J) misfit (Van Der Vegt and Bunderson, 2005;Liu and Wang, 2012), which may harm employees' work attitude and behavior (i.e., low creative performance) (Erdogan et al., 2018a). However, Kristof-Brown pointed out that P-J fit will interact with person-organization (P-O) fit to affect employees' performance (Kristof-Brown et al., 2005). Hence, we assume P-O fit may improve overqualified employees' creative performance even if the P-J misfit still exists. Employee development-oriented organizational culture encourages employees to develop their skills and provides them with opportunities for training and development, which is consistent with the personal values of overqualified employees (Tsui et al., 2006). As an effect way of P-O fit, employee development-oriented organizational culture have similarities in value consistency with organizational identification (Ashforth and Mael, 1989;Kristof-Brown et al., 2005). Therefore, we can infer that employee developmentoriented organizational culture may have a great impact on the relationship between perceived overqualification and organizational identification. Further, creative performance is the product of the combination of employee desire, employee capability, and related conditions (He and Brown, 2013;Man et al., 2020;Xie et al., 2020). Overqualified employees have more abilities and experience than others to participate in creative activities. Therefore, it is critical to study the internal mechanism of employees' creative motivation. P-O fit is an imperative influencing factor of organizational identification in the sense that people with high organizational identification may have more personal motivation in contributing to the organization (De Cremer and Van Vugt, 1999;Van Knippenberg, 2000). Hence, we assume organizational identification is the key to the internal mechanism of perceived overqualification influencing creative performance.
Taken together, grounded in the P-O fit theory, we developed a moderated mediation model and predicted the relationship between perceived overqualification and creative performance. We assumed that perceived overqualification has an indirect influence on creative performance via organization identification when employee development-oriented organizational culture is strong or weak.
This study made contributions to the present literature in the following specific ways. Firstly, we examine a key boundary condition of the relationship between perceived overqualification and creative performance. By examining employee development-oriented organizational culture as a moderator, we contribute to the overqualification literature and explicate how organizations can benefit from overqualified employees. Secondly, we contribute to P-O fit literature by testing the interaction between perceived overqualification and organizational culture on organizational identification. We found P-O fit and P-J fit have complex interactions, which may affect employees' emotional perception and behavior. Thirdly, we try to extend the literature by explicitly examining organizational identification as a mediating mechanism in the perceived overqualification-creative performance relationship.
THEORETICAL BACKGROUND
Overqualification refers to the extent to which employees possess more education, experiences, or skills relative to the requirements of their job (Maynard et al., 2006;Erdogan and Bauer, 2009;Ma et al., 2020a,b). Previous research has distinguished between perceived overqualification and objective overqualification. Objective overqualification is often judged by someone else, while perceived overqualification refers to the extent to which employees think they are overqualified (Maltarich et al., 2011). Maynard et al. (2006) argue that perceived overqualification is more suitable than objective overqualification for psychological research since perceived overqualification is considered as a proximal indicator of work attitude and behaviors (Erdogan et al., 2011;Liu and Wang, 2012). Consistent with previous researchers (Maynard et al., 2006;Erdogan and Bauer, 2009), we employ a subjective measurement of overqualification in this study. Perceived overqualification is generally regarded as an obstacle to both employment and retention. Extant studies have shown that perceived overqualification is associated with negative consequences, such as lower subjective well-being (Erdogan et al., 2018b), lower organizational commitment (Harari et al., 2017), higher perceived job insecurity (Peiro et al., 2012), and higher turnover intention (Maynard and Parfyonova, 2013). In recent years, a mounting evidence suggests that overqualified employees may make contributions to their organizations under appropriate conditions in the form of proactive behaviors (Zhang et al., 2016), performance (Deng et al., 2018) and creativity . Indeed, overqualified employees have more energy and time to participate in extra activities because they can accomplish their in-role tasks quickly (Liu and Wang, 2012).
In the person-environment (P-E) fit literature, overqualification is viewed as a type of person-job misfit (Maynard et al., 2006). P-E fit theory also recognizes that multiple types of fit exist simultaneously, among which person-organization fit (P-O fit) is one of the most commonly studied (Kristof-Brown et al., 2005). Researchers have shown that different types of fit may generate interactive effects, such that high fit in one area can compensate for low fit in another (Kristof-Brown et al., 2002), because the cognitive dissonance arising from misfit on one dimension can be diluted when individuals fit on another (Jansen and Kristof-Brown, 2006). Based on P-O fit theory, we propose that overqualified employees will produce creative performance through their sense of organizational identification. This is because, in the context of an employee development-oriented organizational culture, P-O fit can make up for the deficiency caused by P-J misfit. P-O fit indicates the congruence between employees' identity and organizational values (Kristof, 1996) and comprises two types: supplementary fit and complementary fit. Supplementary fit means that a person and an organization possess similar or matching characteristics, while complementary fit means that an employee has qualifications that an organization requires, and the organization offers the rewards that the individual needs. Previous literature has shown that employees who fit well with organizations are more likely to have positive work attitudes and behaviors, such as high job satisfaction, low turnover intention, high organizational identification, and more citizenship behavior (Chatman et al., 1998; Kristof-Brown et al., 2005; Farzaneh et al., 2014). Consistent with P-O fit theory and previous research, we argue that improving the level of person-organization fit for workers who view themselves as overqualified can enhance their organizational identification and, in turn, produce creative performance.
The Moderating Role of Employee Development-Oriented Organizational Culture
Organizations require employees to contribute in terms of their time, effort, commitment, and qualifications. At the same time, the organization provides employees with financial, physical, and psychological resources, as well as job-related growth opportunities (Kristof-Brown et al., 2005). Of the two aspects of P-O fit, complementary fit emphasizes meeting the demands of both sides, while supplementary fit refers to the congruence of characteristics between employees and the organization (Kristof, 1996). Therefore, the organization and employees can achieve a high level of fit in certain situations (e.g., under an employee development-oriented organizational culture). Previous literature regards perceived overqualification as a P-J misfit that leads to negative outcomes (Kristof-Brown et al., 2005; Liu and Wang, 2012). However, extant literature has shown that P-O fit can interact with P-J fit to affect employee cognition and behavior (Erdogan et al., 2018a). For instance, Bills pointed out that overqualified employees will be satisfied with their situation when they fit well with the organization (Bills, 1992). Thus, we follow P-O fit theory and propose that employee development-oriented organizational culture is a key factor influencing the relationship between perceived overqualification and employees' organizational identification.
When employee development-oriented organizational culture is strong, overqualified employees may fit well with the organization even if the P-J misfit still exists. From the supplementary fit dimension, overqualified employees attach importance to their qualifications and to opportunities for extensive training and development, and such opportunities signal that the organization respects employees' qualifications (Bailey et al., 2001). From the complementary fit dimension, comprehensive development opportunities are important resources demanded by employees (Kristof, 1996), and by providing them the organization conveys that it regards employee development as an important catalyst (Wayne et al., 2002). Scholars have pointed out that employees have qualifications that are valued by organizations, while organizations can provide employees with what they want (i.e., development opportunities and work resources) (Cable and Edwards, 2004). Therefore, when employee development-oriented organizational culture is strong, overqualified employees may fit well with the organization.
Employees who fit well with the organization are likely to have more positive work attitudes and behaviors (Verquer et al., 2003). Moreover, previous studies have shown that employees possess a stronger sense of organizational identification when the extent of P-O fit is greater (Edwards and Rothbard, 2000; Verquer et al., 2003; Hoffman and Woehr, 2006; Amos and Weathington, 2008). Hence, employees who feel overqualified may attribute the P-J misfit to understandable reasons, such as the economic environment, rather than to the organization. Thus, we infer that perceived overqualification is positively related to organizational identification when employee development-oriented organizational culture is strong.
In contrast, when employee development-oriented organizational culture is weak, overqualified employees might feel that their qualifications are wasted and a strong misfit with the organization. From the supplementary misfit dimension, overqualified employees pay more attention to their development opportunities and make full use of the qualifications. However, the culture of the organization does not attach importance to the growth of employees. From the complementary misfit dimension, employees provide organizations with job-related skills, education, and other qualifications, while organizations do not provide employees with jobs that meet their expectations. Taken together, when employee development-oriented organizational culture is weak, overqualified employees may misfit with the organization and lead to low organizational identification.
H1: The relationship between perceived overqualification and organizational identification is moderated by employee development-oriented organizational culture, such that this relationship is positive when employee development-oriented organizational culture is strong but negative when it is weak.
Organizational Identification and Creative Performance
Employees who have high levels of organizational identification may be internalized by the organization, and they are likely to do something beneficial to the development of the organization (Dutton et al., 1994;De Cremer and Van Vugt, 1999). Ashforth and Mael (1989) pointed out when the employees consider the organization to be their own, they will increase their cognitive, emotional, and behavioral investment in the organization. Hence, a high level of organizational identification is correlated with some important workplace outcomes (Van Dick et al., 2004;Cole and Bruch, 2006). Thus, we assume people with high levels of organizational identification may actively participate in creative activity. Previous literature has shown that organizational identification may positively contribute to employees' performance (Walumbwa et al., 2008;Weiseke et al., 2008). Moreover, scholars pointed out that when employees have a strong sense of organizational identification, they are likely to improve their work and generate new ideas (Lipponen et al., 2008). Similarly, it is believed that organizational identification can not only promote employees to engage in creative activities but also stimulate the internal motivation to create new things (Bailey et al., 2001;Hirst et al., 2009). Thus, we infer that the relationship between organizational identification and creative performance is positive.
H2: Organizational identification is positively related to creative performance.
A Moderated Mediational Model
As we mentioned before, creative performance is the product of the combination of three factors: employee desire, employee capability, and related conditions (He and Brown, 2013). Overqualified employees have unused resources beyond their normal work, which can be used for creative activities. Besides, overqualified employees spend less time completing their work than their colleagues because of their superior education, skills, and experience, giving them more time to participate in creative activities (Liu and Wang, 2012). Thus, for overqualified employees, the key to improving creative performance is motivating their desire to create. The preceding analysis suggests that perceived overqualification may lead to opposite effects on organizational identification at different levels of employee development-oriented organizational culture, and that organizational identification may be positively related to creative performance.

In sum, this reasoning sketches a framework in which organizational identification may mediate the relationship between perceived overqualification and creative performance, and employee development-oriented organizational culture may moderate this indirect relationship. Specifically, when employee development-oriented organizational culture is strong, overqualified employees may fit well with the organization, and organizational identification is improved in the process. Promoted by organizational identification, overqualified employees are more willing to serve and reward the organization by utilizing surplus resources to engage in creative activities. On the contrary, if employee development-oriented organizational culture is weak, overqualified employees may have low levels of organizational identification due to P-O misfit, which may reduce their creative performance. In general, employee development-oriented organizational culture and organizational identification play an important role in influencing overqualified employees' creative performance.

H3: The indirect relationship between perceived overqualification and creative performance via organizational identification is moderated by employee development-oriented organizational culture; this relationship is positive when employee development-oriented organizational culture is strong but negative when it is weak.
The main study variables and hypothetical links between them are graphically shown in Figure 1.
Sample and Procedure
We collected data from 10 manufacturing companies operating in South China. To better establish the causal relationship, our study adopted a multi-source (employees and their immediate supervisors), multi-wave (two points in time, T1 and T2) survey design. With the assistance of the companies' human resources department managers, we randomly invited 50 team leaders and their 250 subordinates from the 10 companies to participate in our study.
Employees rated their perceived overqualification and demographic variables such as age, gender, and educational level at Time 1, and organizational identification and employee development-oriented organizational culture at Time 2; their supervisors evaluated subordinates' creative performance at Time 2. The T2 surveys were distributed 2 months after T1, which helps reduce potential common method variance (Podsakoff et al., 2003). Some researchers have noted the importance of choosing an appropriate time lag because it "reduces the risk of erroneously concluding" (Meier and Spector, 2013; Eisenberger et al., 2014). Prior longitudinal studies on creative performance reveal a wide disparity in data collection intervals, ranging from 1 week (e.g., Ozturk and Karatepe, 2018) to 6 months (e.g., Li et al., 2020). We chose a time lag of 2 months for several reasons. Firstly, previous studies have shown that creativity-related outcomes can change over a few weeks (Ozturk and Karatepe, 2018); for example, Lin et al. (2017) collected data with a time lag of 1 week between surveys to examine the effects of perceived underemployment on employees' creativity. However, unlike creativity, which can emerge when employees engage in task crafting every day, creative performance requires the accumulation of creative activities over a longer period (Lin et al., 2017). As our model deals with a psychological process and its effects on employee creative performance, perceived overqualification may affect employees' internal psychology gradually as an employee development-oriented organizational culture takes hold, and with the accumulation of creative activities, creative performance will change over a sufficiently long period; a 2-month study span was long enough for changes in creative performance to occur. Although some studies employing longer lags (e.g., 6 months; Tierney and Farmer, 2011) are informative, others have suggested that shorter intervals between assessments should be considered: Li et al. (2020) recommended shorter intervals because studies with longer lags may underestimate the true relationship between the predictor and creative performance. Furthermore, in research on the effect of perceived overqualification on harmonious passion (a psychological process similar to ours), Cheng et al. (2020) suggested that an interval of 2 months is appropriate. Considering these factors, we allowed a 2-month lag in our data collections, expecting that it would permit adequate time for individual changes in organizational identification and creative performance.
We conducted the study in two waves and combined the paper-and-pencil version and on-line version. Using name lists provided by the human resources department manager, we assigned a unique code number to each participant and ensured that each survey listed the correct number to match surveys across time. The code number is composed of the company abbreviation, the employee/leader code (E or L), and work number (e.g., ZHE119034 and ZHL118032). Supervisor questionnaires also included a code number for each employee instead of names. We also provided each supervisor with a list of names matched to code numbers to facilitate their ratings.
Due to the uncertainty of employee access to computers, to ensure the amount of data we collect, we also use paper-based surveys to gather data in time 1. We invited 250 subordinates who were divided into several groups to fill the questionnaire in the meeting rooms and then issued questionnaires through the oneto-one correspondence way between questionnaire number and employees' work number. We also explained the purpose of the study, emphasizing that data was only used for scientific research. More importantly, we encourage participants to provide a valid e-mail address and actively participate in the second research online. At the end of the survey, we gave each participant about $5 in cash and promised that we will give $10 for the second survey in 2 months. Among the 250 questionnaires collected on-site, a total of 236 employees provided complete data (including email addresses), with a response rate of 95.40%.
To improve our research efficiency, we use an online survey website in T2. With the help of Wenjuanxing website 1 , the Chinese version of Qualtrics, each questionnaire had a unique questionnaire ID automatically generated within Wenjuanxing. Based on the collected information of the paper version of the questionnaire, we made one-to-one correspondence between the questionnaire number, work number, email address, and the Wenjuanxing ID and recorded them. We then sent the second questionnaire link to the 236 subordinates who provided a valid email address and their 50 immediate supervisors whose email was provided by the HR manager. We also asked the participants for their Alipay accounts in the email and gave them a $10 reward when the collection ended. Among those who received the link, 211 subordinates and their 41 immediate supervisors answered the online questionnaire with a response rate of 89.40% and 82%. Further, no significant difference was identified in a t-test between demographics for all employees were collected from both T1 and T2, and checks for non-response bias were conducted. There were no differences between invitees who did and did not respond in terms of gender (t = 0.81, p > 0.05), and perceived overqualification (t = 0.66, p > 0.05). Hence, we could guarantee that all survey listed were matched between subordinates and supervisors across time.
We excluded questionnaires with no responses on several variables of interest, T1 questionnaires with incorrect or missing matching information, and T2 questionnaires from supervisors with fewer than three respondents. Finally, 170 subordinate questionnaires and 41 supervisor questionnaires were retained, representing response rates of 68 and 82%, respectively. On average, each supervisor rated four employees. In the subordinate sample, 58.8% were male and 90.6% had a bachelor's degree or above. The average age of employees was 37 years, and their average monthly income was approximately 4,006 Yuan. We also ran our model at different cutoffs: the results did not change in direction or significance when we included data from employees with incorrect or missing matching information or from teams with fewer than three respondents (N = 211), although some estimates dropped to marginal significance, likely because of reduced statistical power. In sum, the results indicate that respondent attrition was largely random.
Measures
Surveys were administered in Chinese. To confirm the accuracy of the translation and correct any discrepancies, we employed Brislin's translation/back-translation procedures (Brislin, 1980). Unless otherwise indicated, we used a 5-point Likert-type scale, ranging from 1 (strongly disagree) to 5 (strongly agree).
Perceived Overqualification
We measured perceived overqualification using the nine-item scale developed by Maynard et al. (2006), which assesses, from the perspectives of education, experience, and skills, whether one's qualifications exceed current job requirements. Perceived overqualification was self-rated by employees; the higher the score, the higher the degree of overqualification. The scale has been widely used in previous studies and has shown high reliability and validity (Luksyte et al., 2020; Zhang et al., 2021). Sample items included "My education level is above the education level required by my job" and "Someone with less work experience than myself could do my job as well." Cronbach's alpha for the scale was 0.731.
Creative Performance
Creative performance was measured using the five-item scale developed by George and Zhou (2002), which has been shown to have high reliability and validity. We asked supervisors to rate each of their subordinates on each item. The scale measures three dimensions of individual creative performance: the generation, promotion, and realization of innovative thinking at work. Sample items included "This employee comes up with new and practical ideas to improve performance at work" and "This employee searches out new technologies, processes, techniques, and/or product ideas at work." Cronbach's alpha for these items was 0.843.
Organizational Identification
Van Der Vegt and Bunderson (2005) proposed that four emotion-related items can be used to measure organizational identification in performance-related research. Consistent with prior research, we measured organizational identification using the four-item scale created by Van Der Vegt and Bunderson (2005). Sample items included "I feel as if the team's problems are my own" and "I feel emotionally attached to my team." Cronbach's alpha for this scale was 0.925.
Employee Development-Oriented Organizational Culture
Tsui et al. (2006) proposed organizational culture scales that cover employee development-oriented organizational culture, and we used their five-item scale to measure this construct (Tsui et al., 2006). Sample items included "the company/department shows concern for my development" and "the company/department develops my potential." Cronbach's alpha was 0.873 for this scale. Based on our theoretical hypotheses and previous studies (Erdogan et al., 2006), we aggregated the construct to the team level and assessed whether group consensus was evidenced through interrater agreement (rwg) and intraclass correlations (ICCs). The median rwg was 0.806, above the suggested threshold of 0.70, indicating that employees within a team agreed on the degree of development-oriented organizational culture. The ICC(1) value was 0.14, indicating meaningful between-group variance in employee development-oriented organizational culture. Overall, the results support team-level aggregation of this scale.
Control Variables
Consistent with previous research (Harari et al., 2017), we treated individual characteristics that may affect organizational identification as control variables, including education, income, age, number of promotions, and years working with the direct supervisor.
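For readers less familiar with the aggregation diagnostics reported above, the sketch below illustrates how rwg(j) for a five-item scale and ICC(1) from a one-way ANOVA are commonly computed; the data frame and column names are hypothetical assumptions, and this is not the authors' code.

```python
# Illustrative computation of rwg(j) and ICC(1) for team-level aggregation.
# df has one row per employee with columns team_id and c1..c5 (the five culture
# items, rated 1-5). All names are hypothetical.
import pandas as pd

def rwg_j(items: pd.DataFrame, n_options: int = 5) -> float:
    """Multi-item within-group agreement index rwg(j)."""
    j = items.shape[1]
    s2 = items.var(axis=0, ddof=1).mean()          # mean observed item variance
    sigma_eu = (n_options ** 2 - 1) / 12.0         # expected variance under a uniform null
    ratio = s2 / sigma_eu
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

def icc1(scores: pd.Series, groups: pd.Series) -> float:
    """ICC(1) from a one-way random-effects ANOVA."""
    k = groups.value_counts().mean()               # average group size
    grand = scores.mean()
    g_means = scores.groupby(groups).mean()
    n_g = scores.groupby(groups).size()
    ss_b = (n_g * (g_means - grand) ** 2).sum()
    ss_w = ((scores - scores.groupby(groups).transform("mean")) ** 2).sum()
    ms_b = ss_b / (len(n_g) - 1)
    ms_w = ss_w / (len(scores) - len(n_g))
    return (ms_b - ms_w) / (ms_b + (k - 1) * ms_w)

df = pd.read_csv("culture_items.csv")
items = ["c1", "c2", "c3", "c4", "c5"]
rwgs = df.groupby("team_id").apply(lambda g: rwg_j(g[items]))
print("median rwg(j):", rwgs.median())
print("ICC(1):", icc1(df[items].mean(axis=1), df["team_id"]))
```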
Descriptive Statistics and Correlations
The descriptive statistics and correlations of the study variables are presented in Table 1. The correlations are as expected: perceived overqualification was negatively correlated with organizational identification (r = −0.169, p < 0.05), and organizational identification was positively correlated with creative performance (r = 0.347, p < 0.01).
Confirmatory Factor Analysis (CFA)
According to Bandalos's (2002) recommendation, the ratio of sample size to estimated parameters should be at least about 10:1, and the direct use of all original items would have led to estimation bias. Following Rogers and Schmitt's (2004) recommendation, this study therefore used item parceling for the four variables, adopting a high-to-high parceling strategy. Based on the parceled indicators (as shown in Table 2), the four-factor model fit best (χ2 = 115.148, df = 48, RMSEA = 0.078, TLI = 0.868, and CFI = 0.901), and the alternative competing models were all inferior to the hypothesized model proposed in this study.
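As an illustration of the parceling step only (not the authors' code; the item-to-parcel assignments below are arbitrary placeholders rather than the loadings-based high-to-high assignment actually used), items can be averaged into parcels that then serve as CFA indicators:

```python
# Illustrative item parceling: average subsets of items into parcels before the CFA.
# Which items enter which parcel would follow the high-to-high strategy (pairing
# items by loading size); the assignments below are placeholders.
import pandas as pd

df = pd.read_csv("items.csv")
parcels = pd.DataFrame({
    "poq_p1": df[["poq1", "poq4", "poq7"]].mean(axis=1),
    "poq_p2": df[["poq2", "poq5", "poq8"]].mean(axis=1),
    "poq_p3": df[["poq3", "poq6", "poq9"]].mean(axis=1),
})
# The competing factor models would then be estimated on these parcels.
```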
Null Model
Since we obtained data from multiple levels, we used Mplus 7.4 (Muthén and Muthén, 2015) to test the proposed hypotheses. We first ran a null model with no predictors, including only organizational identification as the dependent variable. The ICC(1) value for organizational identification as the dependent variable was 0.136, indicating that 13.6% of the variance in organizational identification resides between groups. This outcome justified the inclusion of predictors at the group level.
Hypothesis Test
Our theoretical model is multilevel, consisting of variables at both the team level (i.e., employee development-oriented organizational culture) and the individual level (i.e., perceived overqualification, organizational identification, and creative performance). In testing the hypotheses, we conducted hierarchical linear modeling analyses because they account for nesting in the data while testing cross-level moderating effects. We tested cross-level moderating effects using slopes-as-outcomes models, controlling for employee education, age, gender, salary, and promotion chances at the individual level (i.e., Level 1). The cross-level analysis results are summarized at the bottom of Table 3. Table 3 shows that, after accounting for the control variables and main effects, the interaction of perceived overqualification and employee development-oriented organizational culture on organizational identification was significant (γ = 0.334, p < 0.01). Figure 2 further shows this interaction by plotting the simple slopes at 1 SD above and below the mean of employee development-oriented organizational culture: perceived overqualification was positively related to organizational identification when employees perceived a strong employee development-oriented organizational culture (one standard deviation above the mean) and negatively related when they perceived a weak one (one standard deviation below the mean). Hence, Hypothesis 1 was supported.
FIGURE 2 | Employee development-oriented organizational culture moderates the relationship between perceived overqualification and organizational identification.
Table notes: *p < 0.05, **p < 0.01. CP, creative performance; EDOOC, employee development-oriented organizational culture; OI, organizational identification; POQ, perceived overqualification. Income: measured in thousands of Yuan; Gender: 1 = male, 2 = female; Education: 1 = middle school education or less, 2 = high school education, 3 = bachelor's degree, 4 = postgraduate degree, 5 = doctoral degree. The three-factor model combined organizational identification and employee development-oriented organizational culture; the two-factor model combined perceived overqualification, organizational identification, and employee development-oriented organizational culture; the single-factor model combined all items.
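To make the simple-slopes computation behind Figure 2 concrete, a minimal sketch is given below; the main-effect coefficient is a placeholder (only the interaction, 0.334, is taken from the text), and the moderator is assumed to be standardized.

```python
# Simple slopes of perceived overqualification (POQ) on organizational
# identification (OI) at +/- 1 SD of employee development-oriented
# organizational culture (EDOOC). b_poq is a placeholder value; b_int = 0.334
# is the reported cross-level interaction.
b_poq, b_int = -0.10, 0.334          # placeholder main effect, reported interaction
edooc_mean, edooc_sd = 0.0, 1.0      # moderator assumed centered and standardized

for label, m in [("+1 SD", edooc_mean + edooc_sd), ("-1 SD", edooc_mean - edooc_sd)]:
    slope = b_poq + b_int * m        # dOI/dPOQ at the given level of EDOOC
    print(f"EDOOC {label}: simple slope = {slope:.3f}")
```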
Hypothesis 2 predicts that organizational identification is positively related to creative performance. As shown in Table 4, organizational identification had a positive relationship with creative performance (γ = 0.364, p < 0.01); therefore, Hypothesis 2 was supported. Hypothesis 3 predicts that employee development-oriented organizational culture moderates the indirect effect of perceived overqualification on creative performance via organizational identification. Monte Carlo resampling of the parameters was used to estimate 95% confidence intervals, and the results in Table 5 support the indirect relationship in Hypothesis 3. Specifically, the indirect effect of perceived overqualification on creative performance was positive but not significant when employee development-oriented organizational culture was strong [γ = 0.005, 95% confidence interval (−0.023, 0.041), including 0], and negative and significant when employee development-oriented organizational culture was weak [γ = −0.058, 95% confidence interval (−0.130, −0.010), excluding 0]. The difference between the two conditions was significant, with a 95% confidence interval of (0.009, 0.151), excluding 0. Therefore, Hypothesis 3 was supported.
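The Monte Carlo procedure referred to above can be sketched as follows: coefficients are repeatedly drawn from normal distributions defined by their estimates and standard errors, and the conditional indirect effect is recomputed on each draw. The standard errors and the first-stage main effect below are placeholders, since only the point estimates and resulting intervals are reported in the text; this is an illustration of the technique, not a reproduction of the study's analysis.

```python
# Monte Carlo confidence intervals for the conditional indirect effect
# POQ -> organizational identification -> creative performance at +/- 1 SD of EDOOC.
import numpy as np

rng = np.random.default_rng(2021)
n_draws = 20_000

a1, se_a1 = -0.10, 0.07    # POQ -> OI main effect (placeholder)
a3, se_a3 = 0.334, 0.12    # POQ x EDOOC interaction (reported), placeholder SE
b,  se_b  = 0.364, 0.10    # OI -> creative performance (reported), placeholder SE
sd_w = 1.0                 # moderator assumed standardized

a1_s = rng.normal(a1, se_a1, n_draws)
a3_s = rng.normal(a3, se_a3, n_draws)
b_s  = rng.normal(b,  se_b,  n_draws)

ind_hi = (a1_s + a3_s * sd_w) * b_s   # indirect effect at +1 SD of EDOOC
ind_lo = (a1_s - a3_s * sd_w) * b_s   # indirect effect at -1 SD of EDOOC

for label, ind in [("+1 SD", ind_hi), ("-1 SD", ind_lo), ("difference", ind_hi - ind_lo)]:
    lo, hi = np.percentile(ind, [2.5, 97.5])
    print(f"EDOOC {label}: indirect effect 95% CI = ({lo:.3f}, {hi:.3f})")
```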
DISCUSSION
Drawing from P-O fit theory, we predicted that perceived overqualification affects creative performance through organizational identification and that this effect is contingent on employee development-oriented organizational culture, with both negative and positive effects possible. When employee development-oriented organizational culture is weak, the organizational identification of overqualified employees is low, leading to low creative performance. However, when employee development-oriented organizational culture is strong, the organizational identification of employees who feel overqualified improves, which in turn drives improvements in creative performance. We now discuss the theoretical and practical implications of the results.
Theoretical Implications
Firstly, our study explains how P-O fit and P-J fit jointly shape organizational identification by testing the interaction between perceived overqualification and employee development-oriented organizational culture on organizational identification. The results demonstrate that although perceived overqualification is a type of P-J misfit, its negative outcomes can also be reduced by P-O fit. By considering this issue, we emphasize the important impact of P-O fit on employees' work attitudes and behavior. Based on P-O fit theory, we explain why overqualified employees may show divergent performance and highlight the important role of employee development-oriented organizational culture in shaping organizational identification. Our result is consistent with recent arguments by Erdogan et al. (2018a). Moreover, our study extends this prior work by demonstrating that organizational identification can be jointly determined by P-O fit and P-J fit.
Secondly, our study contributes to the perceived overqualification literature by identifying employee development-oriented organizational culture as an important boundary condition that influences the organizational identification of overqualified employees. This indicates that situational factors can alleviate the negative outcomes of overqualification. Previous findings on the relationship between perceived overqualification and performance have been contradictory; therefore, identifying the boundary conditions of the relationship between perceived overqualification and its outcomes helps determine whether its impact is positive or negative. Although a previous study pointed out that organizations can reduce the mismatch between overqualified employees and their jobs through job design (Wu et al., 2016), this study, from the perspective of organizational culture, found that building an employee development-oriented organizational culture not only helps to improve creative performance but also reduces the cost of job redesign. Overqualification is a double-edged sword, and this study provides a new research direction for the outcomes and contextual variables of perceived overqualification.
Thirdly, this study found that organizational identification is a key mediating mechanism between perceived overqualification and creative performance, which helps explain the contradictory findings on perceived overqualification. Overqualified employees produce positive or negative behaviors through their cognition, emotion, and attitude toward the organization. When overqualified employees have high levels of organizational identification, they show more creative performance; otherwise, their creative performance is low.
Finally, this study extends research on the influence of perceived overqualification on creative performance to the Chinese context. The overqualification phenomenon is pronounced in China, and at the same time the government encourages innovation, yet research on the relationship between overqualification and creative performance is relatively scarce. This study expands overqualification research in China and provides a theoretical basis for organizations to reduce the negative impact of overqualification and respond to calls for promoting creative behaviors.
Practical Implications
The present findings provide important implications for practice. Firstly, organizations should realize that although hiring overqualified employees may be associated with some negative effects, they can gain positive results by taking measures to enhance the organizational identification of overqualified employees. We found that an employee development-oriented organizational culture can stimulate overqualified employees' sense of organizational identification and thereby improve their creative performance. Specifically, organizations should create an organizational culture that focuses on the growth and development of employees, for example by providing advanced training opportunities, assigning more challenging and meaningful work to overqualified employees, and offering them greater promotion space. In addition, organizations should care about employees' lives to strengthen their affective attachment, and should undertake social responsibility and participate in public welfare to establish a good external reputation, all of which will increase employees' identification with the organization.
Secondly, to improve the creative performance of employees, organizations need to create an atmosphere that values innovation and provide conditions conducive to it. Organizations can promote a creative climate and make employees perceive that the organization values creative performance by adopting strong incentive policies and increasing investment in innovation-related software and hardware. In addition, organizations should be tolerant of creative failure, so that overqualified employees have the opportunity to experiment and make mistakes without worrying about punishment for failure.
Thirdly, we found that P-O fit can reduce the negative outcomes caused by P-J misfit. If an employee is too highly skilled for the job but shares the values of the organization, some of the negative effects of perceived overqualification may be neutralized. The results suggest that P-O fit is an important moderating factor in the perceived overqualification-outcomes relationship, and organizations can benefit from considering it when estimating the potential impact of perceived overqualification. For example, when recruiting overqualified employees, organizations should focus on the degree of match between the candidates' values and the organization's. If the values fit, organizations can make full use of the surplus qualifications after hiring; if they do not, organizations should consider not employing such candidates. For overqualified employees already on board, organizations can strengthen the learning and communication of organizational values, striving to achieve P-O fit and using it to offset the possible negative impact of P-J misfit.
Limitation and Future Study
This study has a few limitations. Firstly, we explored only one boundary condition - employee development-oriented organizational culture - moderating the perceived overqualification-creative performance relationship. Other moderating variables may mitigate the negative impact of perceived overqualification, and future research can explore additional organizational factors that reduce it. Secondly, although this study shows that P-J fit interacts with P-O fit to reduce the negative impact of perceived overqualification, which of them plays the more significant role in this relationship remains unclear, and whether situational variables affect the interplay between P-J fit and P-O fit also remains to be studied. Thirdly, samples from Chinese enterprises limit the generalizability of the findings to different contexts. Cultural values can influence how individuals perceive and react to overqualification (Erdogan et al., 2011; Harari et al., 2017). As this study examined perceived overqualification in one specific cultural context, replicating the present investigation in other cultural contexts is worthy of consideration.
CONCLUSION
Drawing on P-O fit theory, we explored the relationship between perceived overqualification and creative performance. The data were collected from 170 supervisor-subordinate dyads in 41 groups from 10 manufacturing companies in China across two time points. Results revealed that perceived overqualification is positively related to organizational identification when employee development-oriented organizational culture is strong but negatively related to organizational identification when it is weak. Further, the indirect relationship between perceived overqualification and creative performance via organizational identification is moderated by employee development-oriented organizational culture. It is widely accepted that, as a type of P-J misfit, perceived overqualification may lead to negative consequences. However, recent P-O fit theory holds that the different types of fit are hierarchical and can affect each other (Lam et al., 2018), and existing research has begun to explore which fit type is most central among the various kinds of fit; in our study, the negative outcomes of P-J misfit could be reversed by P-O fit. This finding carries implications for management practice, and future research can further explore the interactions among various types of fit, such as person-job (P-J), person-group (P-G), person-supervisor (P-S), and person-environment (P-E) fit.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of Shanghai University, China. Written informed consent for participation was not required for this study in accordance with the National Legislation and the Institutional Requirements.
AUTHOR CONTRIBUTIONS
MZ, FW, and NL designed the study and revised the draft. MZ and FW collected the data. NL drafted the theory. MZ drafted the methods and results. All authors contributed to the article and approved the submitted version.
Effects of rehabilitation programs on heart rate variability after stroke: a systematic review
ABSTRACT Background: It has been shown that the autonomic nervous system can be modulated by physical exercise after stroke, but there is a lack of evidence showing rehabilitation can be effective in increasing heart rate variability (HRV). Objective: To investigate the effectiveness and safety of rehabilitation programs in modulating HRV after stroke. Methods: The search strategy was based on the PICOT (patients: stroke; interventions: rehabilitation; comparisons: any control group; outcomes: HRV; time: acute, subacute and chronic phases of stroke). We searched MEDLINE, CENTRAL, CINAHL, LILACS, and SCIELO databases without language restrictions, and included randomized controlled trials (RCTs), quasi-randomized controlled trials (quasi-RCTs), and non-randomized controlled trials (non-RCTs). Two authors independently assessed the risk of bias and we used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to rate the certainty of the evidence for each included study. Results: Four studies (two RCTs with low certainty of the evidence and two non-RCTs with very low certainty of the evidence) were included. Three of them showed significant cardiac autonomic modulation during and after stroke rehabilitation: LF/HF ratio (low frequency/high frequency) is higher during early mobilization; better cardiac autonomic balance was observed after body-mind interaction in stroke patients; and resting SDNN (standard deviation of normal R-R intervals) was significantly lower among stroke patients indicating less adaptive cardiac autonomic control during different activities. Conclusions: There are no definitive conclusions about the main cardiac autonomic repercussions observed in post-stroke patients undergoing rehabilitation, although all interventions are safe for patients after stroke.
INTRODUCTION
Stroke is one of the main causes of morbidity and mortality in industrialized countries and the leading cause of chronic disability in adults [1][2][3]. After stroke, more than 70% of individuals present alterations in motor, sensory, or cognitive systems, which can be mild and transient or severe and disabling, and these alterations can be related to autonomic nervous system impairments, which can lead to changes in heart rate variability (HRV) [4][5][6].
HRV is the result of adaptive changes in heart rate caused by sympathetic and parasympathetic activity in response to external or internal stimuli 7 . Based on this concept, HRV is defined as the changes in heart rate (HR) that occur after a stimulus, and it is a predictor of processes related to the autonomic nervous system. Studies have shown that a low HRV response is related to a high risk of stroke 8,9 , severe stroke severity 10 , mortality after stroke 4,5,11 , low vagal modulation 12 , and a poor prognosis after stroke 13 .
There is evidence that physical inactivity reduces cardiac autonomic modulation after stroke 14 . Therefore, cardiac autonomic modulation may be increased through physical exercise and rehabilitation programs after stroke 15 . Lower HRV is a predictor of morbidity and mortality, and cardiac changes increase the risk of death after stroke 16 and may be related to unfavorable outcomes 17 . Additional studies need to be conducted to elucidate the cardiac autonomic modulating mechanisms and clinical repercussions of HRV after stroke rehabilitation.
Thus, it is possible that specific and effective rehabilitation programs, allowing greater cardiovascular stability, functional gains, and quality of life in individuals after stroke, can be developed. Due to the lack of evidence that rehabilitation can be effective in modulating the autonomic nervous system after stroke, there is no consensus on this effect; there are no systematic reviews in the literature on this topic. Therefore, the aim of this review was to evaluate the effectiveness and safety of rehabilitation programs in modulating HRV after stroke.
METHODS
We adhered to the methods described in the Cochrane Handbook for Intervention Reviews 18 and to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines 19 . This review was registered in the International Prospective Register of Systematic Reviews (PROSPERO -CRD42020156527).
Eligibility criteria
The eligibility criteria were as follows:
1. Study designs: RCTs, quasi-RCTs, and non-RCTs.
2. Participants: adults over 18 years of age of either sex, with any duration of illness, severity of initial impairment, or type of stroke diagnosis (ischemic or intracranial hemorrhage) made by clinical examination or radiographically by computed tomography (CT) or magnetic resonance imaging (MRI).
3. Interventions: any rehabilitation protocol for stroke recovery (early mobilization, physical exercises).
4. Comparators: any conventional stroke rehabilitation program.
5. Outcomes: heart rate variability.
Data sources and search strategy
The search strategy was based on the PICOT framework (patients: stroke; intervention: rehabilitation; comparison: any control group; outcome: heart rate variability; time: acute, subacute, and chronic phases of stroke). We searched MEDLINE (OvidSP), the Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL, the Latin-American and Caribbean Center on Health Sciences Information (LILACS), and SCIELO databases without language restrictions. The date of the most recent search was July 10, 2020. All searches were conducted with the assistance of a trained medical librarian. We also searched the reference lists of relevant articles and conference proceedings, and contacted the authors of the included trials.
The search terms included "Heart rate variability or (MeSH terms)" and stroke or (MeSH terms) and rehabilitation or (MeSH terms).
Other resources searched
In an effort to identify additional published, unpublished, and ongoing trials, we performed the following steps:
• screened the reference lists of the identified studies;
• contacted the study authors and experts; and
• used the Science Citation Index Cited Reference Search to track important articles.
Selection of the studies
Two pairs of reviewers independently screened all titles and abstracts identified in the literature search, obtained full-text articles of all the potentially eligible studies, and evaluated the articles for eligibility. The reviewers resolved disagreements by discussion or, if necessary, with third party adjudication. We also considered studies reported only as conference abstracts.
We used the START program (State of the Art through Systematic Review), developed by the Software Engineering Research Laboratory of the Federal University of São Carlos for data organization.
Data extraction
The reviewers underwent calibration exercises and worked in pairs to independently extract data from the included studies according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions 20 . Disagreements were resolved by discussion or, if necessary, with third party adjudication. Reviewers collected the following data using a pretested data extraction form: study design, participants, interventions, comparators, assessed outcomes, and relevant statistical data.
Risk of bias assessment
Two authors of this review independently assessed the risk of bias for each study using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions 20 . Disagreements were resolved by discussion or by consultation with another review author. We assessed the risk of bias according to the following domains. We graded the risk of bias for each domain as high, low, or unclear and provided information from the study report, together with justification for our judgment, in the "Risk of bias" tables. For incomplete outcome data in individual studies, we stipulated a low risk of bias for a loss to follow-up of less than 10% and a difference of less than 5% in missing data between the intervention/exposure and control groups.
Certainty of evidence
We summarized the evidence and assessed its certainty separately for bodies of evidence from RCT and non-RCT studies. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to rate the certainty of the evidence for each outcome as high, moderate, low, or very low. In the GRADE approach, RCTs begin with high certainty, and non-RCT studies begin with moderate certainty. Detailed GRADE guidelines were used to assess the overall risk of bias, imprecision, inconsistency, indirectness, and publication bias and to summarize the results in an evidence profile (Table 1) 21 .
We planned to assess publication bias through the visual inspection of funnel plots for each outcome for which we identified 10 or more eligible studies; however, we were not able to do so because there were an insufficient number of studies to conduct this assessment.
Data synthesis and statistical analysis
It was not possible to perform a meta-analysis due to the non-homogeneity of the interventions. The effects of the interventions, risk of bias, and quality of evidence for each study are reported.
RESULTS
We identified a total of 88 studies through database searches (see Figure 1 for the search results). After screening the titles and then the abstracts, we obtained full-text articles for the 22 studies that were potentially eligible for inclusion in the review. We excluded 18 studies because they were considered one of the following types of articles: case report, case series, self-controlled study, review, or a study that was not relevant. The remaining two RCTs 22,23 and two non-RCTs 15,24 were included in this review.
Characteristics of the participants and groups
All participants in the included studies were diagnosed with ischemic stroke. The total sample size was 172 individuals, and the average age was 65 years; they were divided into groups, with the size of each group ranging from seven to 36 individuals. In one study 15 , there was no description of the difference between the intervention and control groups since all of the participants received interventions; the participants were instead divided according to stroke severity, as assessed by the National Institutes of Health Stroke Scale (NIHSS). The other three studies 22-24 divided the individuals into intervention and control groups. Beer et al. (2018) described the control group as healthy individuals. Two studies 22,24 included only individuals with one stroke, two evaluated patients within 1 to 10 days of an ischemic stroke 15,23 , one study evaluated individuals in the post-acute phase of stroke 24 , and another study evaluated individuals at 15 days after a stroke 22 . The characteristics of the included studies are shown in Table 1.
All studies evaluated individuals based on the analysis of linear heart rate variables, as shown in Table 2.
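For reference, the linear HRV indices reported across these studies (time-domain SDNN and RMSSD, and the frequency-domain LF, HF, and LF/HF ratio) can be computed from a series of normal R-R intervals roughly as sketched below. This is a generic illustration with an assumed input file, not the processing pipeline of any of the included studies.

```python
# Generic computation of linear HRV indices from R-R intervals (in ms).
# SDNN and RMSSD are time-domain indices; LF, HF, and LF/HF are obtained from
# the power spectrum of the evenly resampled R-R series (Welch's method).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rr = np.loadtxt("rr_intervals_ms.txt")             # hypothetical input file

sdnn = np.std(rr, ddof=1)                          # SDNN (ms)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))         # RMSSD (ms)

# Resample the R-R series at 4 Hz before spectral analysis.
t = np.cumsum(rr) / 1000.0                         # beat times (s)
fs = 4.0
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr, kind="cubic")(t_even)

f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, LF/HF = {lf / hf:.2f}")
```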
Evaluations and interventions
The interventions reported by the studies were early mobilization 15 , low-intensity activity associated with meditation 23 , cycle ergometer and cognitive activities 24 , and protocol mobilization with a cycle ergometer, which were determined by exercise resistance tests individually (cycle ergometer, walking test, and going up and down stairs) 22 . All individuals in the control group performed activities such as conventional physical therapy. In the study by Nozoe et al. (2018) 15 , the variables LH, InHF, and LF/HF ratio were evaluated by a cardiac monitor, and in the analysis, the complement Lab Chart Pro HRV (ADInstruments Pty Ltd, Castle Hill, Australia) was used. In the intervention protocol, the participants performed an early mobilization in the sitting position; the evaluation comprised 5 minutes in the supine position (rest), followed by five minutes in the sitting position. The patients were reevaluated three months after the stroke.
In the study by Chen et al. (2019) 23 , the variables SDNN, LF, HF, and LF/HF ratio were evaluated during the execution of Chan-Chuang qigong, known as traditional Chinese medicine therapy, which promotes body-mind interaction and relaxation. The individuals performed the technique for 15 minutes each day for 10 days; the assessment took five minutes and was performed using a portable HRV analyzer (8Z11, Enjoy Research Inc., Taiwan), the Chinese version of the Short Form-12 (SF-12) to assess quality of life, and the Hospital Anxiety and Depression Scale (HADS) to assess negative emotions.
In the study by Beer et al. (2018) 24 , individuals underwent a protocol in which they were first evaluated at rest for 10 minutes, and then they were evaluated during a handgrip activity that lasted two minutes accompanied by controlled breathing (two minutes -six cycles in one minute). Afterwards, they performed cognitive activity (serial 3's subtractions) and finally mobilization with a cycle ergometer in combination with a cognitive exercise. Cognitive capacity was assessed using the Montreal Cognitive Assessment Scale (MoCA), and the Barthel index was used to assess functional capacity. The variables SDNN and RMSSD were measured by the Polar Advanced Heart Rate Monitor (RS800CX).
All included studies performed evaluations of linear heart rate variables; however, the studies were not homogeneous in terms of groups, interventions, or evaluations. Only one of the four included studies did not show significant results for the variables evaluated. All studies excluded individuals who had heart disease.
Evaluation of the effectiveness and safety of the included studies
The evaluation of the effectiveness and safety of the included studies are displayed in the Table 3.
In the study by Chen et al. (2019) 23 , the LF/HF ratio related to the physical component of the quality of life (QOL) scale (SF-12) was higher in the intervention group than in the control group after the intervention (P = 0.02). The authors did not report effect sizes or confidence intervals, and no adverse effects were observed after the intervention.
The study by Beer et al. (2018) 24 showed less adaptive cardiac autonomic control during different activities. The values described for the groups were as follows: post stroke RR = 728.7 ± 110.1 ms; healthy individuals RR = 847.6 ± 120.6 ms, with P = 0.002; post-stroke SDNN = 32.5 ± 26.9 ms, healthy individuals SDNN = 48.7 ± 17.9 ms, with P = 0.01. The authors did not report any adverse effects after intervention.
In the study by Katz-Leurer and Shochina (2007) 22 , no significant interaction effects on HRV were observed between exercises during physical therapy. The values indicated for the variables were as follows: treatment group LF = 1248 ± 1684 Hz, control group LF = 1238 ± 1728 Hz, with P = 0.93; treatment group HF = 378 ± 638 Hz, control group HF = 667 ± 150 Hz, with a P = 0.33. The authors did not report any adverse effects after intervention.
Risk of bias interpretation
All included articles were analyzed for risk of bias, as shown in Table 4. Figure 2 shows a graphical analysis of the risk of bias.
DISCUSSION
This systematic literature review study comprised four articles from clinical trials that aimed to assess HRV using different methodologies, describing sympathovagal activity after specific rehabilitation protocols in patients after ischemic stroke.
Of the four studies included, two 22,24 used the cycle ergometer for the main rehabilitation program. Only the study by Beer et al. (2018) 24 showed a significant reduction in the RR and SDNN variables among post-stroke individuals compared to healthy individuals at rest, which indicates a state of sympathetic hyperactivity in the subacute phase after the stroke. In this study, patients did not show a normal increase in sympathetic activity in response to the test conditions, mainly due to a hypersympathetic state at rest. During the subacute phase, according to this study and other studies 25,26 , there is apparently a significant physiological change in the ability of the autonomic nervous system to respond adequately to the demands imposed by rehabilitation practices, so only large demands yield expected sympathetic responses 27 . The results indicate a need for rehabilitation focused on improving autonomic cardiac control.
In the study by Nozoe et al. (2018) 15 , patients were classified as having or not having neurological deterioration (ND) using the NIHSS score (a severity scale used in the acute phase of stroke). These individuals were evaluated during hospitalization and underwent an intervention involving early mobilization with posture changes. The LF/HF ratio showed a significant increase in the ND group (a higher NIHSS score) from before to after the intervention. Since the LF/HF ratio seems to reflect sympathetic performance, according to the authors, it is likely that an increase in sympathetic activity during mobilization is associated with neurological deterioration in acute stroke patients. Xiong et al. (2018) 28 reported that autonomic dysfunction is one of the predictors of worse functional outcomes in patients in the acute phase of stroke, which may corroborate the possible occurrence of increased sympathetic performance in patients with a worse NIHSS classification.
Chen et al. (2019) 23 introduced a mind-body interactive exercise (Chan-Chuang qigong practice) as an intervention for hospitalized patients after stroke to increase cardiac parasympathetic tone mainly because the technique has relaxing effects. They concluded that the LF/HF ratio regarding the physical component of the quality of life (QOL) scale (SF-12) was higher in the intervention group after mobilization than in the control group. Therefore, during the hospital stay, the sympathovagal balance influenced the physical aspect of the QOL of individuals with subacute stroke. Thus, improved HRV in stroke patients after a specific rehabilitation protocol can lead to the recovery of physical functions and improve their quality of life.
In the study by Katz-Leurer and Shochina (2007) 22 , an individualized training protocol was used, and they did not find significant differences in HRV. Despite this result, a significant improvement was found in the functional parameters of post-stroke individuals, such as climbing stairs, and physical training allowed patients to significantly increase their workload. As described by other authors, autonomic impairment after stroke leads to low aerobic capacity 27 . Thus, the importance of early mobilization, rehabilitation, and physical-functional training in post-stroke patients is reiterated.
The authors reported sympathetic-vagal alterations in poststroke patients when subjected to physical activities. Thus, from this systematic review, it can be stated that significant autonomic modulation occurs in these individuals. Despite the methodological divergence found in the articles, only one article reported no changes in HRV between the groups evaluated 22 , which established an assessment in the frequency domain. In the study by Beer et al. (2018) 24 , variables in the time domain were included, whereas assessments in both domains (time and frequency) were included in other studies, which demonstrated significant changes in the HRV linear variables after stroke rehabilitation.
Studies on HRV demonstrate the need for flexibility in autonomic activity for individuals to maintain a good quality of life, as impaired adaptation can cause autonomic dysfunctions, cardiovascular deterioration, and increased morbidity and mortality rates in patients after stroke 28 . The four articles selected for the review show the need for specific therapies, early mobilizations, and physical activity protocols in the modulation of HRV. This conclusion points to the importance of maintaining muscle function, strength, and activity for cardiovascular benefits, which has been widely studied for methods including cardiac rehabilitation [28][29][30] .
This study has limitations, such as heterogeneity in the selected individuals and the analyzed outcomes; because only a few studies were selected, it was impossible to perform a meta-analysis. However, this is the first systematic review addressing this topic, with the possibility of elucidating the main autonomic repercussions observed in post-stroke patients undergoing rehabilitation procedures.
In conclusion, the quality of the evidence from the selected clinical trials was either low or very low; therefore, there are no definitive conclusions about the main autonomic repercussions observed in post-stroke patients undergoing rehabilitation, although all interventions are safe for these patients. The applicability of these results can be compromised since most of the results described in this review were obtained from clinical trials with methodological differences. This review highlights the need to conduct well-designed tests in this field. Future trials should be properly designed and should include standardized measures. It is suggested that RCTs address a heterogeneous population and include measures in the time and frequency domains, in addition to a nonlinear analysis of HR, to establish parameters of sympathetic-vagal behavior during rehabilitation protocols after stroke.
On approximate continuity and the support of reflected stochastic differential equations
In this paper we prove an approximate continuity result for stochastic differential equations with normal reflections in domains satisfying Saisho's conditions, which together with the Wong-Zakai approximation result completes the support theorem for such diffusions in the uniform convergence topology. Also by adapting Millet and Sanz-Solé's idea, we characterize in Hölder norm the support of diffusions reflected in domains satisfying the Lions-Sznitman conditions by proving limit theorems of adapted interpolations. Finally we apply the support theorem to establish a boundary-interior maximum principle for subharmonic functions.
1. Introduction. The support theorem for diffusion processes defined by stochastic differential equations has been a much studied topic for probabilists and analysts since the seminal work of Stroock and Varadhan [13].
The typical approach to a support theorem in the norm of uniform convergence consists of two steps. One step is to establish a limit theorem for SDEs, meaning that the solution of an SDE can be approximated by a sequence of solutions of ODEs, obtained by regularizing the Brownian paths [15]; the other is to prove a Denjoy-type approximate continuity theorem, stating that the solution of an SDE is approximately continuous at points in a dense set of the Cameron-Martin space. Millet and Sanz-Solé [7,8] proposed a simple approach to characterizing in Hölder spaces the support of diffusions described by general SDEs, obtained by approximating Brownian motions with linear adapted interpolations, and proved the two inclusions through approximation results.
In this work we are concerned with the support problem of diffusions constrained in a domain D with normal reflection boundary. Such diffusions have been constructed by Anderson and Orey [2] if D has smooth boundary and by Tanaka [14] if D is convex. Correspondingly the support theorem has been established by Doss and Priouret [3] if D has smooth boundary, and a limit theorem has been proved by Pettersson [9] when D is a convex domain and the diffusion coefficient is constant. Recently in [10], a support theorem was proved for stochastic variational inequalities; this means, in particular, that the support theorem holds true for diffusions normally reflected in convex domains.
However, normally reflected diffusions have been constructed for domains much wider than convex domains and smooth domains (see Lions and Sznitman [6] and Saisho [12]), so a natural (and application-motivated) question is whether or not the support theorem continues to hold true for such diffusions. The first step in this respect was taken by Evans and Stroock [4] who proved, under the set of conditions given by Lions and Sznitman, that a weak limit theorem holds. Very recently this result was improved by Aida and Sasaki [1], and independently by Zhang [16], who used an adapted version of the Wong-Zakai approximations rather than the usual one, by removing the admissibility condition from the set of conditions and proving that the convergence takes place, in fact, in L p (and they obtained the convergence speed). Roughly speaking, they proved a strong limit theorem for the reflected diffusions studied by Saisho in [12]. To date, this was the widest, well-studied situation.
On the other hand, however, approximate continuity has not yet been touched in such situations. Our first result fills this gap, and it, together with the Wong-Zakai convergence result in [1] and [16], will yield the support theorem in the locally uniform convergence topology for normally reflected SDEs in domains, satisfying the conditions of Lions and Sznitman [6], except the admissibility. The second contribution of this paper is to present a characterization of the support for reflected diffusions in Hölder spaces in domains satisfying the conditions in [4], by extending the idea of Millet and Sanz-Solé [8] to SDEs with normal reflections.
We recall the Skorohod problem here. Let D be a domain in R d and w · ∈ C([0, +∞); R d ) such that w 0 ∈D. A pair of continuous functions (x, k) is a solution of the Skorohod problem if: • x t ∈D for all t ≥ 0 and x 0 = w 0 ; • for all t ≥ 0, x t = w t + k t ; • k(0) = 0, and k is of bounded variation on each finite interval and satisfies where n s ∈ N xs and N x is the set of inward normal unit vectors at x ∈ ∂D defined by Here and in what follows B(a, r) = {y ∈ R d ; |y − a| < r}, a ∈ R d , r > 0 and |k| t denotes the total variation of k on [0, t].
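Written out, the conditions in the last two bullet points and the definition of the set of inward normal unit vectors presumably take the standard form used in Saisho's framework; the following is a reconstruction under that assumption.

```latex
% Presumed standard form of the omitted displays (Saisho's formulation):
\[
  k_t \;=\; \int_0^t n_s \, d|k|_s , \qquad
  |k|_t \;=\; \int_0^t \mathbf{1}_{\{x_s \in \partial D\}} \, d|k|_s ,
\]
\[
  N_x \;=\; \bigcup_{r>0} N_{x,r}, \qquad
  N_{x,r} \;=\; \bigl\{\, n \in \mathbb{R}^d : |n| = 1,\;
      B(x - rn,\, r) \cap D = \emptyset \,\bigr\}.
\]
```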
Let Ω = C_0([0, ∞), R^{d_1}) be the space consisting of continuous functions from [0, ∞) to R^{d_1} vanishing at 0. Let F be the completion of the Borel σ-algebra on Ω associated with the locally uniform convergence topology and P the distribution of a d_1-dimensional Brownian motion. Then (Ω, F, P) is a complete probability space, and the coordinate process w_t(ω) := ω(t) is a d_1-dimensional Brownian motion under P. We consider the reflected SDE (1.1), where ξ_s ∈ N_{X_s}, together with its expression in Itô's notation. Throughout the paper we will assume that σ and b are Lipschitz continuous functions. Then by Saisho [12] this equation has a unique solution (X, K).
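Equation (1.1) presumably reads as follows; this is a reconstruction under the assumption, customary for Wong-Zakai approximation and support theorems, that (1.1) is written with a Stratonovich integral, its Itô form then differing by the usual Stratonovich-Itô drift correction.

```latex
% Presumed form of the reflected SDE (1.1); the Stratonovich integral is an
% assumption made because Wong-Zakai and support theorems are usually stated
% for Stratonovich equations.
\[
  X_t \;=\; X_0 \;+\; \int_0^t \sigma(X_s)\circ dw_s \;+\; \int_0^t b(X_s)\,ds \;+\; K_t,
  \qquad X_t \in \overline{D},
\]
\[
  K_t \;=\; \int_0^t \xi_s\, d|K|_s , \qquad \xi_s \in N_{X_s},
  \qquad |K|_t \;=\; \int_0^t \mathbf{1}_{\{X_s \in \partial D\}}\, d|K|_s .
\]
```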
Let W d (resp., W d 1 ) denote the space of all R d (resp., R d 1 )-valued continuous functions defined on [0, ∞), and for each α ∈ (0, 1 2 ), W d α denote the subspace of W d consisting of locally α-Hölder continuous functions. Then for every α ∈ [0, 1 2 ), W d α is a Fréchet space with the topology defined by the system of seminorms { · T,α , T > 0}, where for x ∈ W d , H will be endowed with the topology given by the family of seminorms ) the solution to the following deterministic Skorohod problem: Denote by S (H) α the closure of S (H) in W d α , and S , S p and S (H) the closures of S , S p and S (H) in W d , respectively. We are going to prove in Section 2 the approximate continuity theorem, which together with the result in [1] and [16] yields that the support of P • X −1 in W d coincides with S . We also prove in Section 3 an enhanced version of the support theorem by showing that for every α ∈ (0, 1 2 ), the support of The paper is organized as follows: in Section 2 an approximate continuity theorem for normally reflected diffusions is proved, and this result combined with the main result in [1] and [16] implies, of course, the support theorem for such diffusions. Next, we provide in Section 3 an alternate approach to solving the support problem in Hölder spaces. Finally in Section 4, we give a first application of our support theorem to maximum principle for Lsubharmonic functions in domains having nonsmooth boundaries and with possibly degenerate L.
Throughout the paper we use C to denote a generic constant which may be different in different places, and we use the summation convention for repeated indices. Finally, A ≲ B means that there exists a C ≥ 0 such that A ≤ CB.
2. Approximate continuity. In this section we will work in the setup of [6]. But, as in [12], we will not need the admissibility condition on the domain. Precisely, we assume that we are given a domain D ⊂ R d satisfying: (H 1 ) There exists c 0 > 0 such that for any x ∈ ∂D, y ∈D and ξ ∈ N x , where N x denotes the set of unit inward normals at x; (H 2 ) There exist a function ϕ ∈ C 3 b (R d ; R) and a constant α > 0 such that It is obvious that under the conditions(H 1 )-(H 2 ), S = S p = S (H). To see this, we only need to show S ⊃ S (H). In fact, for any h ∈ H, we can take a sequence h n ∈ S such that h n → h in H. Denote by (Z, Ψ) and (Z n , Ψ n ) the corresponding solutions of the Skorohod problem (1.2). Set ρ(t) := e −(2/α)(ϕ(Zt )+ϕ(Z n t )) . Then for any t ≥ 0, by (H 2 ) and the assump- which implies by Gronwall's lemma that sup 0≤t≤T |Z n t − Z t | 2 → 0 as n → ∞ and thus Z ∈ S , yielding that S ⊃ S (H).
Before we proceed, a few words about these conditions are in order. The constant c 0 appearing in condition (H 1 ) is also allowed to equal to zero in [6]. Then the function ϕ in condition (H 2 ) can be taken to be identically zero, and it turns out that some arguments below will break down, and different treatments will be needed. But in this case D is a convex domain, and thus the equation is a special case of stochastic variational inequalities already treated in [10]. Hence we simply assume c 0 > 0 here.
For convenience we record here some basic facts which will be used below; see [5]. Set for i, j = 1, . . . , d 1 , Let T > 0 be arbitrarily fixed.
In the same way, we can also obtain We have the following exponential integrability result.
Proof. By Itô's formula and (H 2 ) we have Since ϕ ∈ C 2 b , there exists a β ′ > 0 such that From this the first inequality follows immediately, and the second follows from the first together with equation (1.1).
Proof. We have by Lemma 2.1 and Proposition 2.1 that We need to prove This is obvious for I 1 and I 3 . To show this for I 2 we notice that Noticing that f and σ are bounded, the sets { I 2i T > εδ −1/2 } ∩ { w T < δ}, i = 2, 3 will be empty for small δ and thus Since for t ∈ [0, T ], By the exponential inequality (cf. [11], Exercise IV.3.16) we have Finally, since we have by using (2.4) that Thus (2.5) has been proved. Now the result follows from (2.3) and (2.5).
Proof. It suffices to prove (2.6). Using (2.1) with α = 1 2 and the above lemma we have Now we can prove the following: is uniformly continuous. Then for all ε > 0 and i, j = 1, 2, . . . , d 1 , Consequently by (2.6), We have by Itô's formula, Obviously and it is clear from Corollary 2.1 that it holds also for i = 8. For I 222 we notice where It suffices to prove we have by exponential inequality
Hence by letting
Now we extend the result to f ∈ C b , which is uniformly continuous. Let ε > 0 be given. For any ε ′ > 0 choose an η ∈ (0, ε 2T ) sufficiently small such that where c 1 and c 2 are constants appearing in Lemma 2.1.
It is easy to see Y 2 T < ε 4 . Moreover, since Y 1 (T ) ≤ η 2 w 2 T T , we have by exponential inequality and with arguments similar to the proof of (2.10) that if δ ∈ (0, 1], Thus for such δ, Now we conclude by letting δ → 0 and by the arbitrariness of ε ′ . Proof. It suffices to prove (i), since then (ii) follows from (i) and (2.3).
We have Since Finally, it is trivial that This completes the proof.
Now are ready to state our main result. Let (Y, l) denote the solution of the following deterministic Skorohod problem: where η(s) ∈ N Ys .
Theorem 2.1. For any h ∈ S and ε > 0, Proof. We first assume h ≡ 0. Since (X, K) and (Y, l) are solutions to equations (1.1) and (2.11), respectively, we have By Itô's formula we have Combining these with the fact By Itô's formula Thus we have by Itô's formula, Obviously, i =2 As for I 2 we have It is easily seen that Combining all the above and the fact that ϕ is bounded, we have where A(t) satisfies that for every ε > 0, lim δ↓0 P( A T > ε| w T < δ) = 0.
On the set {ω; A T < ε}, we have Since ε is arbitrarily small, Finally, to see it suffices to notice that and use Lemma 2.4. For general h ∈ S, just as in the proof of [5], Theorem 8.2, pages 527-528, we set Then w ′ t := w t − h t is a Brownian motion under P ′ , and (X, K), (Y, l) satisfy the following equations, respectively: Therefore according to the case of h ≡ 0 we have for every ε > 0, which, together with the fact that M 1 is a continuous functional of w, yields that Remark 2.1. In the last step of the proof above, we encounter the situation that the drift b ′ depends also on time t. But as in [5], Theorem 8.2, everything still works with trivial modifications.
3. The support problem.
3.1. Conditions and useful estimates. The approximate continuity theorem proved in the above section, together with the Wong-Zakai approximation theorem proved in [1], gives, in a way similar to that paved in [13], the support theorem for reflected diffusions under the conditions (H_1)-(H_2). In this section we will prove the support theorem based upon the idea in [8] when the domain D is supposed to satisfy the following conditions:
(A) There exists a constant r_0 > 0 such that for any x ∈ ∂D,
(B) There exist constants δ > 0 and β ≥ 1 such that for any x ∈ ∂D there exists a unit vector l_x such that, where ⟨·, ·⟩ denotes the usual inner product in R^d,
(C) There exists a function ϕ ∈ C^2_b(R^d) and a positive constant γ such that for any x ∈ ∂D, y ∈ D̄ and n ∈ N_x, ⟨y − x, n⟩ + (1/γ)⟨Dϕ(x), n⟩ |y − x|^2 ≥ 0.
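The displays belonging to conditions (A) and (B) presumably take Saisho's standard form (with N_x and N_{x,r} as recalled in the Introduction); the following reconstruction is offered under that assumption.

```latex
% Presumed standard form of the omitted displays in conditions (A) and (B):
\[
  \text{(A)}\qquad N_x \;=\; N_{x,r_0} \;\neq\; \emptyset
  \qquad\text{for every } x \in \partial D,
\]
\[
  \text{(B)}\qquad \langle l_x, n\rangle \;\ge\; \frac{1}{\beta}
  \qquad\text{for any } n \in \bigcup_{y \in B(x,\delta)\cap \partial D} N_y .
\]
```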
Lemma 3.1 ([1], Lemma 2.3). Assume (A)-(B) hold, and (x, k) is the solution to the Skorohod problem associated with a continuous function w
such that x 0 = w 0 ∈D. Then for θ ∈ (0, 1], there exist constants c 1 , c 2 , C dependent on θ, δ, β, γ 0 such that for all 0 ≤ s ≤ t ≤ T , (1.1). Moreover, for all 0 ≤ s < t < ∞, Let n ∈ N and t i = iT 2 −n (here we should have used t n i instead of t i to indicate the dependence on n, but in order to not surcharge the notation, we omit the superscript n), ∆ = 2 −n T , and for t ∈ [t i , t i+1 ) set Consider the following reflected equation: Denote the solution by (X n , K n ).
Support theorem.
We first state our main theorem. To prove the theorem, we will apply the following results; cf. [8].
Proposition 3.1. Let F be a measurable map from Ω to a Banach space (X, · ): (1) Let Z X 1 : H → X be measurable and H n : Ω → H be a sequence of random variables such that for any ε > 0, Then supp(P • F −1 ) ⊂ Z X 1 (H). (2) Let Z X 2 : H → X be measurable and for fixed h, T h n : Ω → Ω be a sequence of measurable transformations such that P • (T h n ) −1 ≪ P, and for any ε > 0, lim sup Then supp(P • F −1 ) ⊃ Z X 2 (H). where · T,θ is defined in the Introduction.
Using the Riemannian sum approximation of stochastic integrals, it is easy to see that Y n := X(w − w n + h) solves the following RSDE: Proof. By Lemma 3.2, Moreover, for all 0 ≤ s ≤ t ≤ T and for any θ ∈ (0, 1 4 To prove this proposition, we need some lemmas, and without loss of generality we take T = 1. Proof. Set ξ = max 0≤s≤t |w s |. Note that P(|w i t | ∈ dx) = 2 πt e −x 2 /(2t) dx, i = 1, . . . , d 1 and thus
The result follows from Doob's maximal inequality and that Lemma 3.7. There exists a constant C p such that for any t ∈ [0, 1] and any p ≥ 1, Proof. By Lemma 3.1, for any θ ∈ (0, 1], Note that for any p ≥ 1, For any c, by Lemmas 3.5 and 3.6, Now combining these two estimates gives E(|φ n |t n t ) 2p ≤ C p ∆ p . The other result follows from Y n t = L n t + φ n t and the above estimate. Proof.
Now note that
Trivially by the Burkholder and Hölder inequalities we have From the estimates above we deduce Now we are ready to prove Proposition 3.3.
Proof of Proposition 3.3. For the cases s, t ∈ [t_{i−1}, t_i] and t_{i−1} ≤ s ≤ t_i < t ≤ t_{i+1} for some 1 ≤ i ≤ 2^n, it follows from Lemmas 3.7-3.8 that We get by Itô's formula, where according to (C), By the conditions on σ, b, ϕ, Using the BDG inequality we get
Note that ∫_0^· U_s^n(r_n) dw_r^n is the piecewise-linear interpolation of M_·^n := ∫_0^{·−∆} U_s^n(π_n(r)) dw_r with π_n(r) := max{t_k ; t_k ≤ r}.
Thus by Doob's inequality and Lemma 3.7 we get Moreover, Summing up we have which together with Gronwall's lemma yields E|Y^n_t − Y^n_s|^{4p} ≤ C_p |t − s|^p. It follows from this estimate and Lemma 3.8 that E|φ^n_t − φ^n_s|^{4p} ≤ C_p |t − s|^p. Now (3.5) holds due to Kolmogorov's continuity criterion.
Continuing this process and using arguments similar to those used in (3.15) above, we get
Decreased cholinesterase level combined with renal dysfunction and sympathetic denervation associated with increased cardiac mortality in systolic heart failure
Aims Cardiac mortality in patients with heart failure (HF) is likely to be aggravated by malnutrition, assessed by serum cholinesterase (ChE) level, as well as by kidney dysfunction or impairment of cardiac sympathetic denervation. Their prognostic interactions, however, have not been determined. Methods A total of 991 systolic HF patients were enrolled in our HF database following clinical evaluation including evaluation of the nutrition state and assessment of standardized heart-to-mediastinum ratio (sHMR) of iodine-123-labeled meta-iodobenzylguanidine activity. Patients were followed up for an average of 43 months with the primary endpoint of fatal cardiac events (CEs). Results The CE patient group had a lower level of ChE, lower estimated glomerular filtration rate (eGFR) and lower late sHMR than those in the non-CE patient group. A five-parameter model with the addition of serum ChE selected in the multivariate logistic analysis (model 2) significantly increased the AUC predicting risk of cardiac events compared with a four-parameter model without serum ChE (model 1), and net reclassification analysis also suggested that the model with the addition of serum cholinesterase significantly improved cardiac event prediction. Moreover, in overall multivariate Cox hazard analysis, serum ChE, eGFR and late sHMR were identified to be significant prognostic determinants. HF patients with two or all of the prognostic variables of serum ChE < 230 U/L, eGFR < 48.8 ml/min/1.73 m2 and late sHMR < 1.90 had significantly and incrementally increased CE rates compared to those in HF patients with none or only one of the prognostic variables. Conclusion Decreases in cholinesterase level and kidney function further increase cardiac mortality risk in HF patients with impairment of cardiac sympathetic innervation.
Introduction
Obesity is well known to be an independent risk factor for metabolic, endocrine and cardiovascular diseases (1).On the other hand, it has been shown that heart failure (HF) patients who are malnourished or experiencing rapid weight loss may have an increased prognostic risk independent of age, New York Heart Association (NYHA) functional class, or cardiac functions (2)(3)(4).These findings clearly demonstrate the importance of controlling energy intake and body weight for better prognosis, as indicated by many prophylactic or treatment guidelines for atherosclerotic disorders.The dichotomic aspects have emphasized the necessity to appropriately select a nutritional intervention strategy in association with the nutrition state, patient activity, baseline disease, comorbidities and cardiac function.Although body mass index (BMI) is conventionally used for the assessment of obesity or calorie intake in relatively heathy subjects, the usefulness of BMI may be limited for identifying a malnutrition state or sub-clinical cachexia in patients with pathologic conditions.This is because a pathological undernutrition state is related to many factors such as protein catabolism, lipolysis, bone loss, kidney dysfunction and anemia together with increases in sympathetic nerve activity, inflammatory cytokines and/or insulin resistance (5).It has recently been shown that serum cholinesterase can be an indicator of hepatic function and nutrition, and several investigations have revealed its close correlation with prognosis in HF patients (6,7).Although these biomarkers are speculated to increase cardiac mortality by the development of a vicious cycle, there are few reports on the prognostic interactions in HF patients.
With reference to previous studies on the prognosis of HF (8)(9)(10)(11)(12)(13), the aim of the present study was to determine prognostic incremental values for risk stratification of HF patients with reduced left ventricular ejection fraction (LVEF), with particular focus on the prognostic interactions of serum cholinesterase levels, renal function and cardiac sympathetic innervation as nutritional parameters.
Study design and study subjects
A total of 991 consecutive patients with symptomatic HF and echocardiographic LVEF <50% who were admitted to our hospital between April 2010 and December 2016 and underwent myocardial scintigraphy of 123 I-labeled meta-iodobenzylguanidine (MIBG) for prognostic evaluation after compensatory and optimal medical therapy were retrospectively enrolled in our HF database.
The inclusion criteria for this retrospective study were symptomatic HF requiring hospitalization, LVEF <50% determined by echocardiography, and age >20 years.Patients who refused resuscitation, patients who had obvious malignancy or hemorrhagic disease, and patients who were younger than 20 years of age were excluded.The patients included 738 males (74.5%).The mean age of the patients was 67.5 ± 12.9 years and the mean LVEF was 32.7 ± 11.3%.The diagnosis of HF was made by the Framingham criteria including typical symptoms of jugular venous distention, peripheral edema, pulmonary rales, S3 or S4 gallop sounds, and tachycardia.A chest x-ray was taken and two-dimensional echocardiography was performed to corroborate the diagnosis and rule out other diseases with similar symptoms and signs.In addition to a history of myocardial infarction or coronary revascularization, the etiology of HF, such as ischemic or nonischemic, was differentiated by electrocardiography, echocardiography, scintigraphy, or a combination of these when necessary as well as by coronary vascular information obtained by computed tomography, magnetic resonance imaging, and invasive selective coronary angiography.Just before discharge, blood tests were performed to measure the levels of hemoglobin (Hb), albumin, lipid profiles, glutamic-oxaloacetic transferase (GOT), glutamic-pyruvic transferase (GPT), lactate dehydrogenase (LDH), serum ChE, creatinine and brain natriuretic peptide (BNP).Renal function was evaluated by estimated glomerular filtration rate (eGFR) using the standard formula.The baseline geriatric nutrition risk index (GNRI) was calculated from serum albumin and BMI using the following formula: GNRI = 14.89 × serum albumin (g/dl) + 41.7 × present body weight/[height 2 (m 2 ) × 22] = 14.89 × serum albumin (g/dl) + 41.7 × BMI/22.Plasma BNP levels were measured in 598 patients (60.3%) and NT-pro BNP levels were measured in the remaining 393 patients (39.6%) before discharge when heart failure was stable.To evaluate BNP and NT-pro BNP in an integrated manner, all of the HF patients were classified into four groups according to the following values based on ESC HF guidelines: 0-40 pg/ml and 0-125 pg/ml for stage 1, 41-100 pg/ml and 126-400 pg/ml for stage 2, 101-200 pg/ml and 401-900 pg/ml for stage 3, and 201 pg/ml and 901 pg/ml for stage 4, respectively.
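As an illustration of how these derived variables are computed, the following sketch (Python; function and variable names are ours, not taken from the study) evaluates the GNRI exactly as defined above and assigns the four-level natriuretic peptide stage using the listed BNP/NT-proBNP cut-offs.

def gnri(albumin_g_dl: float, bmi: float) -> float:
    """Geriatric nutritional risk index as defined in the text."""
    return 14.89 * albumin_g_dl + 41.7 * bmi / 22.0

def np_stage(value_pg_ml: float, marker: str = "BNP") -> int:
    """Four-level stage from BNP or NT-proBNP (pg/ml), using the listed cut-offs."""
    cuts = {"BNP": (40, 100, 200), "NT-proBNP": (125, 400, 900)}
    c1, c2, c3 = cuts[marker]
    if value_pg_ml <= c1:
        return 1
    if value_pg_ml <= c2:
        return 2
    if value_pg_ml <= c3:
        return 3
    return 4

# Example: albumin 3.2 g/dl, BMI 21 -> GNRI ~ 87.5; NT-proBNP 650 pg/ml -> stage 3
print(round(gnri(3.2, 21.0), 1), np_stage(650, "NT-proBNP"))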
Echocardiographic assessment
Two-dimensional echocardiography was performed by echocardiographic technicians with no knowledge of the patients' clinical data in the left lateral recumbent position using a commercially available ultrasound machine equipped with a 2.5 MHz variable frequency transducer to measure the parameters from views of the paracentral long axis and short axis and apical 4, 3, and 2 chambers.The following echocardiographic parameters were measured at the time of compensated heart failure prior to discharge: left atrium diameter (LAD; mm), left ventricular enddiastolic diameter (LVDd; mm), LVEF (%) calculated using the biplane modified Simpson's method, left ventricular volume at end-diastole (EDV; ml), left ventricular volume at end-systole (ESV; ml) and septal E/e' (14,15).
Measurement of sympathetic denervation of the left ventricle
Cardiac sympathetic innervation by 123 I-labeled MIBG of 111 MBq was quantitatively assessed in patients with compensated heart failure in a fasting and resting state by cardiac imaging using a gamma camera with a low-energy generic collimator at 15-30 min (early images) and 4 h (late images) after an intravenous tracer injection, as previously described (9)(10)(11).
Cardiac 123 I-MIBG activity was measured as heart-tomediastinum ratio (HMR) using dedicated MIBG software (Smart MIBG Software, Tokyo, Japan) operated by experienced nuclear medicine technicians with no knowledge of the patients' clinical data, automatically setting the region of interest to the upper mediastinum and the entire heart on a planar anterior image. 123I-MIBG washout kinetics from the left ventricle were calculated from early and late cardiac 123 I-MIBG activities as washout rate (WR).The HMR of cardiac 123 I-MIBG was measured using a late image and standardized as sHMR for medium energy collimator conditions by a mathematical method established in a crosscalibration phantom experiment (12,16).This method not only minimizes the variability of 123 I-MIBG HMR but also makes it possible to compare HMR data from sHMR regardless of differences in data acquisition methods such as data obtained using low-and medium-energy collimators or data obtained at different research facilities.
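For orientation, the quantities described here can be sketched as follows (Python; this is a simplified illustration only — the dedicated MIBG software applies its own region-of-interest handling and the standardization to sHMR, and the washout-rate definition shown, with simple mediastinal background subtraction and no decay correction, is just one commonly used convention).

def heart_to_mediastinum_ratio(heart_counts_per_pixel: float,
                               mediastinum_counts_per_pixel: float) -> float:
    """HMR on a planar anterior image: mean heart counts over mean mediastinal counts."""
    return heart_counts_per_pixel / mediastinum_counts_per_pixel

def washout_rate(early_heart: float, early_med: float,
                 late_heart: float, late_med: float) -> float:
    """Washout rate (%) from early and late cardiac MIBG activity.

    Uncorrected convention assumed here; dedicated software may apply
    background subtraction schemes and decay correction that differ.
    """
    early = early_heart - early_med
    late = late_heart - late_med
    return (early - late) / early * 100.0

print(round(heart_to_mediastinum_ratio(152.0, 80.0), 2))  # e.g. 1.90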
Follow-up protocol
HF patients enrolled in our hospital HF database were regularly followed up by experienced cardiologists in the out-patient clinic for one year for a longer period if the patients survived.The cardiac events (CEs) that were considered as the primary endpoints of the present study were as follows: fatal CEs such as sudden cardiac death, death due to refractory and progressive pump failure, fatal sustained ventricular tachyarrhythmias, including ventricular tachycardia and ventricular fibrillation, and appropriate implantable cardioverter defibrillator (ICD) therapy against fatal ventricular arrhythmias.Clinical findings were confirmed by reviewing the patients' medical records and outcome analysis was retrospectively performed.In HF patients enrolled in the present study, sudden cardiac death was defined as witnessed cardiac arrest and death within one hour of acute onset or unexpected death in patients who had survived for the previous 24 h.The present study was conducted in accordance with the principles outlined in the Declaration of Helsinki and was approved by our institutional ethics committee for enrollment in our database and use of data for clinical research.The present study was approved by the Ethics Committee of Obihiro Kosei Hospital.Its approval number was 2016-015.Due to the retrospective, noninterventional, observational nature of the study as well as the fact that the clinical research was publicly disclosed on the institution's Web site and the fact that comprehensive consent was determined by the institution's ethics committee, the need for each patient's written informed consent was waived.
Statistical analysis
Statistical values are presented as means ± SD.Means of values in the two groups were compared by the unpaired t-test and categorical variables were compared by the χ 2 test.New parameters were added to the conventional risk factors for patients with HF, and area under the curve (AUC) and net reclassification index (NRI) were used to assess whether the prognostic evaluation was significantly improved.Reclassification was additionally used to re-evaluate the results.
A time-dependent cumulative event-free curve was generated by Kaplan-Meier analysis using the key parameters identified in the present study, and log-rank tests were also used to compare the curves as needed.The number of significant variables identified in the univariate analysis was statistically appropriate for the number of cardiac events in the present study, and multivariate analysis with the Cox proportional hazards model was performed to calculate hazard ratios and 95% confidence intervals (CIs) for the significant variables.Receiver operating characteristic (ROC) analysis was performed to determine the optimal cut-off values for the independent significant parameters of CEs.Statistical analysis in the present study was performed using the computer software SAS for Windows (version 9.4, SAS Institute, Cary, North Carolina, USA) and Mathematica software for Windows (version 12.3, Wolfram Research Inc., Champaign, IL.USA).P values less than 0.05 were considered significant.
Results
Following measurement of serum ChE and calculation of eGFR, cardiac sympathetic innervation was quantified as the standardized HMR of MIBG activity, as shown by two typical cases (Figure 1). Despite having almost identical levels of cardiac dysfunction, Case 1, with almost normal values of serum ChE, eGFR and late sHMR (2.37), had no CEs, while Case 2, with apparent decreases in serum ChE, eGFR and late sHMR (1.34), died of progressive pump failure during the observation period. In 313 HF patients (31.5%), CEs recorded during a mean follow-up period of 43 months were as follows: sudden cardiac death in 29 patients, cardiac death due to refractory and progressive pump failure in 240 patients, lethal ventricular tachyarrhythmias in 22 patients and appropriate ICD treatment against lethal ventricular tachyarrhythmias in 22 patients. HF patients in the CE group were older than those in the non-CE group, and HF patients in the CE group had a greater NYHA functional class, lower eGFR (36.7 ± 25.3 vs. 50.1 ± 27.8 ml/min/1.73 m², P < 0.0001), lower BMI (22.1 ± 4.3 vs. 23.0 ± 4.5, P = 0.0027) and lower serum ChE (216.3 ± 82.5 vs. 258.5 ± 84.6 U/L, P < 0.0001) than patients in the group without CEs (Table 1). Patients in the CE group also had a significantly lower late sHMR than patients in the non-CE group: 1.72 ± 0.41 vs. 2.00 ± 0.48, P < 0.0001 (Table 2).
Net reclassification analysis was performed using the four-parameter logistic model created by the combination of NYHA functional class, ESV, eGFR and late sHMR (model 1) and the five-parameter model that additionally included ChE (model 2) [Table 4A]. Of the 314 patients who died of a cardiac event, classification was improved in 22 patients and worsened in 6 patients using the five-parameter model. The net gain in reclassification proportion was 5.4% (P = 0.0015). Of the 677 patients who did not die of a cardiac event, 14 were reclassified upward and 248 patients were reclassified downward, with a net gain in reclassification of −57.4% (P < 0.0001). The NRI in all subjects was 62.7% (P < 0.0001) [Table 4B]. Thus, model 2 significantly improved the identification of patients at low risk of cardiac death when compared to model 1. In the present formula, NYHA functional class (class I/1, class II/2, class III/3, class IV/4) was a categorical variable, and ESV, eGFR, late sHMR and ChE were continuous variables.
The respective cut-off values for serum ChE, eGFR and late sHMR were calculated from the ROC curve. By using the cut-off values of the three parameters, patients were clearly categorized into high-risk and low-risk sub-groups (Figure 2). Patients who had a serum ChE level of less than 230 U/L, an eGFR of less than 48.8 ml/min/1.73 m² or a late sHMR of less than 1.90 had significantly lower cardiac event-free rates than did their counterparts (Figure 2). The combination of two of the three parameters (ChE, eGFR and late sHMR) more clearly discriminated HF patients into three risk categories: low-, intermediate- and high-risk groups (Figure 3). Further accumulation of the three parameters (i.e., serum ChE < 230 U/L, eGFR < 48.8 ml/min/1.73 m² and late sHMR < 1.90) was demonstrated to increase CE risks in HF patients (Figure 4).
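The accumulation score used in Figure 4 simply counts how many of the three cut-offs a patient falls below; a minimal sketch of that scoring (Python; the function name is ours) is shown below.

def risk_score(che_u_l: float, egfr: float, late_shmr: float) -> int:
    """Number of abnormal prognostic markers (0-3) per the cut-offs in the text."""
    return sum([che_u_l < 230.0, egfr < 48.8, late_shmr < 1.90])

# Example: ChE 210 U/L, eGFR 40 ml/min/1.73 m2, late sHMR 1.95 -> score 2
print(risk_score(210.0, 40.0, 1.95))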
Discussion
The present study revealed a synergistic role of serum ChE as a malnutritional parameter in risk stratification of patients with systolic HF, conventionally assessed by renal dysfunction and cardiac sympathetic denervation.
Cardiac sympathetic nerve denervation and malnutrition
A low nutritional status worsens the prognosis of HF patients (2,17).In HF patients, impaired absorption and increased permeability due to liver dysfunction, intestinal edema and decreased appetite due to right HF are possible causes of a malnutritional state.In elderly HF patients, furthermore, inadequate energy intake, increased energy expenditure, and impaired anabolism can combine to form a malnutritional state that is prone to water retention and infection.In addition, the parasympathetic nervous system is predominantly involved in the secretion of digestive enzymes such as glucosidase, peptidase, and lipase, and digestion and absorption are inhibited in a hyper-sympathetic state, which likely contributes to malnutrition in HF patients.In the present study, levels of liver enzymes were also significantly higher in the CE group than in the non-CE group.Serum ChE level, an indicator of liver dysfunction and malnutrition, was also an independent prognostic factor together with an indicator of cardiac sympathetic dysfunction.Moreover, the combination of these two parameters suggests that further risk stratification of HF patients is possible.
Renal dysfunction and malnutrition
Recently, the concept of protein energy wasting (PEW), defined as a depletion of protein energy sources, has been used to define malnutrition in patients with renal dysfunction.The causes of PEW include inadequate nutritional intake, systemic inflammatory response syndrome due to a cytokine storm including HF, uremia, endocrine abnormalities (increased insulin resistance, overproduction of catabolic hormones, decreased production of anabolic hormones), and metabolic acidosis (18,19).
The vicious cycle leads to hypoproteinemia and albuminemia.
In addition, hyperglycemia and increased insulin resistance in patients with renal dysfunction lead to abnormal glucose metabolism.In the liver, this leads to accelerated glycogen breakdown, increased glycogenesis, and inhibition of sugar and amino acid oxidation (18).Furthermore, for adipose tissue, renal dysfunction induces insulin-dependent suppression of glucose oxidation, increased lactate production and fatty acid release.Levels of total cholesterol, HDL and LDL are decreased (19).
In the present study, the synthesis of lipids and proteins was also suppressed in the CE group compared to that in the non-CE group, with significantly lower blood levels of lipid profile and protein/ albumin in the CE group.Serum ChE level, a surrogate parameter for lipid and protein-albumin synthesis in the liver, was also significantly lower in the CE group.Serum ChE was an independent prognostic factor, as was eGFR, a surrogate parameter for renal dysfunction, suggesting that the combination of these two parameters might enable synergistic risk stratification of HF patients.
Kaplan-Meier event-free curves clearly show that accumulation of the three abnormal parameters (i.e., serum cholinesterase < 230 U/L, estimated glomerular filtration rate (eGFR) < 48.8 ml/min/1.73 m² and late standardized heart-to-mediastinum ratio of MIBG activity (sHMR) < 1.90) synergistically decreased event-free rates. The score category (0-3) indicates the number of the three abnormal parameters.
Cardiac sympathetic nerve denervation and renal dysfunction
When hypoperfusion is sustained due to HF, the reninangiotensin-aldosterone system, cardiac sympathetic nerve function, arginine vasopressin, endothelin I, and atrial natriuretic peptide are constantly activated (8).Long-lasting HF leads to increased cardiac sympathetic function, contraction of import and export arteries, and decreased GFR, which further contribute to renal dysfunction.
Decreased GFR leads to anemia due to impaired erythropoietin production and other causes.Anemia worsens heart failure, increases cardiac sympathetic function and leads to refractory HF (8,20).
This study also showed significant cardiac sympathetic dysfunction and renal dysfunction in the CE group compared to those in the non-CE group.Moreover, cardiac sympathetic denervation and renal dysfunction were selected as independent prognostic factors, indicating that the combination of these two parameters might enable clearer risk stratification in HF patients.
Limitations
The present study was designed as an observational and noninterventional cohort study of patients with irreversible systolic HF at a single facility.
A large multicenter intervention study based on the results of our study could contribute to the development of better prevention and treatment strategies, including diet and cardiac rehabilitation, using appropriate indications for malnutrition in HF patients with comorbid cardiac cachexia who are at high risk of death.In the present study, there was no evaluation of the Prognostic Nutrition Index (PNI) or Controlling Nutritional Status (CONUT), which have been reported to have a prognostic value for malnutrition in HF patients (21,22).
In HF patients at risk for nutritional impairment, approaches to improving nutritional status have been reaffirmed as part of the role of cardiac rehabilitation.Improvement of nutritional status is expected to be effective for improving exercise capacity.It has also been reported that cardiac rehabilitation increases muscle mass, prevents muscle hypercatabolism, and improves nutritional status (23,24).
There are two main types of therapy for HF patients: pharmacological and non-pharmacological.The use of percutaneous coronary intervention, catheter ablation, ICDs and biventricular pacemakers, transcatheter aortic valve implantation, MitraClip ® implantation, and other non-pharmacologic therapies have received attention, and the importance of nutritional management, including diet and exercise rehabilitation, should be further discussed.
Conclusion
A lower level of serum ChE can independently and synergistically predict the risk of cardiac mortality in systolic HF patients with cardiac sympathetic innervation and renal dysfunction.
FIGURE 1. Measurements of heart (H)-to-mediastinum (M) ratio (HMR) of 123I-MIBG activity using a late planar anterior image standardized with dedicated MIBG software (Smart MIBG Software, Tokyo, Japan). Case 1 (left panel): in a 51-year-old male, LVEF was reduced (21%) due to ischemic cardiomyopathy, but eGFR (67 ml/min/1.73 m²), serum cholinesterase (330 U/L) and late standardized HMR (2.37) were nearly normal. No fatal cardiac events occurred during the observation period. Case 2 (right panel): a 71-year-old woman underwent cardiac resynchronization therapy after refractory heart failure management with optimal medical therapy alone. She had significantly decreased values of LVEF (28%) due to non-ischemic cardiomyopathy, eGFR (23 ml/min/1.73 m²), serum cholinesterase (136 U/L) and late standardized HMR (1.34). The patient died of progressive pump failure during the observation period.
TABLE 1. Comparison of clinical data for patients with and those without cardiac events.
TABLE 4A. Reclassification analysis in patients who died and those who did not die of a cardiac event.
TABLE 3. Results of logistic multivariate analyses.
TABLE 5. Results of univariate and multivariate analyses.
How Reliable Are ATP Bioluminescence Meters in Assessing Decontamination of Environmental Surfaces in Healthcare Settings?
Background Meters based on adenosine triphosphate (ATP) bioluminescence measurements in relative light units (RLU) are often used to rapidly assess the level of cleanliness of environmental surfaces in healthcare and other settings. Can such ATP measurements be adversely affected by factors such as soil and cleaner-disinfectant chemistry? Objective This study tested a number of leading ATP meters for their sensitivity, linearity of the measurements, correlation of the readings to the actual microbial contamination, and the potential disinfectant chemicals’ interference in their readings. Methods First, solutions of pure ATP in various concentrations were used to construct a standard curve and determine linearity and sensitivity. Serial dilutions of a broth culture of Staphylococcus aureus, as a representative nosocomial pathogen, were then used to determine if a given meter’s ATP readings correlated with the actual CFUs. Next, various types of disinfectant chemistries were tested for their potential to interfere with the standard ATP readings. Results All four ATP meters tested herein demonstrated acceptable linearity and repeatability in their readings. However, there were significant differences in their sensitivity to detect the levels of viable microorganisms on experimentally contaminated surfaces. Further, most disinfectant chemistries tested here quenched the ATP readings variably in different ATP meters evaluated. Conclusions Apart from their limited sensitivity in detecting low levels of microbial contamination, the ATP meters tested were also prone to interference by different disinfectant chemistries.
Introduction
Several types of pathogens can readily survive on high-touch environmental surfaces in healthcare and other settings [1][2][3][4]; as a result, these surfaces may act as vehicles for the spread of a variety of nosocomial pathogens [3,5]. In 2002 in the US, 5% of all patients acquired such infections and, of these, the mortality rate was nearly 6% [6][7][8][9][10][11]. In the United States alone, the cost of such hospital-acquired infections (HAIs) is estimated to be between 5 and 29 billion dollars annually [9,[12][13][14].
To limit the impact of HAIs, routine cleaning and disinfection of high-touch environmental surfaces in healthcare facilities is crucial for infection control [2,4,[15][16][17][18][19]. In addition, it is imperative to ascertain that the decontamination procedures in such facilities are optimal. While the widely used practice of visual inspections may be sufficient for aesthetic purposes, it does not provide quantitative feedback on the effectiveness of the decontamination process [7,20,21]. While culture-based approaches provide quantitative results, they cannot provide immediate feedback and antibody-or PCR-based techniques have limited applications such as in the food industry where immediate availability of the results may be less crucial [2,13].
ATP bioluminescence meters, which measure the concentration of ATP as relative light units (RLU) in organic material and living cells [16], are widely used in the food and beverage industries because of their ease of use and fast turn-around of results. Such meters are increasingly being used in healthcare facilities as well.
This study evaluated four leading ATP bioluminescence monitoring systems for their accuracy and linearity in detecting ATP values, detection limits for microbial count, correlation with plate-counting using Staphylococcus aureus and the quenching and enhancement effect of various disinfectant chemistries. Table 1 shows the list of tested disinfectants in this study. They were selected because they are sold for the decontamination of environmental surfaces in healthcare settings. In addition to the commercial products, a few antimicrobial active ingredients were also used in this study to compare their results with actual disinfectant formulations.
Methods
First, the ATP luminometers were tested for their linearity in reading standard ATP solutions. A 0.1 molar solution of ATP standard powder was prepared in autoclaved deionized (DI) water, followed by serial 10-fold dilutions from 10^-2 to 10^-10. 10 µL of each dilution was pipetted directly onto the swab tip using positive displacement tips. This was done to avoid the variability resulting from the difference in swab-to-swab efficiency in picking up the organic load from the surface. Each meter measured ATP and reported the data in RLU. Later, serial dilutions of S. aureus were prepared from a freshly thawed stock culture. A 10 µL volume of each serial dilution (10^0 to 10^8) was separately pipetted directly onto the tip of each swab and the readings were recorded. To correlate the RLU reading with the actual CFU, 900 µL of the 10^-9 and 10^-10 dilutions of the bacterial suspension were separately plated on TSA in triplicate and incubated for 24 hours at 36 ± 1 °C. Any chemical interference through quenching or enhancement of bioluminescence was tested by placing 10 µL of the appropriate dilution of ATP standard solution onto the tip of a swab, followed by placement of 10 µL of the test disinfectant. The baseline ATP solution concentrations used above were individually determined for each of the luminometers, selecting the aliquot with an ATP concentration that fell between the ATP meter's true maximum and minimum detection limits based on its obtained linearity standard curve. Also, the volume of disinfectant dispensed on the swabs, 10 µL, was determined by testing the average volume of water required to keep 50% of a 10 cm × 10 cm hard non-porous surface (a typical surface area recommended by ATP meter manufacturers to be swabbed) wet for 3 minutes. The calculated average volume required was 80 µL at ambient room temperature. This volume was reduced to 10 µL to compensate for the evaporation of the volatile ingredients.
To account for the repeatability of the results, all tests were performed in triplicate.
Statistical Analysis
Microsoft Excel was used in this study to determine the correlation, R², between mean readings. A log transformation of the RLU and CFU values was used, since the original distributions are highly skewed with a long tail towards the higher values. Therefore, the geometric mean is used for these calculations. Figure 1 shows the linearity of the geometric mean of the ATP readings versus the molarity of the ATP standard solution.
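As an illustration of this analysis step (the study itself used Microsoft Excel), the following Python sketch, with made-up replicate values rather than the study's data, computes geometric means of replicate RLU readings and the coefficient of determination between the log-transformed RLU and CFU values.

import numpy as np

# Hypothetical replicate RLU readings (three swabs per dilution) and the
# corresponding CFU counts; the real study values are reported in the tables.
rlu = np.array([[1200, 1350, 1100],
                [310, 280, 295],
                [35, 41, 38]], dtype=float)
cfu = np.array([6.2e5, 5.6e4, 8.9e2])

geo_mean_rlu = np.exp(np.log(rlu).mean(axis=1))   # geometric mean per dilution

# R^2 between log10(CFU) and log10(geometric-mean RLU)
x, y = np.log10(cfu), np.log10(geo_mean_rlu)
r = np.corrcoef(x, y)[0, 1]
print(f"geometric means: {geo_mean_rlu.round(1)}, R^2 = {r**2:.4f}")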
Results
As can be seen, none of the ATP meters provided a linear relationship between ATP readings and the actual concentration of ATP throughout the whole test range. Based on the results, approximately 6 logs of RLU is the largest difference observed in reading the same ATP concentration among the different ATP meter brands. It can be noticed, however, that in some selected regions the readings are almost linear; for example, for Hygiena, if the ATP reading at 10^-6 molarity is not considered, the rest of the data are completely linear (R² of 0.99952 compared to 0.98591 for the dataset including the 10^-6 molarity data point). Table 2 shows the correlation of the ATP values to the ATP readings, both on logarithmic scales. Figure 2 shows CFUs of S. aureus versus the geometric mean of the ATP readings for each ATP meter.
The detection limit of each ATP meter is displayed in Figure 2, as well as the smallest detectable number of the test organism on the swab. Table 3 shows the exact CFU values that each luminometer was able to detect. It also demonstrates the correlation between RLU readings and CFUs. Figures 3 to 6 show the quenching/enhancement effect of each disinfectant on the ATP readings.
In Figures 3 to 6, the horizontal line represents the average ATP reading for the control sample, which is a dilution of the standard ATP solution and is specified in the caption of each figure. The error bars show the standard deviation of the three measurements at each point. Instances in which the bars do not reach the horizontal line (even with their error bars) indicate that the disinfectant has significantly quenched the ATP readings.
Discussion and Concluding Remarks
In spite of the wide acceptance of ATP measurement technology, there are gaps in our knowledge concerning the true reliability of the approach for assessing the cleanliness of environmental surfaces in healthcare and other settings. A correlation between RLU and colony forming units (CFU) has been reported previously [15,[17][18][19][20][21][22]. In some studies, ATP meters have not been examined for their correlation with the actual microbial count, and only RLU values have been reported as a measure of surface cleanliness [16,17,23]. Other studies suggest a loose correlation between the RLU values and the actual counts [1]. Further studies have also shown the interaction of detergents and disinfectants with RLU readings [6,8,10,11,16,17,23] and some include comparisons of different bioluminescence meters [1,12,14]. Carrick et al. compared four different ATP meters and their swab units and found poor detection and linearity when surfaces are swabbed. They also reported that the swabbing units are unreliable at picking up total surface ATP. In a study by Carmen and colleagues, two of the three tested ATP luminometers failed verification, which means that they both need modifications by their manufacturers. In this study, four of the market-leading ATP meters were used. The disinfectant chemistries used in this study are among the most widely used products in North America. They include quaternary ammonium chlorides, phenol, sodium hypochlorite, isopropanol, citric acid and hydrogen peroxide. Furthermore, individual active ingredients of these disinfectants were tested in parallel to compare the interference of the whole formulation versus the active ingredient(s).
Our analyses demonstrate that the higher the concentration of ATP or S. aureus on the swab tip, the higher the ATP reading values; in other words, there is a strong positive correlation between true concentrations and RLU readings. These results support earlier studies showing such a correlation [6,8,10,11,15,[17][18][19],12,14,20,21]. The observed correlations were slightly higher for the standard ATP solutions than for those derived from S. aureus. This slightly lower correlation can be explained by the fact that a single bacterial cell of a specific strain does not always produce/release the same number of ATP molecules at a given time.
The detection limit test also showed that there could be a significant difference in the level of bacteria detectable by each device. For example, one device required 6.17×10^5 CFU on the swab in order to produce an RLU value greater than zero. Minimum detection limits among the different brands vary by up to 2 logs of the ATP standard dilution, and the maximum ATP concentrations that can be truly detected likewise vary by up to 2 logs. Therefore, in actual testing, an ATP reading of zero obtained by swabbing may be misleading, since the surface may in fact contain at least 10^2 CFU of bacteria. It should be noted that the detection limit results in this study are based on S. aureus; in real life, many other bacteria may be present in the environment, and the lower limit of bacterial detection therefore varies even more widely.
Comparing Figures 1 and 2, we see that the detection limits of each luminometer for the bacterial ATP and the standard ATP solution are completely different. For example, Figure 1 shows that 3M detects ATP at 10^-11 molarity, Charm and Hygiena detect it at 10^-10 molarity and Kikkoman at 10^-9 molarity; therefore 3M can detect the lowest concentration of ATP among these four bioluminescence meters. By examining Figure 2, however, it can be seen that Hygiena is the most sensitive unit among the four in detecting S. aureus, with the lowest detection limit of 2.4×10^2 CFU, followed by 3M (8.98×10^2 CFU), Kikkoman (5.6×10^4 CFU) and Charm (6.2×10^5 CFU). The only agreement between these two sets of data is the detection limit for Charm, which ranks last in both cases. Chemical disinfectants seem to significantly affect the ATP readings of all four tested units. Data in Figures 3 to 6 are summarized in Table 4.
These data were generated based on the deviation of the average ATP readings from the control sample. The values in this table represent the percentage deviation from the control sample. Since the majority of the tests led to quenching, quenching values are shown as positive while enhancements are shown as negative.
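For illustration, the following Python sketch (the readings are invented, not taken from the study) computes the percentage deviation from the control reading in the sign convention described above, with quenching positive and enhancement negative.

import numpy as np

control_rlu = np.array([5200, 5050, 5150], dtype=float)   # ATP standard alone
treated_rlu = np.array([2100, 1950, 2050], dtype=float)   # ATP standard + disinfectant

control_mean = control_rlu.mean()
treated_mean = treated_rlu.mean()

# Positive values = quenching, negative values = enhancement
percent_deviation = (control_mean - treated_mean) / control_mean * 100
print(f"deviation from control: {percent_deviation:.1f}%")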
These results clearly show that each chemistry has a unique effect in either quenching or enhancing the ATP readings. Some formulations (Sporicidin and CleanCide) show the highest quenching among all tested chemicals. The 3M meter is also shown to be the most susceptible to disinfectant chemistries. Comparing 0.5% hydrogen peroxide (in DI water) with disinfectant products containing hydrogen peroxide (0.5-1.4%), we observe that the other ingredients (inerts) present in these formulations are almost entirely responsible for the interaction. Comparing Accel TB, Accel PREVention RTU and Virox 5 RTU shows that, although they all have 0.5% hydrogen peroxide as the active, their different inert ingredients can have profoundly distinct effects on the interference.
CleanCide (0.6% citric acid) and Sporicidin (phenol based) have the strongest quenching effect among all the products. The CleanCide data are in agreement with the findings of Mubiru [15,[17][18][19],24] that citric acid interferes with ATP determination by bioluminescence. Phenol was not tested separately in this study and therefore it is not possible to conclude whether the Sporicidin interference comes from phenol or from the inert ingredients in the formulation. Cavicide and Sani-Cloth Plus are both combinations of quaternary ammonium compounds, 2-butoxyethanol and isopropanol, with close concentration ranges. These show almost identical quenching results, with Cavicide having a stronger quenching effect on the Hygiena and Charm units. This could be due to the use of different types of quaternary ammonium compounds and/or different types of inert chemicals. BTC 50, another quat-based disinfectant, was also tested here to examine the effect of an exemplary quaternary ammonium compound. It shows mild quenching on Kikkoman and Charm and no effect on Hygiena and 3M. If it is used in a disinfectant formulation, however, it may show a different interaction due to the effect of the other ingredients in the formulation.
By examining Table 1, it can be seen that healthcare disinfectants contain high levels of active ingredients. They also contain other inert ingredients, which are usually not disclosed on their material safety data sheets. Therefore, swabbing a surface that has already been treated with a disinfectant has the potential to introduce high levels of residual chemicals to the swab and, subsequently, to the ATP measuring device. In food processing facilities, on the other hand, the chemical exposure will be significantly lower, as FDA requirements (21 CFR 178.1005 & 1010 and similar guidelines) significantly limit the level of chemicals in food sanitizing and disinfecting solutions. This results in much less chemical interaction, which could be the reason why little chemical interaction is reported for ATP bioluminescence meters in these applications. It should be noted, however, that in this study the disinfectant was applied directly to the swab for the interaction test, while in real-life situations the disinfectant is applied to the surface first and will, for the most part, dry before swabbing. Therefore, for volatile active ingredients such as alcohols or hydrogen peroxide, the actual chemical interaction may be less than the test results here, but for non-volatile active ingredients, such as quaternary ammonium compounds or citric acid, the chemical interaction should be more or less the same if the surface is properly swabbed.
In summary, these results suggest that ATP meters cannot be relied upon to evaluate the effective disinfection of a healthcare surface and, in particular, cannot be used as a tool to compare the effectiveness of disinfection between different disinfectants. These units have a number of limitations in detecting the true number of organisms on a surface, which can lead to false confidence in surface disinfection. Furthermore, residues of the cleaning/disinfecting chemistry can have a very high impact on the ATP readings, and therefore again can result in false confidence. As of now, there have been no reports of scientific publications that specifically studied the quenching phenomenon to establish its true cause. Our assumption on the mechanism of chemical quenching points in two main directions: either the chemicals react with the ATP molecules and make them no longer available by breaking/masking the ATP molecule, or perhaps the chemicals enter the luciferase activity chamber and adversely affect the enzymatic pathway for fluorescence generation. In this study, all the test solutions, such as ATP standard solutions, inoculum and disinfectant chemistries, were pipetted onto the swab, and therefore the efficiency of each swab was not studied here. Furthermore, only one type of Gram-positive bacterium was tested here to obtain more definitive and reliable conclusions. Further studies should involve the use of both Gram-positives and Gram-negatives to expand on this study's findings. Needless to say, testing viral contamination with ATP meters would be futile, as viruses do not contain or produce ATP molecules on their own, raising another concern about the limitations of ATP bioluminescence technology in healthcare use.
Our findings suggest that introducing ATP meters to healthcare facilities as a disinfection validation tool is not a reliable choice.
The limitations of ATP luminometers clearly show that the units are not reliable in confirming proper removal of disease-causing agents in healthcare settings. Inaccurate bioluminescence results causing false confidence in surface disinfection can ultimately jeopardize public health and raise infection control costs at hospitals and healthcare facilities.
We should note that our findings are based on one bacterium, namely S. aureus, four ATP meters and fourteen disinfectant products. All tests were performed under controlled laboratory conditions. To avoid variability in the recovery, the inoculum was applied directly to the swabs, and as such our tests did not involve sampling of environmental surfaces.
Our findings, in conjunction with the available literature, can help healthcare infection control practitioners make more educated decisions about the methods they choose to evaluate the microbial cleanliness of healthcare surfaces.
3-Phenyl-1H-1,2,4-triazol-5-amine–5-phenyl-1H-1,2,4-triazol-3-amine (1/1)
In the title compound, C8H8N4·C8H8N4, two tautomers, viz. 3-phenyl-1,2,4-triazol-5-amine and 5-phenyl-1,2,4-triazol-3-amine, are crystallized together in equal amounts. The 3-phenyl-1,2,4-triazol-5-amine molecule is essentially planar; the phenyl ring makes a dihedral angle of 2.3 (2)° with the mean plane of the 1,2,4-triazole ring. In the 5-phenyl-1,2,4-triazol-3-amine tautomer, the mean planes of the phenyl and 1,2,4-triazole rings form a dihedral angle of 30.8 (2)°. The π-electron delocalization of the amino group with the 1,2,4-triazole nucleus in the 3-phenyl-1,2,4-triazol-5-amine molecule is more extensive than that in the 5-phenyl-1,2,4-triazol-3-amine tautomer. The molecules are linked into a two-dimensional network parallel to (100) by N—H⋯N hydrogen bonds.
Usually, tautomerizable 1,2,4-triazoles with nonequivalent substituents at positions 3 and 5 crystallize as the tautomer bearing at position 5 the substituent with the more pronounced electron-donating properties (Buzykin et al., 2006). Considering the significant difference in the electronic properties of the phenyl and amino groups, the crystal would be expected to assemble from molecules of tautomer I, analogously to the reported 3-pyridin-2-yl-1,2,4-triazol-5-amine (Dolzhenko et al., 2009). Surprisingly, the two tautomeric forms I and II were found crystallized together in the crystal. To the best of our knowledge, this is the first example of the existence in a crystal of a tautomeric form of an unequally 3,5-disubstituted tautomerizable 1,2,4-triazole with the electron-donating group located at position 3.
The geometry of the tautomer I molecule is essentially planar (Fig. 2). The amino group is involved in π-electron delocalization with the 1,2,4-triazole nucleus. It is almost planar, with a small deviation of 0.06 (2) Å of the nitrogen atom from the C8/H4A/H4B plane. The length of the C8-N4 bond is 1.337 (3) Å. The π-electron delocalization of the amino group of II with the 1,2,4-triazole nucleus is significantly lower. The nitrogen atom (N8) of the amino group adopts a pyramidal configuration, with a 0.21 (2) Å deviation of the nitrogen atom from the C16/H8A/H8B plane. The C16-N8 bond [1.372 (3) Å] is also longer. The phenyl ring of I makes a small dihedral angle of 2.3 (2)° with the mean plane of the 1,2,4-triazole ring. The molecule of tautomer II loses this planarity: the mean planes of the phenyl and 1,2,4-triazole rings of II form a dihedral angle of 30.8 (2)°.
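The dihedral angles quoted above are angles between the least-squares mean planes of the two rings. As an illustrative sketch only (Python/NumPy; the coordinates below are placeholders, not the refined atomic positions), such an angle can be computed as follows.

import numpy as np

def plane_normal(coords: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares plane through a set of atoms (n x 3)."""
    centered = coords - coords.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def dihedral_between_planes(ring1: np.ndarray, ring2: np.ndarray) -> float:
    """Angle (degrees, 0-90) between the mean planes of two rings."""
    n1, n2 = plane_normal(ring1), plane_normal(ring2)
    cos_a = abs(float(np.dot(n1, n2)))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Placeholder coordinates (Angstrom); replace with the refined atomic coordinates.
phenyl = np.random.rand(6, 3)
triazole = np.random.rand(5, 3)
print(round(dihedral_between_planes(phenyl, triazole), 1))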
The molecules are linked into a two-dimensional network parallel to the (100) plane by N-H···N hydrogen bonds (Table 1 and Fig. 3).
Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
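For reference, the F²-based agreement statistics mentioned here are conventionally defined as follows (standard definitions given as a reminder, not quoted from the deposited data), with n the number of reflections and p the number of refined parameters:
\[ wR_2 = \left[ \frac{\sum w \,(F_o^2 - F_c^2)^2}{\sum w \,(F_o^2)^2} \right]^{1/2}, \qquad S = \left[ \frac{\sum w \,(F_o^2 - F_c^2)^2}{n - p} \right]^{1/2}. \]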
Differentiation and Non-Linear Responses in Temporal Phenotypic Plasticity of Seasonal Phenophases in a Common Garden of Crataegus monogyna Jacq
Phenology in perennial plants implies the temporal occurrence of biological events throughout the year. Heritable phenotypic plasticity in the timing of the phenophases can be of importance in the adaptation of woody species to a quickly changing environment. We observed the timing of bud burst, flower opening, leaf senescence and leaf fall in two successive years in a common garden of Crataegus monogyna Jacq. in Belgium, consisting of six local and five non-local provenances. Data were processed with cumulative logistic mixed models. Strong auto-correlation was present among the spring phenophases as well as among the autumnal phenophases, with spring phenophases being negatively correlated with fall phenophases. The strongest between-provenance differentiation was found for the timing of bud burst in spring, followed by flower opening and finally by leaf senescence and leaf fall. Warmer spring temperatures in March 2017 advanced the timing of bud burst, and to a lesser extent of flower opening, in all provenances compared to 2016. However, the advancement was non-linear among the provenances, with the lower latitude provenances being relatively less early and the higher elevation provenances being more late than the local provenances in this year. It can be hypothesized that non-local provenances display larger temporal phenotypic plastic responses in the timing of their spring phenophases compared to local provenances when temperatures in the common garden deviate more from their home-sites.
Introduction
Woody plants are sessile and perennial organisms that are characterized by long generation times and slow migration rates [1].Therefore, it is expected that woody species can adapt relatively quickly to changing local environmental conditions [2].The capacity for evolutionary change depends on the standing genetic variation in tree populations [3] and climate is a major driver of evolutionary change over longer time scales [4].Understanding the genetic basis of complex polygenic traits in woody plants that are clearly influenced by climate is therefore a current challenge in forest genetic research.Because all individuals in a common garden share the same environment, any average difference in a trait between provenances of the same species has a genetic origin.The genetic variation in fitness-related traits is typically estimated in open pollinated progeny tests in common garden experiments, including the estimation of differentiation between provenances [5,6], and the study of this variation has its place in the field of quantitative genetics [7].
The seasonal cycle in deciduous woody plant species of temperate regions is characterized by the timing of bud burst and flowering in spring and by the timing of bud formation, leaf senescence and leaf shedding in autumn.These phenophases can deviate between genetically differentiated populations and are believed to be adaptive, responding to selection induced by environmental change [5,8,9].The timing of the phenophases marking the beginning and the end of the yearly growing season maximizes the annual growth while minimizing the risk of frost damage in spring and autumn.Frost can damage woody plants at high fitness costs [10].Late frosts in spring can damage the soft tissues of young leaves, whereas early frosts in autumn may cause early leaf abscission, hindering the resorption of nutrients.In addition, early frosts are also known to damage the cambial zone in trees [11].
The timing of bud burst and leaf unfolding vary considerably among tree species, responding to divergent climatic conditions [3,5].Bud burst can be the main trait that is affected by climate mediated selection [3].Selection on timing of bud burst can be sufficiently strong to counteract the homogenizing effect of gene flow [12,13].Rising spring temperatures advance the onset of the growing season in many woody species of the temperate zone, prolonging the growing season and thus affecting plant productivity and the global carbon balance [14][15][16].On the other hand, late frosts in spring may hamper early flushing species in a global warming scenario [17].Because of its adaptive nature and the easy assessment from an early age onward, spring flushing is frequently evaluated in common gardens [5,18].Within the same woody species, population differentiation for bud burst, as observed in common garden experiments, typically follows clines along gradients of elevation and/or latitude of the home-sites of the populations [5,19,20] possibly due to different temperature requirements that are genetically determined [21].The genetic variation in timing of bud burst, together with divergent selection, is believed to have allowed tree species in temperate regions to occupy large distribution ranges [22].Within populations, individual trees display variable timing of bud burst, which is suggested to have a genetic cause [23].Individual trees in a population can therefore be categorized as early, intermediate and late phenological forms [24].Neutral genetic marker analysis showed that late bud burst forms of Fagus sylvatica L. in natural populations in Poland displayed higher within-population genetic variation in comparison to the early forms, suggesting that late spring frosts shape the neutral genetic structure of the populations [24].Finally, no relationship was found between the timing of cambial activity and the timing of bud burst in Quercus robur L. [23].
Flowering and the subsequent fruit formation are part of the sexual reproductive cycle in plants.Reproductive phenology is sensitive to environmental cues such as temperature, moisture and herbivory [25].Divergent timing of flowering can stimulate assortative mating in populations of woody plants, reducing gene flow and promoting population differentiation [25].The timing of flowering varies strongly among woody angiosperm species and can occur before, during or after bud burst.For instance, pollen emission concurs with leaf unfolding in oaks.Assortative mating through long-distance pollen flow is therefore suggested to interact with local adaptation of bud burst [3,26].In general, timing of bud burst and flowering are most likely auto-correlated in temperate tree species [26,27].
The emergence and growth of new spring foliage in temperate deciduous trees relies strongly on the nutrients that were resorbed during the preceding leaf senescence [28].The timing of leaf senescence is affected by both photoperiod and temperature [29][30][31][32] while the timing of bud burst is primarily influenced by temperature [33,34].It shows less year-to-year variability in comparison with timing of bud burst and is concomitant with less favorable conditions for photosynthesis [29].In 59 tree species, the timing of autumnal leaf senescence displayed a pattern according to the climatic clines of the home-sites of the studied populations, which was clearer than the pattern observed in bud burst timing [5].But, in a common garden experiment of Quercus petraea (Matt.)Liebl.composed of populations derived from the same geographic region but from deviating elevations, the population Forests 2019, 10, 293 3 of 17 differentiation for autumnal leaf senescence did not correlate with the elevation of the origin [3].There is no consensus concerning the factors controlling the leaf senescence process [29,35].Leaves that emerged after a late spring frost in beech and oak displayed higher photosynthesis rates and a delayed leaf senescence in autumn, compensating for spring frost damage and demonstrating that long-lived trees can adapt their autumnal phenology depending on preceding productivity [36].The spring phenophases of bud burst and flowering, and the timing of leaf senescence and leaf abscission are most probably strongly auto-correlated.Therefore, it is suggested that leaf fall can be used as a proxy for leaf senescence, assuming a certain time delay [30].
The range of phenotypes that a plant can express as a function of the environment is called phenotypic plasticity.Genetically controlled, heritable phenotypic plasticity has the potential to influence plant evolution [37].Species that are able to adjust their phenological responses to warming spring temperatures by earlier bud burst or earlier flowering show better performance when compared with less responsive species [38].Because of their longevity, phenotypic plasticity may play an important role in the adaptation of woody perennials to the predicted climate change [39].Studying responses of provenances of woody species in a common garden not only allows assessment of the differentiation among the provenances that is shaped by divergent selection, e.g.[40].Also, repeated observations in successive years allow the estimation of phenotypic plastic reactions to variable meteorological conditions in the garden over time.We planted a common garden in Belgium consisting of Crataegus monogyna, a common shrub species in western Europe, including local and non-local provenances, and studied the seasonal phenophases that mark the growth cycle of woody plants.Using neutral (non-adaptive) molecular markers, high levels of genetic diversity within populations, but low levels of population differentiation, were found in this species [41].We hypothesized that: (i) the timing of bud burst, flower opening, leaf senescence and leaf fall differ between local and non-local provenances; (ii) the timing of the four phenophases display varying degrees of within-provenance and between-provenance variation; (iii) spring phenophases are strongly auto-correlated, as well as autumnal phenophases; and (iv) the non-local provenances respond in a non-linear way, in comparison with the local provenances, to the variable meteorological conditions in the common garden environment in two successive years (variable temporal phenotypic plasticity among the provenances) which can be related to the home-site conditions of the non-local provenances.
Common Garden
A description of the provenances, the seed collection, the growth of the plants and the planting of the common garden has already been reported in [42]. In short, the common garden consisted of six local Flemish provenances (northern part of Belgium), two Walloon provenances (southern part of Belgium), one provenance from the UK, one from Italy and one from Hungary (Table 1). The Belgian (Flemish and Walloon) provenances were collected by the authors and grown in the nursery of the research institute in Geraardsbergen, Belgium, whereas the UK, Italian and Hungarian provenances were grown in adjacent nursery beds in a Flemish forest nursery, under a sales contract. For the collection of the seeds in the Belgian populations, care was taken to collect from C. monogyna, excluding Crataegus laevigata (Poir.) DC. individuals and individuals with a putative hybridogenic background, by visually assessing the morphology of the leaves and the berries. Seeds were collected from at least 30 individuals from each population. Information on the seed collection in the commercial provenances was not available, as it concerned commercial plant material without a certificate of provenance. Certificates of provenance are not compulsory for C. monogyna according to Council Directive 1999/105/EC [43]. Therefore, the exact location of these commercial seed sources was also not available. In the beginning of 2008, planting stock of the 11 provenances was planted in a common garden in Londerzeel, Flanders, Belgium. The provenances were randomly mixed in a single tree plot design and planted with a spacing of 1.5 × 1.5 m. Mean monthly temperatures for 2016 and 2017 were acquired from the weather station in Groenendaal, which is located at a distance of 30 km from the common garden site (Figure 1).
Scoring of the Four Phenophases
Observations of the phenophases were performed on the shrubs in the common garden in 2016 and 2017. Bud burst, flower opening, leaf senescence and leaf fall were scored following two 6-level and two 5-level protocols, respectively (Table 2). For all phenophases, the whole shrub was evaluated visually and a mean score level was given. Bud burst was scored in 2016 on 8, 23 and 31 March and 16 April, and in 2017 on 12, 21 and 26 March and 2 April. Flower opening was scored in 2016 on 28 April and 4 and 11 May, and in 2017 on 26 April and 1 and 11 May. Leaf senescence and leaf fall were scored in 2016 on 21 October and 4 November, and in 2017 on 8 and 29 October.
Statistical Analysis of Phenological Data
All statistical analyses were performed in the open source software R 3.5.1 [44]. Models were fitted to examine the timing of the four phenophases in the two successive years. Each phenological trait (T_ph) was a response variable and was modeled using cumulative logistic regression in the "ordinal" package [45], as the recorded observations were on an ordinal scale. The command "clmm" in the "ordinal" package models the probability (p) of having reached maximally a given level of the ordinal response variable. The score levels for leaf senescence and leaf fall were defined in increasing order, and the probability was modeled for having reached maximally a given score level; e.g., having reached maximally a leaf senescence score of 3 was to be interpreted as the probability of having reached scores of 1, 2 or 3. The score levels of bud burst and flower opening were defined in decreasing order, so that the probability of having reached maximally a bud burst score of 3, for example, included the probability of having reached scores of 6, 5, 4 or 3. In this way, this could be interpreted as having reached a score of at least 3. Mixed models were fitted as the phenological data concerned repeated observations on the same plants. The year of observation (Y, categorical variable) and the provenance (P, categorical variable) were present in the fixed part of each model, including an interaction term between these two variables. In the four phenological models, the local Flemish provenance FL1 (Table 1) was taken as the standard provenance to which the timing of the other provenances in the common garden was compared. Day (D, numerical variable) was added in the fixed part to account for the different observation days. The random part (random intercept) consisted of a unique shrub identity code (ID). The latter accounted for the repeated observations on the same plants.
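In the cumulative-logit notation used by clmm (the exact parameterization, in particular the sign convention of the ordinal package, is assumed here), the fitted model can be sketched as

log(p_Tph / (1 − p_Tph)) = α_i − (β_Y Y + β_P P + β_D D + r_ID),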
where α_i was an intercept value indicating the passing from one level of the ordinal phenological response variable to the next, β_Y, β_P and β_D were the estimated coefficients for the fixed covariates Y, P and D, and r_ID was the random effect coefficient for all levels of the variable ID.
To be able to compare the timing of the four phenophases across all provenances, the days were calculated on which the different provenances had reached the same stage of phenological development in each phenophase. The DOY (day of the year) was calculated at which the probability of having reached a bud burst or flower opening score of at least 4, or a maximal leaf senescence or leaf fall score of 3, attained 50% (D_50%PY) in the observation years 2016 and 2017 and for every provenance. A D_50%PY for a given provenance and a given year therefore indicated the day on which half of the plants of this provenance had reached minimally (bud burst and flower opening) or maximally (leaf senescence and leaf fall) the given score level of the respective phenophase in the respective year. This calculation was based on log(p_Tph / (1 − p_Tph)) being 0 for p = 50%, with 2016 as the standard level for the variable Y, to which 2017 was compared. When the timing of a given phenophase for a given provenance differed significantly from the standard provenance FL1, the time lag between this provenance and the standard provenance was calculated. For 2016, this was inferred from the model with 2016 as the standard level of the categorical variable Y, whereas for 2017 it was inferred from the model with 2017 as the standard level. The time span between a provenance and the standard provenance from which it differed significantly was calculated by subtracting D_50%FL1Y from D_50%PY. Pearson correlation coefficients were calculated between the timing of the phenophases in the two observation years using the D_50%P2016 and D_50%P2017 values.
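Written out under the model sketched above (with FL1 and 2016 as the standard levels, so that their coefficients are absorbed into α_i, and with the random effect set to zero), these days can be expressed as

D_50%P2016 = (α_i − β_P) / β_D
D_50%P2017 = (α_i − β_Y − β_P) / β_D,

where β_P is zero for the standard provenance FL1 and β_Y is the coefficient for the year 2017; these expressions are a sketch derived from the stated condition log(p_Tph / (1 − p_Tph)) = 0 and the assumed model form.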
A significant interaction term between provenance P and year Y in the model statistics indicated that the time span between the timing of a phenophase for a given provenance and the standard provenance in 2016 differed significantly from the respective time span between these two provenances in 2017. The significant interaction terms were visualized in reaction norm figures, in which the timing of a phenophase for a provenance was compared between 2016 and 2017. The slope of the line connecting the timing of a phenophase between 2016 and 2017 for a provenance with a significant interaction term in the model statistics differed significantly from the slope connecting the timings for the standard provenance FL1.
To examine the relative variance in the timing of bud burst, flower opening, leaf senescence and leaf fall among the different provenances in the common garden, in comparison with the relative variance among the shrubs within a provenance, the four phenological models were adapted by moving the provenance variable (P) from the fixed to the random part.
The relative variance between the provenances (σ²_P) and the relative variance between the shrubs within a provenance (σ²_ID) were obtained from these models.
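In R, the two model variants described above might be specified as follows; this is a minimal sketch rather than the authors' script, and the data frame and column names (pheno, score, year, prov, day, id) are assumed for illustration, with score coded as an ordered factor.

library(ordinal)

# Provenance as a fixed effect, interacting with year (used for the timing comparisons)
m_fixed <- clmm(score ~ year * prov + day + (1 | id), data = pheno)
summary(m_fixed)

# Provenance moved to the random part (used for the variance partitioning)
m_random <- clmm(score ~ year + day + (1 | prov) + (1 | id), data = pheno)
VarCorr(m_random)  # between-provenance versus within-provenance (between-shrub) variance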
Timing of Bud Burst
All provenances in the common garden burst buds earlier in 2017 compared with 2016 (the covariate year was significant in the model statistics, Table 3, Figure S1). Significant differences in the timing of bud burst were observed between several provenances and the standard Flemish provenance FL1 in 2016 (Table 3) and in 2017 (Table S1). The time spans between the timing of these provenances and FL1 were calculated for both observation years (Table 4). The Italian and Hungarian provenances flushed earlier in comparison to the Flemish provenances, whereas the Walloon provenances, with a higher elevation in the home-sites, flushed later (Figure 2a, Table 3). A significant interaction term between provenance and year in the model indicated a significant relative change in time span (between the timing of bud burst for the respective provenance and the timing for the standard provenance FL1) between the two observation years (Table 3). A significant interaction term for a provenance was visualized as the slope of a line connecting the timings (expressed as D_50%PY values) for this provenance in the two observation years differing from the slope of the standard provenance FL1 (Figure 2b). In 2016, the southern European provenances burst their buds about 10 days earlier than the Flemish provenance FL1, whereas this time lag between the southern European provenances and FL1 was reduced to about 6 days in 2017, resulting in less steep slopes for the provenances HO and IT compared to FL1 in Figure 2b (provenance abbreviations in Table 1). Whereas in 2017 the difference between the timing of bud burst in the southern European provenances and FL1 was reduced in comparison with 2016, the time lag between the Walloon provenance WA1 and FL1 increased in 2017 in comparison with 2016, resulting in a less steep slope for WA1 compared with FL1 in Figure 2b. The UK provenance and the Flemish provenance FL3 also displayed significant interaction terms (Tables 1 and 3, Figure 2b).
Timing of Flower Opening
In general, all provenances in the common garden opened their flowers earlier in 2017 compared with 2016 (the covariate year was significant in the model statistics, Table 3, Figure S2). The southern European provenances and the Walloon provenances differed significantly in the timing of this phenophase when compared with the standard local provenance FL1 in 2016 and in 2017 (Table 3, Table S1 and Figure 3a). In addition, the Flemish FL4 and the UK provenance differed significantly from FL1 in 2017 (provenance abbreviations in Table 1). The time spans between these provenances and FL1 were calculated for both observation years (Table 4). Similar to bud burst, the Italian and Hungarian provenances opened their flowers earlier in comparison with the Flemish provenances, whereas the Walloon provenances tended to flower later (Figure 3a, Table 3). All time spans for flower opening between FL1 and the provenances that differed significantly from FL1 were smaller than for bud burst in both observation years.
Significant interaction terms between provenance and year in the model (Table 3) are visualized in Figure 3b. The pattern was comparable to bud burst. In 2016, the southern European provenances opened their flowers about 6 to 7 days earlier than the Flemish provenance FL1. In 2017, this time lag between the southern European provenances and FL1 was reduced to about 2 to 5 days, resulting in less steep slopes than FL1 in Figure 3b. Similar to bud burst, the time lag between the Walloon provenances and FL1 increased in 2017 compared with 2016 (from 2 to 3 days in 2016 to about 5 days in 2017), also resulting in less steep slopes than FL1 in Figure 3b. One extra local provenance, FL4, and the UK provenance differed significantly in the timing of flower opening from FL1 in 2017 (Table S1). The UK provenance displayed a significant interaction term (Table 3, Figure 3b), whereas among the Flemish provenances only FL3 had a significant interaction term, visualized as a less steep slope than FL1 in Figure 3b.
Timing of Leaf Senescence and Leaf Fall
For the timing of leaf senescence and leaf fall, fewer provenances differed significantly from the standard provenance FL1 in comparison with the timing of bud burst and flower opening (Table 5, Table S2, Figure 4, Figure 5a, Figure S3 and Figure S4). For the southern European provenances, leaf senescence in 2016 occurred 9 to 16 days later than in the local provenance FL1, and around 12 days later in 2017 (Table 4). The timing of leaf fall was modeled 6 to 9 days later for these provenances than for the local provenance FL1 in both years (Table 4). In 2017, the timing of leaf senescence in the UK provenance was 7.5 days earlier than FL1, whereas in 2016, the timing of leaf fall in the Flemish provenance FL5 was 4 days later (Table 4).
Correlation and Variance Analysis
Pearson correlation coefficients between the timing of the different phenophases in the two observation years revealed, in general, high and significant correlations. The spring phenophases bud burst and flower opening displayed the highest correlation coefficients (0.98 *** and 0.97 *** in 2016 and 2017, respectively, Table 6). In comparison, the autumnal phenophases, leaf senescence and leaf fall, displayed lower correlation coefficients (0.79 ** in 2016 and 0.9 *** in 2017, Table 6). Both spring phenophases were correlated with leaf senescence (−0.87 *** and −0.86 *** for bud burst and flower opening in 2016, and −0.82 ** and −0.75 ** in 2017, respectively, Table 6), with later spring flushing and flowering occurring with earlier leaf senescence. In comparison, the spring phenophases were less correlated with leaf fall (−0.7 * and −0.65 * for bud burst and flower opening in 2016, and −0.78 ** and −0.65 * in 2017, respectively, Table 6). Intra-phenophase correlation coefficients, between 2016 and 2017, were obviously high, with bud burst displaying the highest correlation coefficient (0.97 ***), followed by flower opening (0.92 ***), leaf fall (0.89 ***) and finally leaf senescence (0.86 ***).

Table 6. Correlations between the timing of the four phenophases bud burst (Bb), flower opening (Fo), leaf senescence (Se) and leaf fall (Fa) in 2016 and 2017. Pearson correlation coefficients and corresponding p-values are indicated above and below the diagonal, respectively. A correlation coefficient with a corresponding p-value below 0.001 is indicated in bold and underlined, between 0.001 and 0.01 in bold, and between 0.01 and 0.05 underlined.

The relative variability in timing of the phenophases between the provenances was compared with the relative variability between the different shrubs within a provenance (Figure 6). In 2016, the between-provenance variance was relatively highest in the phenophase bud burst, and lower, in decreasing order, in flower opening, leaf senescence and leaf fall. The relative within-provenance variance increased accordingly. In 2017, the between-provenance variance for bud burst was lower compared with 2016 but was still higher than the relative within-provenance variance in this year. For flower opening, the between-provenance variance in 2017 was lower compared with 2016 and was as high as the relative within-provenance variance in this year. For leaf senescence and leaf fall in 2017, the relative variances attributable to the differentiation between the provenances were lower than in 2016, and thus displayed correspondingly higher relative within-provenance variances. In general, the spring phenophases displayed the largest relative between-provenance variances, in comparison with the autumnal phenophases.
Timing of the Phenophases
Our results showed that differentiation between local and non-local provenances of C. monogyna is present in the phenological traits marking the seasonality of woody plants, as observed in a common garden. Differentiation in phenological traits in common gardens has been found in many tree species [46,47]. As shown before [42], the southern European provenances burst their buds earlier (lower latitude), the Walloon provenances are later (higher elevation), and the UK provenance is similar to the local Flemish provenances (lower longitude). We also detected, although of a lower order of magnitude, differentiation in bud burst between certain local provenances in both observation years. This finding may be related to results from Danish common gardens consisting of local populations of insect-pollinated shrub species (Cornus sanguinea L., Malus sylvestris (L.) Mill. and Rosa dumalis Bechst.), where differentiation in bud burst on a local scale, with very little spring temperature deviation between the home-sites of the populations, was suggested to be driven not only by natural selection but also by neutral processes [48]. Despite the fact that the timing of bud burst and flower opening are strongly correlated, the time spans between the timing of bud burst of the standard local Flemish provenance on the one hand and the non-local provenances from the Walloon region (higher elevation), southern Europe (lower latitude) and the UK (lower longitude) on the other were larger than the respective time spans for the timing of flower opening. This phenomenon has already been detected in the studied common garden [42] and proved to be consistent for two additional observation years. As hypothesized before [42], the timing of flower opening may be less sensitive to natural selection and local adaptation for two reasons. Shrubs start flowering only after several crucial years of establishment and seedling development, and a year of reduced reproduction due to an improper timing of flowering may be less detrimental to a woody plant than reduced growth (accompanied by unfavorable competition with neighboring plants) because of an improper timing of bud burst. Therefore, the timing of flowering may be more responsive to the local micro-climate.
Leaf senescence and leaf fall in the southern European provenances were delayed compared with the local provenances. Together with an earlier bud burst, this implies a longer growing season for these provenances (spring phenophases were negatively correlated with autumnal phenophases). It is questionable whether the longer growing season is advantageous or disadvantageous. In a reciprocal common garden experiment with Populus fremontii S. Watson, southern populations planted in colder climates set buds relatively later in comparison with the same genotypes planted in a common garden at their home-site [39]. This later bud set is described as an inability to avoid early autumnal frosts and is interpreted as non-adaptive phenotypic plasticity, possibly caused by a lesser sensitivity to photoperiod as a cue to initiate bud set in more southern populations [39]. Bud flush in the southern population was found to be later in the colder common garden compared to the home-site common garden (but still earlier than in the local populations from the cold environment) and was therefore interpreted as adaptive plasticity [39]. In our experiment, the higher elevation provenances of the Walloon region displayed a later bud burst in comparison to the local provenances, but neither earlier nor later leaf senescence and leaf fall. This finding is likely in line with [34], who found no correlation between temperatures of the source sites along an altitudinal gradient and the timing of leaf senescence in a common garden for Acer pseudoplatanus L. and Fraxinus excelsior L. This may be due to a higher sensitivity of leaf senescence to photoperiod, compared to that of bud burst [29,30]. Higher elevations in the Walloon part of Belgium imply a generally colder climate compared to the local climate of the common garden site, but a negligible difference in photoperiod. When compared with the timing of bud burst and flower opening, a relatively smaller contribution of the between-provenance variation in the variance analysis indicates a weaker population differentiation for the autumnal phenophases. These results are in line with findings of [39] and [40], who both found a stronger influence of genotypic effects for bud set in poplar compared with bud burst, whereas bud burst showed stronger population-level effects relative to bud set. Although occurring in the same vegetative organ of the plant, the correlation between the timing of leaf senescence and leaf fall was smaller than the correlation between the timing of bud burst and flower opening, the latter implying a correlation between a vegetative (leaf bud) and a generative (flower) organ, which may suggest less tight genetic control for the autumnal phenophases.
Non-Linear Temporal Responses in Timing of the Phenophases
Inter-annual variation over long time periods in the timing of bud burst in temperate tree species has been widely modeled and discussed, e.g., [49]. Still, it remains difficult to accurately predict bud burst on smaller time scales. We studied the temporal phenotypic plasticity in the phenological responses on a small time scale, i.e., the responses to local meteorological conditions in the common garden, by observing the variability of the phenological responses of the same shrubs in two successive years. The non-linear inter-annual response of the timing of bud burst was expressed in five significant interaction terms between the variables provenance and year in the modeling analysis, including four non-local provenances. For flower opening we found a comparable number of significant interaction terms (one extra Walloon provenance) with the same provenances involved as for bud burst, most probably giving expression to the auto-correlation of both phenophases. For leaf fall, only two provenances displayed significant interaction terms in the models, both being non-local, and for leaf senescence there were none. The lesser sensitivity of the autumnal phenophases to plasticity compared with the spring phenophases can be due to a higher sensitivity to photoperiod as a stable cue to initiate these processes [40]. The relatively high presence of non-local provenances among the significant interaction terms in the phenological models may be indicative of their non-local origin. The non-linear temporal responses in non-local provenances can be interpreted as a reaction to prevailing growth conditions that deviate from the home-site conditions that they are adapted to. Considering the timing of bud burst, the time span between the southern European provenances and the local standard provenance in 2016 was larger than in 2017. For the Walloon provenances we observed an opposite response. As temperature is a well-known determinant of the timing of bud burst, the warmer temperatures in March 2017 have likely advanced the timing of bud burst in all provenances compared to 2016. It is by now well known that woody plants shift to earlier bud burst dates upon increased warming [50]. However, the advancement that we observed in the non-local provenances was not linear in comparison with the local Flemish provenances, with the southern European provenances being relatively less early and the Walloon provenances being relatively later than the local provenances in 2017. In transplant experiments with Populus fremontii, the magnitude of phenotypic plasticity in bud flush and bud set was found to be correlated with the home-site climates [39]. Planting the southern European provenances of C. monogyna in the common garden in Belgium implied a transfer to a cooler environment, with the cooler spring of 2016 inducing a longer time span in the timing of bud burst relative to the local provenances than the warmer spring of 2017. The warmer spring in 2017 drove the higher elevation provenances to a longer time span in the timing of bud burst relative to the local provenances, compared with the cooler spring in 2016. Together, these results suggest that a larger plastic reaction in the timing of bud burst in the common garden environment may indicate that the prevailing meteorological conditions deviate more strongly from the home-site conditions.
Conclusions
In Europe, populations of woody species at the southern limits of their natural range have maintained relatively large levels of genetic diversity and are therefore considered appropriate gene pools for assisted migration towards the north as a climate adaptation strategy [51]. On the other hand, results from Danish (high latitude in Europe) common garden trials with local populations of several shrub species indicate substantial genetic variation and evolutionary potential, questioning the need for assisted migration specifically for widespread and generally occurring woody species [48]. In addition, C. monogyna, as a key component of old hedgerows, displayed large levels of genetic diversity in nuclear and chloroplast markers in the UK (medium latitude in Europe) [41]. Although limited in time, our two-year study showed that all provenances of the common shrub species C. monogyna adjust their phenological responses to the prevailing temperatures and that non-local provenances tend to react non-linearly relative to the local provenances, with larger temporal spring plasticity coinciding with a larger difference in climatic conditions between the home-sites and the common garden environment. Our results can be interpreted as an extra argument in the debate over assisted migration, suggesting that for widespread species planted stock of non-local origin may tend to "over-react" to variable environmental conditions at the site to which it is transplanted. This over-reaction in plastic response may be caused by environmental conditions that deviate more from the optima these provenances are adapted to at their home sites. Our results therefore stress the importance of carefully reflecting on assisted migration projects, and of considering at least the addition of local planting stock when deciding to transport populations of trees and shrubs over longer distances in anticipation of the predicted climate change.
Supplementary Materials:
The following are available online at http://www.mdpi.com/1999-4907/10/4/293/s1. Figure S1: Scores of bud burst in the common garden on four observation days in the years 2016 (a) and 2017 (b). Provenance abbreviations are in Table 1. Descriptions of score levels are in Table 2; Figure S2: Scores of flower opening in the common garden on three observation days in the years 2016 (a) and 2017 (b). Provenance abbreviations are in Table 1. Descriptions of score levels are in Table 2; Figure S3: Scores of leaf senescence in the common garden on two observation days in the years 2016 (a) and 2017 (b). Provenance abbreviations are in Table 1. Descriptions of score levels are in Table 2; Figure S4: Scores of leaf fall in the common garden on two observation days in the years 2016 (a) and 2017 (b). Provenance abbreviations are in Table 1. Descriptions of score levels are in Table 2; Table S1: Model statistics for bud burst and flower opening. The provenance FL1 and the year 2017 are the standard levels for the categorical variables provenance and year, to which the other year, 2016, and the other provenances are compared. Provenance abbreviations are in Table 1; Table S2: Model statistics for leaf senescence and leaf fall. The provenance FL1 and the year 2017 are the standard levels for the categorical variables provenance and year, to which the other year, 2016, and the other provenances are compared. Provenance abbreviations are in Table 1.
Author Contributions: K.V.M. conceptualized the study; K.V.M. and A.J. performed the field work and the statistical analysis and wrote the manuscript.
Funding: This research received no external funding.
Figure 2. Modeled timing of bud burst for the different provenances in 2016 and 2017: (a) Modeled probability of having reached at least bud burst score 4; (b) D_50%PY values for the provenances (P) with a significant interaction term in the model statistics, for the years (Y) 2016 and 2017. D_50%PY values indicate the modeled day of the year (DOY) when half of the plants of a provenance attain a bud burst score of at least four.
Figure 3. Modeled timing of flower opening for the different provenances in 2016 and 2017: (a) Modeled probability of having reached at least a flower opening score of four; (b) D_50%PY values for the provenances (P) with a significant interaction term in the model statistics, for the years (Y) 2016 and 2017. D_50%PY values indicate the modeled DOY when half of the plants of a provenance attain a flower opening score of at least four.
Figure 4. Modeled timing of leaf senescence for the different provenances in 2016 and 2017. Modeled probability of having reached a maximal leaf senescence score of three.
Figure 5. Modeled timing of leaf fall for the different provenances in 2016 and 2017: (a) Modeled probability of having reached a maximal leaf fall score of three; (b) D_50%PY values for the provenances (P) with a significant interaction term in the model statistics, for the years (Y) 2016 and 2017. D_50%PY values indicate the modeled DOY when half of the plants of a provenance attain a maximal leaf fall score of three.
Figure 6. Relative between-provenance (between prov) and within-provenance (between geno) variance for the timing of bud burst, flower opening, leaf senescence and leaf fall in 2016 and 2017.
Table 1. Descriptive data of the provenances of C. monogyna in the common garden. No precise data on seed-stock populations were available for the commercial provenances. -: no data available.
Table 2. Description of the different score levels of the four observed phenophases.
Flower opening score levels (Table 2, fragment): less than half of the flowers in an inflorescence opened; 4, half to more than half of the flowers in an inflorescence opened; 5, most but not all flowers in an inflorescence opened; 6, all flowers opened.
Table 3. Model statistics for the timing of bud burst and flower opening. The provenance FL1 and the year 2016 are the standard levels for the categorical variables provenance and year, to which the other year, 2017, and the other provenances are compared. DOY: day of the year. Provenance abbreviations are given in Table 1.
Table 4. Differences in timing between the standard local provenance FL1 and the other provenances in the common garden in 2016 and 2017 for the four phenophases. Time spans are shown only for the provenances that differed significantly from the standard provenance FL1 in the models (Table 3, Table S1). Negative values indicate earlier timing of the phenophase, positive values indicate later timing. Provenance abbreviations are given in Table 1.
Table 5. Model statistics for the timing of leaf senescence and leaf fall. The provenance FL1 and the year 2016 are the standard levels for the categorical variables provenance and year, to which the other year, 2017, and the other provenances are compared. Provenance abbreviations are given in Table 1.
CCMetagen: comprehensive and accurate identification of eukaryotes and prokaryotes in metagenomic data
There is an increasing demand for accurate and fast metagenome classifiers that can not only identify bacteria, but all members of a microbial community. We used a recently developed concept in read mapping to develop a highly accurate metagenomic classification pipeline named CCMetagen. The pipeline substantially outperforms other commonly used software in identifying bacteria and fungi and can efficiently use the entire NCBI nucleotide collection as a reference to detect species with incomplete genome data from all biological kingdoms. CCMetagen is user-friendly, and the results can be easily integrated into microbial community analysis software for streamlined and automated microbiome studies.
While microbial taxa that are well represented in reference databases can be easily identified at the species and strain levels [8,9], it remains challenging to obtain a fine-grained taxonomic classification of lesser-known species and microbial eukaryotes [10,11]. Many of the current metagenomic classifiers assign a taxonomy to individual short sequence reads [7]. However, as closely related species share very similar or identical genome segments, short reads often map to multiple species in the reference data set. Some metagenomic classifiers, like MEGAN [12] and Kraken [13], address this issue by calculating the lowest common ancestor (LCA) among all species sharing those sequences. Paradoxically, as identical regions in reference databases become more common, fewer reads can be classified at the species level [14]. Other classifiers use a database of clade-specific diagnostic regions (e.g., [9]). While highly accurate, this procedure relies heavily on reference databases of complete genomes, which often cannot be readily updated by the end user. Complete genomes are available for only a small fraction of the microbial eukaryotic species. For example, as of April 2019, the widely used NCBI RefSeq database contained 285 fungal genome sequences, even though it is estimated that there are over 2 million species of fungi [15]. Therefore, relying on these databases of complete genomes greatly restricts the inclusion of microbial eukaryotes in metagenome studies.
A recently developed concept in read mapping-the ConClave sorting scheme, implemented in the KMA software [16]-is more accurate than other mapping strategies as it takes advantage of the information from all reads in the data set (Fig. 1). Our goal was to use this approach to produce an accurate metagenomic classification pipeline that will allow the inclusion of microbial eukaryotes in metagenomic studies. We now present a novel tool-CCMetagen (ConClave-based Metagenomics)-to process KMA sequence alignments and produce accurate taxonomic classifications from metagenomic data. We benchmark CCMetagen using simulated fungal and bacterial metagenomes and metatranscriptomes. Additionally, we include two case studies with real biological data to demonstrate that CCMetagen effectively produces a comprehensive overview of the eukaryotic and prokaryotic members of microbial communities.

Fig. 1 Overview of the ConClave sorting scheme applied to species identification in metagenomic data sets. The figure represents a data set containing 5 sequence reads (4 bp) and two closely related reference sequences (templates), including a true positive (Ref. 1) and a potential false positive (Ref. 2). a Commonly used read mappers yield a high number of false positives because reads can be randomly assigned to closely related reference sequences sharing identical fragments spanning the whole sequence read (represented by the ATATT region). b The KMA aligner minimizes this problem by scoring reference sequences based on all possible mappings of all reads and then choosing the templates with the highest scores. Coupled with KMA, CCMetagen produces highly accurate taxonomic assignments of reads in metagenomic data sets in user-friendly formats
Implementation and availability
Metagenomic reads (or contigs) are first mapped against a reference database with KMA [16], which implements the ConClave sorting scheme for better-informed and highly accurate alignments (Fig. 1). CCMetagen is then used to perform quality filtering and produce taxonomic classifications that can be explored in text or interactive visualization formats (Krona plots [17]). Our pipeline uses the NCBI taxonomic database (taxids) to produce ranked and updated taxonomic classifications, so that the ever-changing species nomenclature issue is minimized [18]. CCMetagen yields classifications at a taxonomic level that reflects the similarity between the query and reference sequences. This ranked classification means that species with only distant relatives in reference databases (e.g., undescribed genera) can be identified, as well as well-known microorganisms. The output of CCMetagen can be easily converted into a PhyloSeq object for statistical analyses in R [19]. The pipeline is sufficiently fast to use the entire NCBI nucleotide collection (nt) as a reference database [20], thereby enabling the inclusion of microbial eukaryotes-in addition to bacteria, viruses, and archaea-in metagenome surveys. Our program is implemented in Python 3 and is freely available at https://github.com/vrmarcelino/CCMetagen [21] or via the Python Package Index (PyPi) [22]. A web service to easily run the pipeline with default settings is available at https://cge.cbs.dtu.dk/services/ccmetagen/ [23].
Fungal classifications are more accurate with the CCMetagen pipeline
To test the performance of CCMetagen in identifying an important and diverse group of microbial eukaryotes, we simulated in silico a fungal metatranscriptome (15 species) and a fungal metagenome (30 species). We then benchmarked CCMetagen's performance by comparing it with widely used metagenomic classification software, including Centrifuge [24], Kraken2 [25], and KrakenUniq [26]. These programs were chosen because they are compatible with custom-made reference databases, which is a desirable flexibility when working with microbial eukaryotes. KrakenUniq was recently shown to outperform eleven other classification methods when using the NCBI nucleotide collection ("nt" database), including Diamond/Blast + MEGAN [12,27,28], CLARK [29], GOTTCHA [30], PhyloSift [31], and MetaPhlAn2 [9]. KrakenUniq therefore provides a gold standard for the available tools. We evaluated precision, recall, and F1 scores of the benchmarked software in identifying fungal taxa in the simulated fungal metagenome and metatranscriptome (see the "Methods" section). The F1 score is the harmonic average of precision and recall; high F1 scores can be interpreted as a good trade-off between precision and recall.
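Expressed in terms of precision (P) and recall (R), the score is

F1 = 2 P R / (P + R),

so that it is high only when both precision and recall are high.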
The CCMetagen pipeline achieved the highest precision and F1 scores of all the approaches tested (Fig. 2, Additional file 1: Figures S1 and S2, Additional file 2). KrakenUniq achieved higher precision than Kraken2 and Centrifuge when using an ideal database (i.e., RefSeq-bf, which contains only the complete and curated genomes of fungi and bacteria, containing all species from the test data set). However, the performance of KrakenUniq decreased substantially when the database was incomplete (i.e., RefSeq-f-partial, where a part of the reference sequences was removed to mimic the effects of handling species without reference genomes).
Centrifuge, Kraken2, and KrakenUniq yielded many more taxa than the number included in the test data sets: for example, Centrifuge, when used with the nt database, reported 6950 species in the simulated metagenome containing 30 species, while CCMetagen yielded only 15. Naturally, their recall was very high-Centrifuge and KrakenUniq recovered 100% of the taxa present in the test data set when using the RefSeq-bf and nt reference databases (Additional file 1: Figure S2). The species-level recall of Kraken2 decreased when using the nt database. CCMetagen recovered between 50 and 100% of the species when used with RefSeq-bf and nt databases (Additional file 2).
We also tested CCMetagen with assembled sequence reads (Additional file 3). When using the NCBI nt collection, precision ranged from 67 to 71% for species-level classifications, while recall ranged from 53 to 100% (Additional file 4), indicating that our pipeline is suited to processing long sequences.

Fig. 2 The CCMetagen pipeline has a higher F1 score than other metagenomic classification methods for all taxonomic ranks. The two points for each program and taxonomic rank represent the results using a simulated metagenome and a metatranscriptome sample of a fungal community. a Results using the whole NCBI nt collection as a reference database. b Results using the RefSeq-bf (bacteria and fungi) database, containing all bacterial and fungal genomes available. c Partial RefSeq database containing only some of the fungal species currently present in the RefSeq-bf database, mimicking the effects of dealing with species without representatives in reference data sets. In this case, Kraken2, Centrifuge, and KrakenUniq have overlapping results. Refer to Additional file 1: Figures S1 and S2 and Additional file 2 for more information, including precision and recall
The fastest processing time was achieved by Kraken2 (Table 1). The combined CPU time of KMA and CCMetagen (i.e., the CCMetagen pipeline) was faster than Centrifuge and KrakenUniq when using the whole NCBI nt database, but it was the slowest approach when using the RefSeq database. The KMA indexing of the nt database was limited to only include k-mers with a two-letter prefix, which on average corresponds to only saving non-overlapping k-mers. This prefixing substantially increases the speed and could also be applied to the RefSeq database if a faster processing time is required (Additional file 3). Choosing a longer prefix will result in gaps in the database which in turn will result in lower precision and recall. With a prefix of two, this is relatively limited. When the NCBI nt data set was used, CCMetagen required ~15 min to process a sample (~5 Gb, 7.8M reads on average).
Bacterial communities are best depicted with the CCMetagen pipeline
We assessed the performance of the CCMetagen pipeline when applied to 10 bacterial communities simulated at different levels of complexity [32,33]. Using the NCBI nt collection as a reference, CCMetagen achieved the highest precision and F1 scores at all taxonomic ranks (Fig. 3). Recall was highest for Centrifuge and KrakenUniq. In this data set, the recall of Kraken2 was higher than CCMetagen from phylum- to family-level classifications, but lower than CCMetagen at the genus and species level.
Biological data set 1: Experimentally seeded fungal metatranscriptome
We validated the CCMetagen pipeline with a fungal community previously generated in vitro by culturing, processing, and sequencing 15 fungal species ( [34], Additional file 5). The analyses were performed using the NCBI nt collection as a reference. Our pipeline correctly retrieved 13 of the 15 fungal species sequenced, in addition to identifying a small component of other eukaryotic (0.4%) and bacterial (3%) RNA, which likely represent laboratory contaminants (Fig. 4, Additional file 5).
As this data set contains the same 15 fungal species as those simulated in silico, it is possible to tease apart classification errors from laboratory-related confounders such as contamination. Accordingly, we were able to retrieve all 15 species when using the in silico data set, suggesting that the two false negatives (Schizosaccharomyces pombe and Debaryomyces hansenii) were missing due to laboratory-related issues, such as RNA extraction biases, gene [under] expression, and imprecise cell counts. We also identified seven times more false positives in the seeded fungal metatranscriptome (44 species, including bacteria, while the simulated data yielded only 6). These additional 38 species were present at low abundance and possibly represent reagent and laboratory contaminants [35,36] as they were not identified in the analysis of the equivalent simulated metatranscriptome.
Biological data set 2: Australian birds
We used the CCMetagen pipeline to characterize the gut microbiome represented in 9 metatranscriptome libraries from wild birds sampled at various sites across Australia [37,38]. These samples were collected as part of a long-term avian influenza study and were stored in Viral Transport Medium (brain-heart infusion broth containing 2 × 10^6 IU/l penicillin, 0.2 mg/ml streptomycin, 0.5 mg/ml gentamicin, 500 U/ml amphotericin B, Sigma), possibly simplifying microbiome composition and abundance, but not necessarily eliminating microbial genetic material. Indeed, fungal and bacterial transcripts were observed in all libraries (Additional file 6). Eukaryotic microbes accounted for 60% of the family-level diversity of the bird microbiome samples (taxa unclassified at family-level were not taken into account). Notably, fungi represented 12 of the 20 most abundant microbial families (Fig. 5). Among the fungal transcripts with a species-level classification, those attributed to the basidiomycete Cystofilobasidium macerans (Tremellomycetes) were the most abundant and were present in all bird libraries. Transcripts from species of filamentous fungi (e.g., Mucor, Cladosporium, Fusarium) and yeasts (e.g., Cryptococcus, Metschnikowia) were common. The high diversity of fungi associated with birds is unsurprising, as birds are known to play an important role in the ecology and distribution of yeasts and fungal spores [39,40]. Bird excrement is a natural niche for species of the opportunistic pathogen Cryptococcus [41,42], and several studies have reported Mucor, Cladosporium, and Cryptococcus [40,46]. Other microbial eukaryotes were also observed, including the trichomonad Simplicimonas and the Apicomplexan Eimeria. Archaeal and viral transcripts were also detected. The methanogenic archaeon Methanobrevibacter woesei, which was previously reported in chicken guts [47], was observed in two duck libraries. Influenza A virus was detected and confirmed with PCR-based methods [37]. The CCMetagen results were parsed with PhyloSeq for a graphical representation of the most abundant microbes, and the R script to reproduce Fig. 5 is available on the CCMetagen website [48].
Discussion
The application of the ConClave sorting scheme to differentiate highly similar genetic sequences [16] represents an important step forward in metagenomic species profiling. We have applied this concept to develop a metagenome classification pipeline that is highly accurate yet fast enough to use the entire NCBI nucleotide collection as a reference, thereby facilitating the identification of microbial eukaryotes in metagenomic studies. The species-level identifications of bacteria and fungi obtained with the CCMetagen pipeline were from 3× to 1580× more precise than other metagenome classifiers (across all databases tested). CCMetagen is therefore a powerful tool for achieving accurate taxon identifications across a range of biological kingdoms in metagenome or metatranscriptome samples. Scarce reference data pose a major challenge for studying any microbial system that is less well-studied than the human gut. Some methods with reportedly high accuracy rely heavily on reference databases of complete or near-complete genomes. KrakenUniq, for example, showed relatively high precision and recall when using the RefSeq-bf database, which contained the complete genomes of all species in the test data set. However, when KrakenUniq was tested with an incomplete reference database (RefSeq-f-partial), the number of false positives increased, on average, from 51 to 221 species. This likely happens because it is relatively easy to identify a species that is present in the reference database, while it can be challenging to identify the closest match in the absence of a perfectly matching reference sequence. In the latter case, when reads are classified individually, multiple reference sequences can have identical levels of similarity, leading to a high number of false positives. This is an obvious problem when working with microbial eukaryotes, for which very few complete genomes are available.
One of the many advantages of metagenomics is that it enables the detection of both novel and rare microbes. Being able to distinguish between known and novel microorganisms in metagenomic data sets is a desirable feature possessed by surprisingly few metagenome classifiers. Some of these classifiers (e.g., MEGAN and Kraken) use the lowest common ancestor between all reference sequences that match the query sequence. The accuracy of these taxonomic classifiers tends to decrease as reference databases get populated with closely related taxa [14], and paradoxically, well-known taxa can be classified at higher taxonomic ranks than rare or novel ones. CCMetagen classifies taxa at the lowest common ancestor that reflects the genetic similarity between the query and the reference sequence. As rates of molecular evolution can vary substantially among genes and species, it is currently not feasible to set a universal sequence similarity threshold that works equally well for all organisms and genes. By default, CCMetagen uses similarity thresholds previously determined for fungi [49,50]. Importantly, CCMetagen allows the user to easily set different similarity thresholds or disable the threshold-filtering step entirely. While this strategy also has limitations, it is a better alternative to the reference-dependent method of calculating LCAs, even when using the default thresholds for bacterial classifications (Fig. 3).
With CCMetagen, it is possible to confidently use metagenomics to identify microbial eukaryotes and prokaryotes in microbial communities. Our analyses of the gut microbiome of wild birds revealed an abundant and diverse community of micro-eukaryotes, representing 60% of the family-level diversity in the samples. We detected various species of Mucor and of basidiomycetes, including species of the opportunistic pathogen genus Cryptococcus. These and other non-ascomycetes fungi can be affected by mismatches in commonly used metabarcoding primers [51][52][53]. The fact that they were observed in high abundance indicates that metagenomics and metatranscriptomics are valuable for detecting these organisms in environmental samples. A recent analysis of 38 human gut microbiome samples using the CCMetagen pipeline and the NCBI nt database revealed only three fungal taxa (Saccharomycetaceae, Rhizopodaceae, and one unidentified family in the Dothideomycetes) in three samples [54], supporting the notion that the high diversity and abundance of fungi observed here is a feature of the avian microbiome rather than an artifact of the analysis. Importantly, CCMetagen can generate results in a format that resembles an operational taxonomic unit (OTU) table that can be imported into software designed for microbial community analyses, such as PhyloSeq [19], facilitating downstream ecological and statistical analyses of the microbiome.
Conclusion
In summary, CCMetagen is a versatile pipeline implementing the ConClave sorting scheme (via KMA) to achieve more accurate taxonomic classifications than current analytic methods. The pipeline is fast enough to use the entire NCBI nt collection as the reference, facilitating the inclusion of understudied organisms, such as microbial eukaryotes, in metagenome surveys. CCMetagen produces ranked taxonomic results in user-friendly formats that are ready for publication (with Krona) or for downstream statistical analyses (with PhyloSeq). The pipeline is freely available as a web service and as a command line application. We expect that a range of novel ecological and evolutionary insights will be obtained as information about microbial eukaryotes in metagenomic studies becomes more accessible.
CCMetagen workflow and implementation
CCMetagen is a workflow implemented in Python 3 (Python ≥ 3.6). The analysis requires a reference database in which sequence headers contain taxonomic identifiers (taxids). Ready-to-use reference databases (NCBI nt and RefSeq) and instructions to create custom reference databases are provided on the CCMetagen website: https://github.com/vrmarcelino/CCMetagen [21]. Sequence reads, contigs, or long reads are first mapped to the reference database with KMA [16], which accepts single-end or paired-end, fastA, fastQ, and compressed (gzip) formats. CCMetagen is then used to process the KMA results via two main programs: CCMetagen.py and CCMetagen_merge.py. The first command takes as input the results of KMA and performs a customized quality control where the user can specify the minimum requirements to accept a match in terms of sequence depth, coverage, and ConClave scores. The pipeline will only detect two (or more) closely related lineages if there are detectable SNP differences between the consensus alignments (between query sequences and templates). Supposing that Fig. 1b (Step 1) referred to two closely related species with different abundances, the pipeline would not detect them as separate taxa, as there are no detectable differences between them.
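A minimal end-to-end sketch of this workflow, wrapping the command-line steps in Python, is given below. The file names and database path are placeholders; the KMA options mirror those reported for the biological data sets in this study, while the CCMetagen flag names are assumptions that should be checked against the online tutorial.

```python
import subprocess

db = "ncbi_nt_kma_indexed"                      # KMA-indexed reference database (placeholder)
r1, r2 = "sample_R1.fq.gz", "sample_R2.fq.gz"   # paired-end reads (placeholders)

# Step 1: map reads against the reference database with KMA.
subprocess.run(
    ["kma", "-ipe", r1, r2, "-o", "sample_kma", "-t_db", db,
     "-1t1", "-mem_mode", "-apm", "f"],
    check=True,
)

# Step 2: quality-filter and rank the KMA matches with CCMetagen
# (flag names assumed; see the project tutorial for the exact interface).
subprocess.run(
    ["CCMetagen.py", "-i", "sample_kma.res", "-o", "sample_ccmetagen"],
    check=True,
)

# Step 3: merge per-sample results into a single OTU-like CSV table.
subprocess.run(
    ["CCMetagen_merge.py", "-i", "ccmetagen_results_folder"],
    check=True,
)
```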
The CCMetagen.py program then processes taxonomic information using the ETE toolkit [55] and outputs a ranked taxonomic table, where taxon names for superkingdom, kingdom, phylum, class, order, family, genus, and species are attributed when known. Sequence similarity of the consensus alignment between query sequences and the template is calculated with KMA. CCMetagen.py applies a sequence similarity threshold to define the lowest taxonomic rank that can be attributed with confidence. The default thresholds are based on large-scale analyses of fungal sequences [49,50] and can be changed or disabled (so that no similarity filtering is performed) using built-in options in CCMetagen.py. The program provides the option to convert abundance units to the commonly used reads per million (RPM), and to produce interactive graphs showing the relative abundance of taxa using Krona [17]. After processing individual samples with CCMetagen.py, the user can use CCMetagen_merge.py to produce a single spreadsheet containing the results of all samples in comma-separated values (CSV) format. This spreadsheet resembles an operational taxonomic unit (OTU) table, helping to integrate the CCMetagen results with existing statistical software designed for microbiome analysis (e.g., PhyloSeq [19]). CCMetagen_merge.py provides the option to merge taxa at different taxon ranks and to include or exclude taxa. A step-by-step tutorial on the CCMetagen workflow is provided online (https://github.com/vrmarcelino/CCMetagen/tree/master/tutorial [48]), and a web server version of CCMetagen, which requires no command line knowledge from the user, is available at https://cge.cbs.dtu.dk/services/ccmetagen/ [23].
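The rank-flooring behaviour can be pictured with the toy sketch below; the numeric thresholds are invented placeholders rather than CCMetagen's actual defaults, which come from the fungal studies cited above and can be changed or disabled by the user.

```python
# Illustrative only: map the percent identity of a consensus alignment to the
# lowest rank that will be reported. Threshold values are invented placeholders.
RANK_THRESHOLDS = [
    ("species", 98.0),
    ("genus",   95.0),
    ("family",  90.0),
    ("order",   85.0),
    ("class",   80.0),
    ("phylum",  75.0),
]

def lowest_confident_rank(percent_identity: float) -> str:
    """Return the deepest taxonomic rank whose similarity threshold is met."""
    for rank, threshold in RANK_THRESHOLDS:
        if percent_identity >= threshold:
            return rank
    return "unclassified below phylum"

# A 96.2% identity match would be reported at genus rather than species level.
print(lowest_confident_rank(96.2))
```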
Test data sets
A fungal metagenome and a metatranscriptome were simulated in silico to assess the performance of CCMetagen and other classification pipelines in identifying the fungal members of a microbial community (Additional file 7). Simulations were based on complete fungal genomes obtained from the NCBI RefSeq collection [56]. The metagenome contained 30 fungal species and was simulated with Grinder [57] using parameters to mimic the insert size and sequencing errors of an Illumina library (-md poly4 3e-3 3.3e-8 -insert_dist 500 normal 50 -fq 1 -ql 30 10). Coverage was set to vary between 0.001× and 10× for different species. The simulated metagenome contained 6,767,167 PE reads (6,695,384 PE reads after quality control, see Additional file 3).
The metatranscriptome contained 15 fungal species and was simulated for a subsample of 4000 genes (CDSs) from each fungal genome. Transcripts were simulated with Polyester [58], using the Illumina5 error model and gene expression following a normal distribution with an average of 3× (20% of genes up- and 20% downregulated). The simulated fungal metatranscriptome contained 9,009,121 PE reads (9,008,363 PE reads after quality control, see Additional file 3).
Additionally, 10 bacterial metagenomes simulated by Segata et al. [32], and compiled in McIntyre et al. [33], were used to assess the performance of the different classifiers in identifying prokaryotic communities with various levels of complexity. Each metagenome contained between 25 and 100 bacterial species [33].
Reference databases
Reference databases were downloaded and indexed as described in Additional file 3. We used three reference databases: (i) "nt"-the NCBI nucleotide collection [20]; (ii) "RefSeq-bf," containing curated genomes of fungi (all assembly levels) and bacteria (only complete) in the NCBI Reference Sequence Database [56]; and (iii) "RefSeq-f-partial," which is a subset of RefSeq-bf, containing only part of the fungal species in our test data sets. The RefSeq-f-partial database was built to assess how the programs perform when reference databases are incomplete, for example, when dealing with species without reference genomes. Fifteen species were removed, resulting in a database that contained 15 of the 30 species in the fungal metagenome sample, and 7 of the 15 species in the metatranscriptome sample (species removed from this data set are listed in Additional file 8). The nt and RefSeq-bf databases indexed to function with KMA and CCMetagen are hosted in two sites, at https://doi.org/10.25910/5cc7cd40fca8e [59] (Australia) and http://www.cbs.dtu.dk/public/CGE/databases/CCMetagen/ [60] (Denmark).
Benchmarking
Details about the quality control and data analyses are described in Additional file 3. Metagenome classifications using Kraken2 v.2.0.6-beta, KrakenUniq v.0.5.6, and Centrifuge v.1.0.3-beta were performed using default values. The performance of the classifiers was assessed in terms of precision, recall, F1 score, and CPU time. Precision was calculated as: Precision = True Positives / (True Positives + False Positives). Recall was calculated as: Recall = True Positives / (True Positives + False Negatives). The F1 score, which is the harmonic average of precision and recall, was calculated as: F1 = 2 × (Precision × Recall) / (Precision + Recall). True positives reflect the number of taxa in the test data set that were retrieved by the analysis. Likewise, false positives refer to the number of taxa that were identified in the analysis but were not present in the test data set, while false negatives are taxa present in the test data set that were not detected by the analysis. The accuracy of abundance estimates was not benchmarked in this study. Precision and recall were multiplied by 100 to indicate percentages. Precision, recall, and F1 scores were calculated at the levels of species, genus, family, order, class, and phylum, following the hierarchy of the NCBI taxonomic database [18]. Only matches to organisms with valid taxids were included in the analyses. Valid but obsolete taxids (altered due to nomenclature changes) were updated accordingly using the ETE toolkit [55]. This strategy also minimizes nomenclature problems. For example, Filobasidiella neoformans is a life stage of Cryptococcus neoformans; they share a unique taxid (5207) regardless of the name attributed to the sequence in the reference database. The benchmarking scripts are available at https://github.com/vrmarcelino/CCMetagen/tree/master/BenchmarkingTools.
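For concreteness, the sketch below computes the three metrics from sets of predicted and true taxa at a single rank; the genus names in the toy example are arbitrary.

```python
def precision_recall_f1(predicted: set, truth: set):
    """Precision, recall and F1 (in %) from sets of taxa at one taxonomic rank."""
    tp = len(predicted & truth)    # taxa in the test set that were retrieved
    fp = len(predicted - truth)    # taxa reported but absent from the test set
    fn = len(truth - predicted)    # taxa in the test set that were missed
    precision = 100.0 * tp / (tp + fp) if (tp + fp) else 0.0
    recall = 100.0 * tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example with arbitrary genus names:
pred = {"Cryptococcus", "Mucor", "Saccharomyces", "Escherichia"}
true = {"Cryptococcus", "Mucor", "Aspergillus"}
print(precision_recall_f1(pred, true))  # approximately (50.0, 66.7, 57.1)
```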
CCMetagen applied to real data sets
We validated the CCMetagen pipeline using two biological data sets: one defined fungal community (biological data set 1) and one set of environmental samples (biological data set 2). The fungal community was constructed by culturing, pooling, and sequencing the same 15 fungal species used in the metatranscriptome simulated in silico (SRA BioProject number PRJNA521097) [34].
The biological data set 2 consisted of nine metatranscriptome libraries derived from gut samples from Australian wild birds (SRA BioProject number PRJNA472212) [37]. Quality control was performed as described in Marcelino et al. [38].
These samples were mapped to the NCBI nucleotide database using KMA with the options -1t1, -mem_mode, and -apm f, and then processed with CCMetagen using default values. The results were parsed with PhyloSeq to produce a graph of the identified taxa.
Adipocytes as a vehicle for ex vivo gene therapy: Novel replacement therapy for diabetes and other metabolic diseases
Abstract Because of its availability and recent advances in cell biology, adipose tissue is now considered an ideal target site for the preparation of recipient cells and for the transplantation of gene‐transduced cells for supplementation of therapeutic proteins. Inherited or acquired serum protein deficiencies are the ideal targets for gene therapy. However, to develop an effective ex vivo gene therapy‐based protein replacement treatment, the requirements for the recipient cells are different from those for standard gene therapy that is intended to correct the function of the recipient cells themselves. To meet the requirements for such a therapeutic strategy, recent in vitro and animal model studies have developed new methods for the preparation, culture, expansion and manipulation of adipose cells using advanced gene transduction methods and transplantation scaffolds. In this short review, we introduce the progress made in novel adipose tissue‐based therapeutic strategies for the treatment of protein deficiencies by our group and other investigators, and describe their future applications for diabetes and other metabolic diseases. (J Diabetes Invest, doi: 10.1111/j.2040‐1124.2011.00133.x, 2011)
INTRODUCTION
Since the first gene therapy trial against advanced melanoma using gene-transduced lymphocytes was published in 1990 1 , numerous therapeutic clinical trials have been carried out, and inherited monogenic disorders represent approximately 8% of the diseases targeted by gene therapy applications (http://www.wiley.com/legacy/wileychi/genmed/clinical/). Recent studies on the biology of pluripotent stem or progenitor cells have suggested the sustained production of therapeutic proteins to be a potential treatment strategy for patients with a variety of genetic disorders [2][3][4][5] . The ability of cells to self-renew at a high proliferation rate has led to the expectation that these cells might be ideal targets for retroviral vector-mediated transgene delivery for permanent correction of the defect, not only for immunodeficiencies, but also for a variety of inherited or acquired metabolic diseases, including diabetes mellitus.
EX VIVO GENE THERAPY FOR IMMUNODEFICIENCIES
The most impressive outcomes of ex vivo gene therapy trials have been reported in subjects with immunodeficiencies as a result of monogenic disorders, including adenosine deaminase deficiency (ADA-SCID) 6,7 , γc chain deficiency (X-SCID) 8,9 or X-linked chronic granulomatous disease (X-CGD) 10,11 , where the treatments were combined with the infusion of ex vivo gene-corrected hematopoietic cells. Among these trials, the treatment for X-SCID caused the oncogenesis of gene-transduced cells through the clonal expansion of the cells with the activation of cellular oncogenes as a result of insertion of the MLV LTR sequence into the promoter region of the LMO2 gene 12 . Clonal expansion was also reported in X-CGD gene therapy trials 11 and myelodysplasia with monosomy 7 was caused by the insertional activation of ecotropic viral integration site 1 (EVI1) 13 .
To correct the immune disorder in these patients, it is necessary for the infused gene-corrected cells to grow, differentiate into multiple hematopoietic lineages and reconstruct the immune system. In the case of X-SCID, the introduced gene (γc) is essential for the maturation of T cells, hence, only the gene-transduced cells grow and mature into functional lymphocytes, causing in vivo selection of the gene-corrected cells 14 , although the precise mechanisms underlying the development of leukemia in such patients are not completely understood 15 .
EX VIVO GENE THERAPY FOR FAMILIAL HYPERCHOLESTEROLEMIA
The liver is one of the primary sites of metabolic activity, and is thus the target organ of the pathogenesis for many metabolic disorders. Hepatocytes are the major cell type in the liver and have the ability to proliferate after injury, making them seem like an ideal target for ex vivo gene therapy purposes. Using essentially the same technique, in which a partial hepatectomy followed by MoMLV-mediated gene transduction and reinfusion of the cells was carried out, a total of five familial hypercholesterolemia patients were treated 16,17 . However, levels of serum cholesterol reduction in these patients were moderate, and metabolic responses after gene transfer varied substantially among the five recipients. Thus, the strategy has not been carried out again to date, as a result of the invasiveness of the procedure and ineffective cell engraftment in addition to difficulties in cell preparation steps 18 , and the development of the treatment has been shifted to more efficient in vivo transduction methodologies 19 . The various gene therapy trials carried out for the treatment of various metabolic deficiencies are summarized in Table 1.
CURRENT PROGRESS IN OTHER DISEASES
Genetic and acquired disorders causing secreted serum enzyme deficiencies have also been postulated to be ideal targets for gene therapy applications. In these diseases, the deficient protein functions systemically, and its defect causes severe complications in target organs. Therapeutic genes expressed by a viral vector are directly infused into the target tissues (in vivo gene therapy), or therapeutic gene-transduced cells are transplanted (ex vivo gene therapy) and, subsequently, functional proteins are produced systemically to improve the symptoms through protein replacement therapy.
In the former strategy, the gene transduction efficiency might vary depending on the tissue and cell types, and unexpected ectopic gene transduction is not completely prevented. Acute toxicity has been observed after the clinical use of an adenoviral vector 20 , leading to limited further use. The efficacy of the currently available AAV vectors was shown to be hampered by the pre-existing host immune system, resulting in limitations of their applications to a clinical trial for hemophilia B treatment 21 .
In the latter strategy, these side-effects can be minimized by preparing the recipient cells in vitro, and gene transduction efficiency is controllable and checked before transplantation, although cell preparation steps are required. In addition, transplanted cells are required to reside and/or survive in the patient rather than replicate, in order to continue providing a therapeutic level of protein secretion. Hemophilia has been indicated to be one of most obvious candidates for protein replacement therapy. Although considerable efforts have been expended to apply ex vivo gene therapy to treat these patients, no obvious clinical benefits were observed [22][23][24] . However, transplantation of genetically-modified fibroblasts into the forebrain was shown to be effective in clinical gene therapy trials of Alzheimer's disease 25 . Another approach using encapsulated-cell biodelivery technology to provide nerve growth factor (NGF) release (the product name is NsG0202) is currently being studied in a clinical trial. In this strategy, cells are enclosed by an immunoprotective, semi-permeable, hollow fiber membrane, enabling the influx of nutrients and outflow of NGF, and preventing the direct contact of the cells with the host tissue and immune system. Preliminary results have shown good safety and tolerability with no serious adverse events, and an increase in the expression of cortical nicotinic receptors, and three patients have shown cognitive improvement 26 . However, these strategies were designed for local supplementation of NGF. There is thus an absolute necessity for a novel approach to systemic delivery of therapeutic proteins. Therefore, long-lasting protein replacement therapy using gene-transduced cells is needed to provide a sufficient therapeutic strategy for systemic metabolic diseases.
ADIPOSE TISSUE AS A TARGET TISSUE FOR EX VIVO GENE THERAPY
To develop life-long protein replacement therapy through transplantation of gene-transduced cells, adipose tissue has been explored as a suitable target for several reasons. First, aspirated fat is a common source of autologous tissue transplantation for the correction of tissue defects in plastic and reconstructive surgery [27][28][29] . Adipose tissue is well-vascularized, and now is recognized as an important endocrine and secretory organ [30][31][32][33] , and thus could enable the systemic delivery of the therapeutic protein in cell-based gene therapy applications [34][35][36][37] . Fat cells have been shown to have a relatively long lifespan 38 . With regard to safety concerns, lipoaspiration or resection of adipose tissue and fat grafting are routinely carried out in the plastic and reconstructive surgery field with minimal risk. Adipocyte-based therapeutic strategy for enzyme replacement therapy is shown in Figure 1.
Recently, adipogenic potential has been shown to suppress the tumorigenic activity of ink4a knockout mesenchymal stem cells 39 . Furthermore, if the gene-transduced cells show an abnormal phenotype, the transplanted cells residing in the transplantation space could be easily excised. In fact, it has already been shown that the transplanted cells can be excised on occurrence of unexpected or abnormal effects 35 . These findings should encourage researchers to develop an adipose tissue-based lifelong and risk-manageable treatment for patients with serum protein deficiencies.
SCAFFOLD DEVELOPMENT FOR CELL TRANSPLANTATION
For the successful treatment of such cell transplantation-based therapies, it is important to select suitable scaffolds for the transplanted preadipocytes, adapting the transplantation site to optimize their survival, differentiation and protein expression. These materials must fulfill several requirements, including mechanical support and the ability to guide tissue reconstruction, as well as biocompatibility, biodegradability and easy handling 40,41 . In this context, fibrin glue is capable of supporting the secretion of the exogenously transduced gene product from preadipocytes in vivo 42 . Considering the previous reports showing the importance of various cytokines for the regulation of cell function and the surrounding matrix conditions [43][44][45][46][47][48][49][50] , these combinations with our fibrin gel condition could improve the outcomes of adipocyte-based gene therapies.
PREADIPOCYTES WITH HIGH ADIPOGENIC POTENTIAL
Recently, adipose tissue has been demonstrated to be a source of proliferative cells for cell-based therapies, such as regenerative medicine and gene transfer applications. Two types of preparation have been reported to be sources of adipose tissue-derived proliferative cells. One is stromal vascular fractions (SVF), which can be obtained as a sediment by the centrifugation of collagenase-digested fat tissue 9 and is the most commonly used technique. The adherent cells obtained from SVF are now recognized as adipose tissue-derived stem cells (ASC), which are pluripotent and can differentiate to yield various cell types, including cardiomyocytes, chondrocytes and osteoblasts, in addition to adipocytes, thus providing a relatively heterogeneous cell population appropriate for regenerative therapy [51][52][53] . However, these data show that SVF are heterogeneous, and therefore imply that SVF might not result in a stable therapeutic gene vehicle for gene therapy purposes.
The other cell preparation is obtained from the floating mature fat cell fraction obtained after the centrifugation, followed by a ceiling culture 54 . Because the cells are propagated using the buoyant properties of mature adipocytes in this preparation, the progeny cells are more homogeneous than ASC. Proliferative adipocytes were propagated by the ceiling culture technique from the mature adipocyte fraction, and the cells were designated as ceiling culture-derived proliferative adipocytes (ccdPA) 55 . The ccdPA are nearly homogeneous and show only a trace of mature adipocytes by analysis of surface antigen profiles. On stimulation to induce differentiation, the ccdPA showed increased lipid droplet accumulation accompanied with higher adipogenic marker gene expression compared with the ASC, even after in vitro passaging, suggesting the commitment of ccdPA to the mature adipocyte lineage 56 .
GENE-TRANSDUCED ADIPOCYTES AS VEHICLE CELLS
MoMLV-mediated gene transduction in human ccdPA resulted in a high gene transduction efficiency 55 . In search of optimal transplantation conditions, the 3-D long-term culture system using fibrin gel, a tissue sealant utilized in the clinic, was established. The gene-transduced ccdPA spontaneously accumulate lipid droplets without any artificial stimulation in 3-D culture using the fibrin glue. In one study, the insulin gene-transduced cells were propagated, and the efficacy of these cells was evaluated in a diabetic mouse model 35 . The transplantation of the cells improved hyperglycemia and blood HbA1c concentrations in a manner that was dependent on the cell number, without causing hypoglycemia. The plasma insulin concentration was dependent on the implanted cell number, and the systemic effect of the circulating insulin was confirmed by a marked improvement in bodyweight reduction and liver glycogen content. Thus, the autotransplantation of gene-transduced ccdPA could serve as a novel clinical application for a variety of systemic metabolic disorders.
AN EX VIVO GENE THERAPY TRIAL USING EXOGENOUS GENE-TRANSDUCED ADIPOCYTES
Lecithin-cholesterol acyltransferase (LCAT) deficiency has been identified as a genetic metabolic disorder. Cholesteryl ester levels are markedly reduced in lipoproteins, and abnormal cholesterol deposition is observed in the tissues of these patients, who often develop severe complications including corneal opacity, anemia, proteinuria and renal failure 57 . LCAT deficiency is caused by mutations in the lcat gene, and more than 40 different mutations have been identified to date 58 . Protein replacement treatment was suggested to be effective; however, no approach for the permanent correction of the symptoms has been reported.
However, in a previous study, the human lcat gene was transduced into human ccdPA by a retroviral vector. The transduced cells secreted functional LCAT protein in vitro, correlating with the integrated copy number of vector genomes 55 . The secreted LCAT protein clearly ameliorated the disturbed high-density lipoprotein subpopulation profile caused by impaired LCAT function in patients' serum by the in vitro incubation assay, strongly suggesting the feasibility of our strategy 59 . An application of this in vitro assay system to evaluate the responsiveness of patients is now under investigation. The LCAT delivery achieved in the mouse model with the clinically available fibrin scaffold was enough to suggest the efficacy of the ex vivo gene therapy strategy to prevent a poor prognosis in those patients 41 .
The potential safety issues related to the ccdPA have been carefully addressed 55 . Gene transduction did not affect the cell growth, adipogenic differentiation or surface antigen profiles of the cells. The average integrated copy number was stable during the in vitro expansion process, and clonal expansion was not observed, indicating no predominant growth of gene-transduced cells. The transplantation experiments showed no signs of side-effects.
CONCLUSION
There are high hopes that a successful gene therapy approach can be developed in the future to treat rare genetic defects. Numerous studies have been carried out to develop such treatment strategies, both on the basic level and in the clinic. Although hematopoietic cells are proven target cells for ex vivo gene therapies, especially for immune-related diseases in which those cell functions are primarily affected by the gene defects, they might not be suitable targets for the many metabolic diseases that result in impairment of multiple organs. The physiological functions and applicability of adipose tissue would enable researchers to develop a novel therapeutic strategy to deliver therapeutic proteins systemically.
Mature adipocytes have been explored as a source of target cells for ex vivo gene therapy. Propagated ccdPA would provide an excellent platform for a novel adipocyte-based protein replacement therapy for patients with serum protein deficiencies who require long-term therapeutic protein supplements. A good manufacturing practice production procedure has been established, and the gene-transduced cells can be expanded up to nearly 10^12 cells from 1 g of fat tissue within 1 month after fat tissue preparation 55 . To further expand the adipocyte-based therapeutic strategy for the supplementation of other proteins, it will be necessary to evaluate the characteristics of ccdPA from various kinds of fat diseases, such as those from subjects with metabolic syndrome, which might affect the secretion function of adipose tissues, and to develop an allogeneic transplantation method for patients with lethal conditions in childhood, as well as to establish the necessary transplantation procedure. After careful consideration of safety in combination with efficacy, the novel transplantation therapy developed using adipocytes might be applicable not only for genetic deficiencies, but also for lifestyle-related diseases, including diabetes mellitus.
FM807, a curcumin analogue, shows potent antitumor effects in nasopharyngeal carcinoma cells by heat shock protein 90 inhibition
Nasopharyngeal carcinoma (NPC) is an epithelial malignancy usually associated with overexpression of both epidermal growth factor receptor (EGFR) and β-catenin. FM807 is a novel curcumin analogue with antitumor activity against both poorly and well-differentiated NPC cell lines as well as good selectivity for tumor cells. FM807 actions were shown to include inhibition of cell growth, induction of necrotic/late apoptotic cell death, and G1 arrest in NPC cells. Crucially, it exhibited potent antitumor effects both in vitro and in vivo. Binding of FM807 to the N-terminus of Hsp90 disrupted Hsp90/client complexes, resulting in degradation of the Hsp90 client protein EGFR and inhibition of the downstream Raf/MEK/ERK and PI3K/AKT pathways. FM807 also depleted levels of the intranuclear transcription factors β-catenin, Cyclin D1 and c-Myc by inhibiting Hsp90-chaperoned nuclear transport. In conjunction with its low toxicity in NPC xenograft mice, these results provide a sound preclinical basis for further development of FM807 as a novel therapeutic agent in the treatment of NPC.
INTRODUCTION
Nasopharyngeal carcinoma (NPC) is a malignant tumor arising from the epithelial cells of the nasopharynx. High rates of NPC occur in Southeast Asia, especially southern China, with significant racial and regional distribution characteristics [1]. As early symptoms are not obvious, 60-70% of patients with NPC are not diagnosed until the disease reaches an advanced stage [2]. Radiotherapy and chemotherapy are the standard treatments for NPC, and their clinical outcomes have been greatly improved since the application of intensity-modulated radiotherapy (IMRT) [3]. Although several trials have reported very encouraging results, with locoregional control rates over 90%, clinicians are still confronted with local recurrence and/or distant metastasis as the major patterns of disease failure [4].
Heat shock protein 90 (Hsp90) is a highly conserved molecular chaperone involved in the maturation and stabilization of over 200 oncogenic client proteins [5]. Most Hsp90 client proteins, such as epidermal growth factor receptor (EGFR), AKT, C-Raf (also called Raf-1), Cdk4, Bcr-Abl, and p53, are essential for tumor growth, proliferation and survival [5][6][7]. The Hsp90 inhibitor AT13387 inhibited NPC C666-1 cell growth and induced cellular senescence, with downregulation of multiple Hsp90 client oncoproteins (EGFR, AKT, CDK4), and significantly suppressed tumor formation in C666-1 NPC xenografts [8]. The Hsp90 inhibitor AUY922 combined with erlotinib could also overcome EGFR mutants with acquired resistance to tyrosine kinase inhibitors (TKIs) [9]. The classical Hsp90 inhibitor 17-AAG could also downregulate mutant EGFR expression and sensitize EGFR-mutant tumors [10]. Therefore, inhibition of the Hsp90 machinery is considered to be a potent strategy in NPC therapies.
Curcumin (Cur), a polyphenol derived from the herb Curcuma longa, has been extensively studied as chemopreventive agent ascribed to the modulation of signal-transduction pathways associated with cell proliferation, invasion and angiogenesis [11]. Cur has been reported to down-regulate some clients of Hsp90,
such as BCR-ABL, AKT and EGFR, and to dissociate the Hsp90 co-chaperone p23, suggesting an inhibition of Hsp90 function [12,13]. Despite its poor bioavailability in vivo, Cur remains a good lead compound [14], as hundreds of curcumin derivatives with lead-like properties have been reported, many of them proving to be more active [15][16][17][18]. Therefore, a series of curcumin derivatives have been synthesized in our lab and tested for their anti-tumor activities in dozens of cancer cell lines [19,20]. We incorporated a 2-hydroxy-benzoic acid chain into Cur and synthesized a novel Cur analogue, FM807 (2-hydroxy-, 4-[(1E, 6E)-7-(4-hydroxy-3-methoxyphenyl)-3,5-dioxo-1,6-heptadien-1-yl]-2-methoxyphenyl ester), which retains the β-diketone structure and exhibits better anti-inflammatory activity than Cur [21]. In the present work, we showed that FM807 inhibited proliferation and induced G1 arrest in well-differentiated CNE1 and poorly-differentiated CNE2 cells in vitro, and exhibited antitumor activity against CNE2 xenografts in vivo. We demonstrated that FM807 physically bound to the N-terminus of Hsp90 and inhibited Hsp90 function by affecting the ATP-binding activity of Hsp90, leading to the degradation of Hsp90 client proteins, including EGFR, and inhibition of the downstream Raf/MEK/ERK and PI3K/AKT signaling pathways. Moreover, subcellular fractionation of NPC cells indicated that differentially abundant intranuclear transcription factors such as β-catenin likely underlie the different chemosensitivities of CNE1 and CNE2, and that the molecular chaperone function of Hsp90 in nuclear transport can also be inhibited by FM807. These data suggest that FM807 is a potent Hsp90 inhibitor against nasopharyngeal carcinoma cells.
FM807 inhibits proliferation of NPC cell lines
FM807 was synthesized as a derivative of Cur (Figure 1A). To evaluate its effect on cellular proliferation, Cur and FM807 were administered to the NPC cell lines CNE1 and CNE2 for 72 h, and cells were subjected to an MTS assay (Figure 1B). The IC50 values of Cur and FM807 were 37.01 and 25.75 µM for CNE1, and 15.33 and 3.59 µM for CNE2, respectively. These values indicate that the inhibitory action of both Cur and its analogue FM807 on the poorly-differentiated NPC cell line CNE2 was stronger than on the well-differentiated CNE1; moreover, FM807 was more potent than Cur. To confirm these findings, we next performed a colony formation assay to test the effect of Cur and FM807 on cell proliferation (Figure 1C, panels a and b). These data also revealed that CNE2 cells were more sensitive to their effects. Importantly, FM807 displayed greater inhibition of cell proliferation than Cur in NPC cell lines.
To gain further insight into the efficacy of FM807, we next performed colony formation assays using the human normal epithelial cell line Rhek ( Figure 1C, 1c). Although colony formation ability of non-malignant cells was lower than that of tumor cells, FM807 (10 µM) had no effect on Rhek colony formation (P > 0.05).
FM807 induces CNE1 and CNE2 cell cycle arrest and apoptosis
We next assessed the ability of FM807 to induce cell death in both CNE1 and CNE2 cells. Cells from both lines were treated with FM807 for 24 h and analyzed for apoptotic cell death using the FITC Annexin-V Apoptosis Detection Kit I. Results showed a dose-dependent induction of apoptosis, with obvious necrotic/late apoptotic cell death, in FM807-treated cells (Figure 2A, P < 0.05). Western blot analysis showed that the anti-apoptotic proteins Bcl-2 and Bcl-xl exhibited similar reductions, accompanied by increases in the levels of the pro-apoptotic proteins Bax and cleaved caspase-3 (Figure 2B), suggesting that the induction of apoptosis is one of the major anti-tumor mechanisms of FM807 in CNE1 and CNE2 cells.
When compared to vehicle-treated controls, FM807-treated cells displayed marked G1 phase arrest after 24 h of treatment. The increase in the G1 phase cell population induced by FM807 was accompanied by a concomitant decrease in cells in S and G2/M phases. Moreover, G1 arrest was more pronounced in FM807-treated cells (Figure 2C, P < 0.05). We next turned to western blot analysis to further investigate the molecular mechanisms involved in FM807-induced cell cycle arrest of CNE1 and CNE2 cells. CDK2, Cyclin A and Cyclin E showed no significant change with FM807 exposure. Since the induction of the cell cycle regulator CDK4 and its cognate cyclin, Cyclin D1, by mitogenic signals is a critical event in the G1/S transition [25], the observed G1 arrest was consistent with down-regulation of Cyclin D1 and CDK4 (Figure 2D).
FM807 blocks EGFR-mediated downstream signaling pathways
Aberrant EGFR overexpression is frequently observed in NPC, and is associated with tumor metastasis, recurrence, and poor survival in NPC patients [26][27][28]. Binding of the EGF ligand to the extracellular domain of EGFR activates the receptor and its downstream signaling pathways, ultimately activating or modulating various cellular processes [29]. Therefore, targeting EGFR has been suggested as a potent strategy for NPC therapy [30,31].
To gain a better understanding of the mechanisms behind FM807, we tested its effect on activation of EGFR as well as the downstream Raf/MEK/ERK and PI3K/AKT pathways in EGF-stimulated and unstimulated CNE1 and CNE2 cells (Figure 3A and 3B). As expected, EGF activated EGFR as well as the Raf/MEK/ERK and PI3K/AKT pathways (lane 2). However, FM807 reduced levels of EGFR, C-Raf (also called Raf-1), and AKT in both unstimulated and EGF-stimulated CNE1 and CNE2 cells treated with increasing FM807 concentrations. Their phosphorylated forms also decreased in a similar manner with increasing doses of FM807. Quantitative real-time PCR analysis revealed that FM807 did not block transcription of EGFR, C-Raf or AKT (Figure 3C) in either cell line, indicating that the reduction was occurring at the protein level.
Finally, the degradation was completely blocked with treatment of the proteasome inhibitor MG132, indicating that the proteasomal system was responsible for FM807-induced protein degradation ( Figure 3D).
Antitumor activity of FM807 in xenograft models in vivo
We next turned to CNE1 and CNE2 xenograft models to assess the antitumor effects of FM807 in vivo. Nude mice were injected subcutaneously with 1 × 10^7 CNE1 or CNE2 cells. After injection, tumor-bearing mice were randomly assigned to one of three experimental groups (50, 100 and 200 mg/kg FM807, i.g.) or a vehicle control group. FM807 inhibited tumor growth in both CNE1 and CNE2 xenografts.
(Figure 1 legend, abridged: (B) viability of cells treated with 0-80 µM Cur or FM807, measured by MTS assay and expressed as a percentage of vehicle-treated control; (C) colony formation of (a) CNE1, (b) CNE2 and (c) Rhek cells; results are means ± SD of three independent experiments; *P < 0.05, **P < 0.01, ***P < 0.001 vs. control by ANOVA.)
As shown by immunohistochemical and western blot analyses, FM807 induced a significant decrease in EGFR as well as inhibited downstream Raf/MEK/ERK and PI3K/AKT pathways in CNE2-derived tumors. Importantly, FM807 was able to induce EGFR, C-Raf and AKT degradation ( Figure 4E and 4F). Western blot analysis of tumor tissues also showed that total EGFR, C-Raf, AKT and their phosphorylated forms were significantly decreased in all FM807-treated groups ( Figure 4F). These data were consistent with the results obtained at the cellular level and confirmed the antitumor effect of FM807 in targeting EGFR and the downstream Raf/MEK/ERK and PI3K/AKT signaling pathways.
FM807 physically binds to the N-terminal of Hsp90 and blocks Hsp90 ATPase activity in vitro
Previous work done by our lab has shown that Cur degrades Hsp90 client proteins, thereby suggesting that it is an Hsp90 inhibitor [12]. Moreover, EGFR, C-Raf and AKT are all key client proteins of Hsp90 and highly susceptible to Hsp90 inhibition [7]. Thus, we next examined the effect of FM807 on Hsp90. First, we created FM807-loaded resins [22] and incubated them with CNE2 cell lysate or with one of the following purified proteins: (1) histidine-tagged full-length yeast Hsp90, (2) its N-terminal nucleotide-binding domain (NBD), (3) its middle domain (MD), or (4) its C-terminal dimerization domain (CDD) (15 kDa). This affinity-based screen showed that FM807 bound to the NBD of Hsp90, but not to the MD or CDD (Figure 5A).
We then used Molsoft ICM 3.5a to model the interaction between Hsp90 (Protein Data Bank ID 2CCS) and FM807. As shown in Figure 5B, the side chains of ASN51, LYS58, GLY97 and THR184 could potentially form four hydrogen-bonding interactions with the NBD of Hsp90.
Since the chaperone function of Hsp90 is dependent on ATP binding, and ASN51 and GLY97 are both involved in the ATP binding sites of Hsp90 [32], we next tested the influence of FM807 on the ATPase activity of Hsp90. Our colorimetric assay for inorganic phosphates revealed that FM807 could inhibit the ATPase activity of Hsp90 in a concentration-dependent manner. Importantly, the inhibition of FM807 on Hsp90 ATPase was greater than that of Cur ( Figure 5C).
Hsp90 requires a series of co-chaperones such as cell division cycle protein 37 (Cdc37), Hsp70 and p23 to form a super-chaperone complex. This complex is then released at various time points to regulate the folding, assembly and maturation of Hsp90 client proteins [33].
Given this, we sought to further investigate the influence of FM807 on the interactions between Hsp90 and its co-chaperones using CNE1 and CNE2 cells. Hsp90, Cdc37 and p23 levels all showed no distinct changes as a function of increasing FM807 concentration. However, Hsp70, a marker of Hsp90 inhibition, was increased after FM807 treatment (Figure 5D). Furthermore, after depletion of Hsp90 protein by siRNA, FM807 was unable to induce EGFR, C-Raf and AKT protein degradation, suggesting that Hsp90 protein is a direct target of FM807 (Figure 5E).
FM807 suppresses nuclear β-catenin signaling pathway
The data showing a greater sensitivity of CNE2 to FM807 than CNE1 were both interesting and unexpected. We next wondered whether differential expression of Hsp90 clients underlies the difference in FM807 chemosensitivity between the CNE1 and CNE2 cell lines. To answer this question, we used western blot analysis to determine protein levels of EGFR, C-Raf and AKT in CNE1 and CNE2 cells. Counter to our hypothesis, the results showed no significant differences in EGFR, C-Raf and AKT protein levels between CNE1 and CNE2 (Figure 6A). This finding indicates that EGFR and its downstream signaling might not be the reason for the observed differences in chemosensitivity.
It has been reported that CNE1 is more radioresistant than CNE2, and that the transcriptional activity of β-catenin is closely associated with radiation resistance in cancer progenitor and stem cells [34,35]. β-catenin is the key mediator of canonical signaling in the Wnt/β-catenin pathway, and a hallmark of Wnt/β-catenin signaling activation is the stabilization and nuclear translocation of cytoplasmic β-catenin [36]. Intranuclear β-catenin then interacts with numerous downstream proliferation signals, including c-Myc and Cyclin D1 [37].
Western blot analysis was used to determine cytoplasmic and nuclear protein levels in both CNE1 and CNE2 (Figure 6B). Although there were no significant differences in cytoplasmic levels of β-catenin, Cyclin D1 or c-Myc between the CNE1 and CNE2 cell lines, results showed significantly higher nuclear expression of β-catenin, Cyclin D1 and c-Myc in CNE1 cells relative to CNE2 cells. We next determined the effect of FM807 on the nuclear transcription factors in CNE1 and CNE2 cells. Additional western blot analysis further confirmed decreases of nuclear β-catenin, Cyclin D1 and c-Myc in both CNE1 and CNE2 cells following FM807 treatment (Figure 6C). Therefore, the differences in FM807 sensitivity between the CNE1 and CNE2 cell lines might be associated with intranuclear protein expression. However, the exact relationship between FM807 chemosensitivity and nuclear transcription factor levels still remains unclear.
(Figure 5 legend, abridged: (A) pull-down with FM807-loaded versus unloaded resin against CNE2 lysate, His-tagged full-length yeast Hsp90, NBD, MD and CDD; (B) docking model of FM807 at the N-terminus of Hsp90; (C) Hsp90 ATPase activity with 0.5 μM Hsp90, 1 mM ATP and 0-160 μM FM807 or Cur, read at 635 nm; (D) Hsp90, Hsp70, Cdc37 and p23 levels after 24 h of FM807 treatment; (E) EGFR, C-Raf and AKT levels after Hsp90 siRNA knockdown.)
It has been reported that Hsp90 is an essential molecular chaperone for nuclear transport, and that Hsp90 could modulate both β-catenin and its phosphorylation levels [38]. Given this, β-catenin might be a client of Hsp90 [39], meaning that FM807-induced depletion of these nuclear transcription factors would likely operate through Hsp90 inhibition. To confirm this, co-immunoprecipitation experiments were performed with an anti-Hsp90 antibody in both cytoplasmic and nuclear fractions isolated from CNE2 cells (Figure 6D). Results showed that nuclear β-catenin was successfully detected in the Hsp90 co-immunoprecipitation, indicating that β-catenin might be chaperoned by Hsp90. Reverse immunoprecipitation of β-catenin confirmed co-immunoprecipitation of Hsp90 (Figure 6E). Nuclear Cyclin D1 and c-Myc bands were also visible in both the Hsp90 and β-catenin immunoprecipitations, demonstrating the interaction between β-catenin, Cyclin D1 and c-Myc (Figure 6D and 6E). To further clarify the interaction between Hsp90 and nuclear β-catenin, we downregulated Hsp90 expression in CNE2 using Hsp90 siRNA transfection (Figure 6F). Results showed that FM807 was unable to deplete nuclear β-catenin, Cyclin D1 or c-Myc protein levels after Hsp90 knockdown, suggesting that FM807 likely reduces the nuclear translocation of these proteins by inhibiting Hsp90 chaperone function.
DISCUSSION
Hsp90 is a highly conserved and abundant molecular chaperone found in all eukaryotes [7]. Hsp90 is required for proper folding and maturation of most oncogenic proteins which can then aberrantly activate multiple signaling pathways [40][41][42]. Therefore, the molecular chaperone Hsp90 is an attractive target for cancer therapy.
In the present study, we have shown that the Curcumin analogue FM807 has better anti-proliferative activity than Curcumin (Figure 1B and 1C), has both in vitro and in vivo antitumor effects in NPC cells, and displays selectivity for NPC cells over non-malignant cells. We have also shown that FM807 is an Hsp90 inhibitor that directly binds to the N-terminus of Hsp90 (Figure 5A and 5B) and interrupts Hsp90/client complexes (Figure 5E). Inhibition of cell growth and induction of necrotic/late apoptotic cell death and G1 phase arrest (Figure 2) by FM807 were also shown in both well-differentiated CNE1 and poorly-differentiated CNE2 cells.
Overexpression of EGFR, one of the most sensitive Hsp90 client proteins, has been reported in 85% of NPC tissues and is associated with poorer patient prognosis [43][44][45]. EGFR is the tyrosine kinase receptor of the natural ligand EGF, with EGFR activating numerous signaling cascades associated with proliferation, migration, and drug resistance, notably the Raf/MEK/ERK and PI3K/AKT pathway [46]. Both in vitro and in vivo analyses revealed that FM807 potently inhibited EGFR and its downstream signaling ( Figures 3A, 3B, 4C and 4D). Moreover, FM807 was found to suppress EGF-stimulated EGFR/Raf/MEK/ERK and PI3K/AKT cascades and to simultaneously downregulate total EGFR, C-Raf, and AKT levels. This observation raises the possibility that FM807 may have clinical benefit for NPC patients with high EGFR expression.
Moreover, FM807 showed better inhibition in poorly-differentiated CNE2 cells than in well-differentiated CNE1 cells, with IC50 values of 3.59 and 25.75 µM, respectively. It also had better antitumor effects in CNE2-derived xenograft models than in CNE1-derived models (Figure 4A and 4C). This is a promising finding, as CNE2 cells are more malignant than CNE1 [1].
To better clarify how FM807 might lead to such different chemosensitivities, we used western blot analysis to determine cytoplasmic and nuclear protein levels. Our results revealed higher nuclear β-catenin, Cyclin D1 and c-Myc levels in CNE1. β-catenin is a key effector of Wnt signaling, and nuclear β-catenin levels are increased in 92% of NPC tumors, thus rendering it an important component of NPC development [27]. Aberrant intranuclear β-catenin activates numerous downstream targets, including Cyclin D1 and c-Myc, to promote cell proliferation and differentiation. Cyclin D1 is responsible for cell progression through G1 phase, and overexpression of Cyclin D1 enables cells with unrepaired structural or genomic damage to traverse the G1/S checkpoint, thus increasing the risk of tumor formation [47][48][49]. c-Myc is also critical to the regulation of G1/S phase proteins, and upregulation of c-Myc is a common occurrence in cancers owing to chromosomal translocations and point mutations [50,51]. Decreases in nuclear β-catenin, Cyclin D1 and c-Myc were observed in both CNE1 and CNE2 cells (Figure 6B, 6C and 6D). These decreases likely contributed to a reduction in DNA repair and replication capacity resulting from FM807 inhibition of Hsp90, thus preventing cells from transitioning into S phase and arresting them in G1 phase (Figure 2).
Cell lines and reagents
Well-differentiated and poorly-differentiated NPC cell lines CNE1 and CNE2 (obtained from Fujian Provincial Tumor Hospital of China) were cultured in RPMI 1640 supplemented with 10% heat-inactivated fetal bovine serum (FBS) (GIBCO, Gaithersburg, MD, USA), 100 IU/ml penicillin and 100 IU/ml streptomycin. Human epithelial cell line Rhek (obtained from the Medical Center of Minnesota University) was grown in MEM supplemented with 10% fetal bovine serum, 100 IU/ml penicillin, 100 IU/ml streptomycin and 1% L-glutamine. FM807 (purity ≥ 98.5%) was synthesized from Curcumin (obtained from the Third Reagent Factory of Shanghai, China) at the New Drug Institute of Fujian Medical University, China. All other chemical reagents were obtained from Sigma Aldrich.
Chemoproteomics assay
CNBr-activated Sepharose ™ 4B (GE Healthcare) was swelled in 1 mM HCl and washed with coupling buffer (0.1 M NaHCO3, 0.5 M NaCl, pH = 8.3). After the resin was swelled, washed, and added to the coupling buffer, FM807 was dissolved in dimethyl sulfoxide (DMSO) and added to the resin (up to 10 µmoles per mL of medium). The mixture was rotated end over end for 4 h at room temperature, and excess ligand was removed by washing with coupling buffer. Any remaining active groups were blocked for 2 h at room temperature with the capping solution. The column was then equilibrated with coupling buffer for 1 h. Hsp90 test proteins were added to the resin, and the mixture was rotated overnight at 4°C. Any unbound proteins were removed by washing buffer. Loading buffer was added to the resin, boiled for 10 min, and separated using 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis. Protein separation was followed by western blotting for the protein of interest.
MTS assay
Cells were seeded in 96-well plates (5 × 10^3/well) and treated with various concentrations of Cur and FM807. Cell viability was assessed after treatment using an MTS assay (Promega) according to the manufacturer's instructions. Results were calculated based on the principle that the number of living cells is proportional to MTS absorbance at 490 nm. All results are presented as means ± SD from three independent experiments. Inhibition graphs used mean values obtained at each concentration relative to control values. Half-maximal inhibitory concentrations (IC50) were calculated using PASW Statistics 18 (SPSS, Inc.).
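IC50 values here were obtained with PASW/SPSS; an equivalent open-source sketch, using made-up dose-response values and a four-parameter logistic fit in SciPy, is shown below for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability as % of control)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Made-up dose-response data (concentration in µM vs. % viability).
conc = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 80.0])
viability = np.array([95.0, 88.0, 70.0, 45.0, 25.0, 12.0, 6.0])

popt, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 10.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ≈ {popt[2]:.2f} µM")
```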
Colony formation assay
Cells were plated in 24-well plates (500/well) and treated with various concentrations of Cur and FM807. After two weeks of treatment, colonies were stained with Giemsa for 10 min and counted using light microscopy. Colonies were defined as ≥ 50 cells.
Cell cycle analysis
Cells were seeded in 6-well plates and treated with various concentrations of FM807 for 24 h. Cells were harvested, washed with phosphate-buffered saline (PBS), and fixed in 70% ethanol overnight at −20°C. After additional washing, cells were incubated with RNase A (20 µg/mL) at 37°C for 30 min, stained with propidium iodide (100 µg/mL; Sigma Aldrich) for 10 min, and analyzed with flow cytometry (BD FACSC autoTM II).
Apoptosis assay
Apoptosis was determined using the Annexin-V:FITC Apoptosis Detection Kit I (BD Biosciences) according to the manufacturer's protocol. Briefly, DMSO- or FM807-treated cells were collected via centrifugation and washed once with PBS. Cells were stained with fluorescein and propidium iodide for 15 min at room temperature and subsequently analyzed by flow cytometry.
Immunoprecipitation
Samples (500 μg of total protein) were incubated overnight with 2 μg of primary antibody at 4°C, after which 20 μL of protein A Mag Sepharose ™ (GE Healthcare, UK) were added to the mixture. The mixture was then rotated for 2 h at 4°C. The immunoprecipitated protein complexes were then washed once with lysis buffer and twice with ice-cold PBS. The resulting supernatant was discarded and the antibody/protein complexes were resuspended in 30 μL of loading buffer and boiled for 5 min. Proteins were then separated with 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis and assayed with immunoblotting.
Colorimetric determination of ATPase activity
Malachite green reagent [50,51] was prepared on the day of use and contained malachite green (0.0812%, w/v), polyvinyl alcohol (2.32%, w/v; dissolves with difficulty and requires heating), ammonium molybdate (5.72%, w/v, in 6 M HCl), and argon water, mixed in a ratio of 2:1:1:2 to give a golden yellow solution. The assay buffer consisted of 100 mM Tris-HCl, 20 mM KCl, and 6 mM MgCl2, with a pH of 7.4. The experiments were performed in 100 µL of test solution containing 80 µL of malachite green reagent. The test solutions contained 0.5 µM Hsp90 protein, 1 mM ATP, and different concentrations of FM807 or vehicle (DMSO), and were shaken for 15 min at room temperature. Absorbance values were derived from the plate reading at 635 nm.
Small interfering RNA (siRNA) gene knockdown
Cells were seeded in antibiotic-free normal growth medium supplemented with FBS. Single siRNA oligonucleotides (obtained from Santa Cruz Biotechnology) targeting human Hsp90α/β (sc-35608) and control siRNA (sc-37007) were diluted in siRNA Transfection Medium (sc-36868) and mixed with siRNA Transfection Reagent (sc-29528) according to the manufacturer's protocol. Cells were first incubated with the transfection complexes for 6 h followed by normal growth medium for 48 h. Cells were then treated with either DMSO or FM807 for 24 h before cell lysates were prepared and analyzed using western blot.
Preparation of cell lysates and cell fractions
For whole cell lysates, 1 × 10^7 cultured cells were harvested, washed twice with ice-cold PBS, and then lysed for 15 min at 4°C with 500 µL lysis buffer (10 mM Tris-HCl pH 8.0, 1 mM EDTA, 2% sodium dodecyl sulfate (SDS), 5 mM dithiothreitol (DTT), 10 mM phenylmethanesulfonyl fluoride (PMSF, Sigma Aldrich), a cocktail of protease and phosphatase inhibitors (Roche, Indianapolis, IN), and PhosSTOP (Roche Diagnostics)). Lysates were then centrifuged at 12,000 rpm for 10 min, and the supernatant was collected and stored at −70°C until later use.
For the preparation of cytoplasmic and nuclear fractions, 1 × 10^7 cultured cells were washed with PBS and suspended in 200 µL of lysis buffer (10 mM Hepes, pH 7.9; 10 mM KCl; 0.1 mM EDTA; 0.1 mM EGTA; 1 mM DTT; 0.5 mM PMSF and protease inhibitor cocktail). Cells were incubated on ice for 15 min, after which 6.5 µL of 12.5% NP-40 were added; the contents were mixed and then centrifuged for 1 min at 12,000 rpm. The supernatant was saved as the cytoplasmic fraction. The pellet was resuspended in 12.5 µL of ice-cold nuclear extraction buffer (20 mM Hepes, pH 7.9; 0.4 M NaCl; 1 mM EDTA; 1 mM EGTA; 1 mM DTT; 1 mM PMSF and protease inhibitor cocktail) and incubated on ice for 40 min with mixing every 10 min. The solution was then centrifuged at 12,000 rpm for 5 min at 4°C. The resulting supernatant was saved as the nuclear fraction. The cytosolic and nuclear fractions were stored at −70°C until later use.
Western blot analysis
Protein concentration was determined using the BCA Protein Assay Kit (Thermo Scientific) according to the manufacturer's instructions. Equal amounts of protein were separated using SDS-PAGE, transferred to PVDF membranes and blotted with specific primary antibodies. Proteins were detected via incubation with horseradish peroxidase-conjugated secondary antibodies and visualized with SuperSignal WestPico (Thermo Scientific). All Western blots were repeated at least three times to ensure replicability.
Tumor xenografts
BALB/c (nu/nu) athymic male mice, aged 4-5 weeks and weighing approximately 18-20 g, were purchased from Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences. For the xenografts, 6-mm³ tumor fragments were implanted into the subcutaneous tissue of the axillary region of mice using a trocar needle, after which mice were randomly divided into control and treatment groups (n = 6). Animals were given daily intragastric (i.g.) administration of either vehicle or FM807 (incorporated into drug solid dispersions with poloxamer 188 and further diluted in physiological saline). Tumor volumes were calculated using the ellipsoid formula V = (D × d²)/2, in which D is the large diameter of the tumor and d is the small diameter. Tumor volumes were plotted as means ± SD. All animal experiments were approved by the Animal Care and Use Committee, Fujian Medical University, China.
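A minimal sketch of the tumor-volume calculation described above (V = D × d² / 2); the example caliper measurements are illustrative only, not data from the study.

    # Sketch: ellipsoid tumor volume from caliper measurements, V = D * d^2 / 2.
    def tumor_volume(large_d_mm, small_d_mm):
        return large_d_mm * small_d_mm ** 2 / 2.0   # mm^3

    # Illustrative measurements (not data from the study):
    measurements = [(12.0, 8.0), (10.5, 7.2), (13.1, 9.0)]
    volumes = [tumor_volume(D, d) for D, d in measurements]
    mean = sum(volumes) / len(volumes)
    sd = (sum((v - mean) ** 2 for v in volumes) / (len(volumes) - 1)) ** 0.5
    print(f"mean ± SD = {mean:.1f} ± {sd:.1f} mm^3")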
Immunohistochemistry (IHC)
Briefly, paraffin sections were processed according to the following steps: dewaxing, endogenous peroxidase blockade using H 2 O 2 treatment, and serum blockade before treatment with primary antibody (1:100) at 4°C and biotin-conjugated secondary antibody (1:100) at 37°C. Visualization was performed using DAB, followed by dehydration, clearing, and tissue mounting. The number of positive targets was counted under a microscope at 400× magnification and analyzed with the HMIAS high-definition color medical image analysis system (Wuhan, China).
Statistical analysis
ANOVA was employed for comparisons across multiple groups, and data were reported as mean ± SD. Statistical analysis was performed using PASW Statistics 18 (SPSS, Inc). P < 0.05 was considered to be statistically significant.
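A minimal sketch of the one-way ANOVA comparison described above, using SciPy; the group values are placeholders, not data from this study.

    # Sketch: one-way ANOVA across treatment groups (placeholder data).
    from scipy import stats

    control   = [1.00, 0.95, 1.05, 0.98]
    low_dose  = [0.80, 0.85, 0.78, 0.82]
    high_dose = [0.55, 0.60, 0.52, 0.58]

    f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 taken as significant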
CONCLUSIONS
As a novel analogue of curcumin, FM807 inhibits NPC cell growth, and induces G1 phase arrest and necrotic/late apoptotic cell death in CNE1 and CNE2 cells in vitro. Binding of FM807 to the N-terminus of Hsp90 likely blocks formation of Hsp90/client complexes, resulting in degradation of the Hsp90 client protein EGFR and inhibition of the downstream Raf/MEK/ERK and PI3K/AKT pathways. Differences in the levels of intranuclear transcription factors such as β-catenin, Cyclin D1, and c-Myc would likely cause disparate FM807 chemosensitivities; however, FM807 depletes these intranuclear transcription factors by inhibiting the nuclear chaperone function of Hsp90. Based on these findings of tumor suppressive effects and low toxicity in xenograft mice, this study provides a preclinical basis for the further development of FM807 as a novel therapeutic agent in the treatment of NPC.
Geometric Symmetry of Dielectric Antenna Influencing Light Absorption in Quantum-Sized Metal Nanocrystals: A Comparative Study
Silica nanoparticles, optically transparent in the visible spectral region, represent a class of dielectric antenna to tune the propagation and local field distribution of the visible light through surface scattering while the energy loss is minimized. The light scattering on the surface of silica nanoparticles includes resonant scattering and random scattering that strongly depend on their geometry: spherical silica nanoparticles with the highest geometrical symmetry favor the light scattering resonances on the nanoparticle surfaces to promote resonant scattering while non-spherical silica nanoparticles mainly support random scattering. Both resonant scattering and random scattering of light on the silica nanoparticles are capable of enhancing the light absorption in quantum-sized metal nanocrystals attached to the surfaces of the silica nanoparticles. The contributions of resonant scattering and random scattering to the enhancement of light absorption have been compared and discussed. The understanding highlights the importance of the geometry of the silica nanoparticle antenna on the design and synthesis of composite materials for efficient light harvesting.
INTRODUCTION
A dielectric antenna consisting of a block of ceramic material of varying shapes interacts with electromagnetic waves while losing much less energy than its metal counterparts, resulting in an efficient modulation of the incident waves (Ashkin and Dziedzic, 1981;Ausman and Schatz, 2008;Matheu et al., 2008;Anderson, 2010;Grandidier et al., 2013;Yin et al., 2013;Kuznetsov et al., 2016;Huang et al., 2018). For instance, a dielectric resonator antenna with an appropriate geometry can allow the incident electromagnetic wave to bounce back and forth against the antenna surface, supporting scattering resonances to form new standing waves near the antenna surface, behaving as resonant scattering. The new surface standing waves can possibly radiate and propagate into space if the antenna surface is leaky. In contrast, an incident electromagnetic wave can also (elastically) scatter away from the antenna surface into space regardless of the geometry of the antenna, behaving as random scattering. Therefore, a dielectric antenna supporting different scattering modes leads to a difference in influencing the absorption spectrum of a material that can absorb the incident electromagnetic energy when it is placed near the antenna. The random scattering usually does not alter the absorption spectrum profile of the energy-absorbing material while the resonant scattering does. A typical example extensively reported in literature is the use of silica particles with sizes of several hundreds of nanometers and larger as a class of dielectric antenna to improve the capability of light absorption in active materials of light-harvesting devices such as solar cells (Matheu et al., 2008;Matsko, 2009;Grandidier et al., 2011;Gupta et al., 2018). Since silica is transparent in the visible spectral region and silica nanoparticles do not absorb visible light, the enhanced light absorption in active materials is attributed to the strong light scattering (both resonant scattering and random scattering) on the surfaces of the silica nanoparticles. When the silica nanoparticles are mixed with the light-absorbing active materials to form a composite in a given volume, the light scattering from the surface of the silica nanoparticles elongates the light traveling path to benefit light absorption in the active materials (Matheu et al., 2008;Kumari and Narayana, 2012;Son et al., 2013;Ullah et al., 2015;Rahman et al., 2016). In addition to the light scattered away from the surface of the silica nanoparticles, the light scattering resonances on the surface of the silica nanoparticles also influence the optical response of active materials attached to the silica nanoparticles (Ausman and Schatz, 2008;Grandidier et al., 2011;Liu et al., 2015). The surface scattering resonances usually create electrical fields much stronger than the incident light near the surface of the silica nanoparticles, significantly enhancing the light absorption of active materials attached to the surface of the silica nanoparticles (Zhang et al., 2016;Codrington et al., 2017;Rasamani et al., 2017, 2018;Dai et al., 2018b). Herein, we study the influence of geometry of silica nanoparticles on their light scattering behavior as well as the corresponding enhanced light absorption in quantum-sized metal nanocrystals attached to the silica nanoparticles. Using the quantum-sized metal nanocrystals as the unique probe, the contributions of resonant scattering and random scattering to the enhanced light absorption are compared.
The enhanced light absorption in quantum-sized metal nanocrystals, for example, the nanocrystal catalysts made of platinum group metals (PGMs), is beneficial for more efficiently exciting hot electrons in the quantum-sized metal nanocrystals to drive chemical transformations as discussed elsewhere (Wei et al., 2018).
Synthesis of Silica Nanospheres (SiO x NSs)
Silica nanospheres (SiO x NSs) were prepared through a sol-gel process relying on a controlled hydrolysis and condensation of tetraethyl orthosilicate (TEOS, 98%, Sigma-Aldrich) (Green et al., 2003). An appropriate amount (1.7 mL) of TEOS was added to a solution containing 29.1 mL of absolute ethanol (Pharmco-Aaper), 3.21 mL of deionized (DI) water, and 1.96 mL of ammonium hydroxide (28-30 wt.% in water, Fisher Scientific). The reaction proceeded for 2 h at a stirring rate of 600 rpm to complete the growth of SiO x NSs. Changing the amount of water and TEOS could tune the size of the synthesized SiO x NSs. The resultant SiO x NSs were collected through two cycles of centrifugation and washing with ethanol, and then dried overnight in an oven set at 60 °C.
Synthesis of Truncated Silica Nanospheres (t-SiO x NSs) and Ellipsoidal Silica Nanoparticles (e-SiO x NPs)
The t-SiO x NSs and e-SiO x NPs were synthesized by following the procedure reported in literature with slight modifications (Rahmani et al., 2017). A desirable amount (50-300 mg) of cetyl trimethylammonium bromide (CTAB, 99.5%, Chem-Impex Int'l Inc.) was added to a mixed solvent containing 50 mL of deionized water and 5 mL of ethanol. The temperature of the solution was maintained at 50 °C and the solution was stirred at 800 rpm for 30 min. To this solution was added 350 µL of NaOH aqueous solution (3 M, Fisher Scientific) to dissolve CTAB completely. While the temperature and magnetic stirring were maintained, adding 575 µL of TEOS to the CTAB solution initiated the nucleation and growth of silica nanoparticles. Continuous reaction for 2 h completed the synthesis of silica nanoparticles. The geometry of the silica nanoparticles was determined by the concentration of CTAB, i.e., 100 mg of CTAB for t-SiO x NSs and 200 mg of CTAB for e-SiO x NPs. The resulting SiO x nanoparticles were collected through multiple cycles of centrifugation (at 6,000 rpm for 10 min) and washing with ethanol. The collected particles were dried overnight in an oven set at 60 °C.
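The CTAB concentrations quoted later in the text (about 4.9 and 9.8 mM) follow directly from the masses and solvent volume above; a minimal sketch of that conversion, taking the molar mass of CTAB as 364.45 g/mol and approximating the total volume as the 55 mL of mixed solvent (both are assumptions on my part):

    # Sketch: CTAB molarity from the mass added to ~55 mL of water/ethanol.
    CTAB_MW = 364.45          # g/mol
    volume_l = 0.055          # 50 mL water + 5 mL ethanol, ignoring the small NaOH addition

    for mass_mg in (100, 200, 300):
        molarity_mm = mass_mg / 1000 / CTAB_MW / volume_l * 1000
        print(f"{mass_mg} mg CTAB -> {molarity_mm:.1f} mM")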
Synthesis of Rodlike Silica Nanoparticles (r-SiO x NPs)
Rodlike silica nanoparticles (r-SiO x NPs) with different aspect ratios were synthesized and purified based on the method reported in previous studies (Kuijk et al., 2011;Murphy et al., 2016). In a typical synthesis of r-SiO x NPs with an aspect ratio of 0.8, 30 g of polyvinylpyrrolidone (PVP, molecular weight ∼55,000, Sigma-Aldrich) was first dissolved in 30 mL of 1-pentanol (99%, Acros Organics) with assistance of sonication in a 50-mL centrifuge tube. To this PVP solution was sequentially added 0.5 mL of absolute ethanol, 1.2 mL of DI water, and 0.3 mL of 0.18 M aqueous sodium citrate dihydrate (Fisher Scientific). The solution was homogenized via vortex for 30 s. 0.4 mL of ammonium hydroxide was then added to the solution followed by vortex for 30 s. To the solution was finally added 0.3 mL of TEOS. The solution was then vortexed for 1 min and maintained still for 1.5 h, forming r-SiO x NPs with the aspect ratio of 0.8. The length and radius (thus the aspect ratio) of r-SiO x NPs could be tuned by changing the amount of ethanol, water, sodium citrate, ammonium hydroxide, TEOS, and reaction time. The synthesized r-SiO x NPs were collected by three cycles of centrifugation (at 6,000 rpm for 20 min) and washing with ethanol and water. The rods were then re-dispersed in ethanol with assistance of sonication for 2 h. The dispersion was centrifuged at 800 rpm to remove larger rods in the sediment, leaving nearly mono-dispersed r-SiO x NPs in the supernatant.
Synthesis of Quantum-Sized Pt and Rh Nanocrystals
Colloidal Pt nanocrystals were synthesized through reduction of hexachloroplatinate anions in an aqueous solution at ambient condition. In a typical synthesis, 26 mL of aqueous solution of trisodium citrate (4 mM, Alfa Aesar) was added to 50 mL of aqueous solution of hexachloroplatinic acid hexahydrate (0.4 mM, Acros Organics) while the solution was stirred at 800 rpm. To this solution was added 5 mL of aqueous solution of 4 mM sodium borohydride (98%, Sigma-Aldrich) slowly, triggering the reduction of hexachloroplatinate anions to immediately turn the solution from yellow to brown. The reaction lasted 2 h under a continuous stirring, forming a black dispersion containing Pt nanocrystals.
Colloidal Rh nanocrystals were synthesized through a polyol process involving sequential reactions at two different temperatures (Biacchi and Schaak, 2015). In a typical synthesis, 0.238 g of potassium bromide (KBr, Alfa Aesar), 0.088 g of PVP, and 0.024 g of sodium hexachlororhodate (III) dodecahydrate (Na 3 RhCl 6 ·12 H 2 O, Alfa Aesar) were mixed with 7 mL of ethylene glycol (EG, Fisher Scientific) in a 20-mL glass vial. Warming the solution at 40 °C for 1 h dissolved the reagent powders completely. The solution was then heated up to 90 °C and this temperature was maintained for 15 min to initiate nucleation. The reaction was heated to 150 °C and maintained for 1 h, facilitating the growth of colloidal Rh nanocrystals. The resulting dispersion of Rh nanocrystals was mixed with 3 mL of acetone/water (9/1 in V/V), followed by centrifugation at 13,400 rpm for 10 min. The settled powders were then redispersed with 3 mL of acetone/water (9/1 in V/V). Repeating the centrifugation/re-dispersion cycles five times removed PVP and ions from the dispersion of the Rh nanocrystals.
Functionalization of Silica (SiO x ) Nanoparticles
The synthesized SiO x nanoparticles were functionalized with (3-aminopropyl)triethoxysilane (APTES, 98%, Acros Organics) to introduce positively charged surfaces. An ethanolic dispersion of 2 mg mL−1 of silica nanoparticles was first prepared with assistance of ultrasonication. To 10 mL of the silica nanoparticle dispersion was dropwise added 0.1 mL of APTES while the temperature of the dispersion was maintained at 60 °C. It took 30 s to complete the addition of APTES. The dispersion was continuously stirred for 8 h, leading to the conjugation of APTES to the surface of the silica nanoparticles. The functionalized silica nanoparticles were collected through centrifugation and washing with ethanol, followed by drying in an oven set at 60 °C. As for r-SiO x NPs, the corresponding powders were calcined at 500 °C for 3 h to burn off PVP from the r-SiO x NPs. An appropriate amount of the calcined r-SiO x NPs was then added to ethanol, forming a dispersion with a silica concentration of 2 mg mL−1. To 12 mL of the r-SiO x NP dispersion was added 7 mL of aqueous solution of hydrochloric acid (37 wt.%, Fisher Scientific). The dispersion was constantly stirred overnight at room temperature. The pre-treated r-SiO x NPs were then collected by centrifugation and washing with ethanol for further surface modification with APTES.
The APTES-modified silica nanoparticles exhibited positively charged surfaces, to which metal nanocrystals with negatively charged surfaces could be attached through strong electrostatic attractions (Zhang et al., 2016; Rasamani et al., 2017, 2018). In a process of loading 1 wt.% Pt nanocrystals onto the silica NPs, 4.21 mL of the as-synthesized Pt nanocrystal solution was slowly added to 10 mL aqueous dispersion of the silica nanoparticles with a concentration of 2 mg mL−1. Constantly stirring the dispersion of the two types of nanoparticles at 600 rpm for 15 min resulted in the attachment of the Pt nanocrystals to the surface of the silica nanoparticles, forming composite SiO x -NP/Pt particles. The obtained SiO x -NP/Pt particles were collected via centrifugation for 10 min at 6,000 rpm, followed by drying in an oven set at 60 °C for 2 h. The same method was also used to load the synthesized Rh nanocrystals onto the APTES-modified silica nanoparticles. The loading of metal nanocrystals was tuned by controlling the amount of metal nanocrystals and silica nanoparticles used in the synthesis.
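The volume of Pt colloid used for a given loading follows from a simple mass balance; the sketch below reproduces the 1 wt.% example under the stated concentrations, treating the loading as Pt mass relative to silica mass and assuming complete reduction of the Pt precursor (both are interpretive assumptions rather than statements from the protocol).

    # Sketch: volume of Pt colloid needed for a target wt.% loading on silica.
    # Assumes all Pt(IV) precursor is reduced to Pt(0) and loading = Pt mass / silica mass.
    M_PT = 195.08  # g/mol

    def pt_colloid_volume_ml(target_wt_frac, silica_mg, pt_conc_mg_per_ml):
        pt_needed_mg = target_wt_frac * silica_mg
        return pt_needed_mg / pt_conc_mg_per_ml

    # Colloid from the synthesis: 0.4 mM H2PtCl6 in 50 mL, diluted to ~81 mL total.
    pt_mg = 0.4e-3 * 0.050 * M_PT * 1000        # ~3.9 mg Pt in the batch
    pt_conc = pt_mg / 81.0                       # ~0.048 mg/mL
    silica_mg = 10 * 2.0                         # 10 mL at 2 mg/mL
    print(pt_colloid_volume_ml(0.01, silica_mg, pt_conc))   # ~4.2 mL, close to the 4.21 mL used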
MATERIALS CHARACTERIZATION
Transmission electron microscopy (TEM) images of metal nanocrystals were recorded on a microscope (JEOL JEM-1400). Scanning electron microscopy (SEM) images of silica nanoparticles and silica/metal composite particles were acquired with a field-emission microscope (FEI Quanta FEG 450) that was operated at an acceleration voltage of 20 kV. Inductively coupled plasma optical emission spectroscopy (ICP-OES, Thermo Scientific 7000 Series) was used to determine the loading contents of metal nanocrystals on the silica/metal composite particles. Diffuse reflectance spectroscopy (DRS) was performed on an ultraviolet-visible (UV-vis) spectrophotometer (Thermo Scientific, Evolution 220) equipped with an integrating sphere.
RESULTS AND DISCUSSION
Silica nanoparticles with large enough sizes can always produce random scattering in the visible spectral region regardless of their morphology, but generating resonant scattering strongly depends on their morphology. For instance, theoretical modeling and calculations have shown that spherical SiO 2 particles of several microns supports Fabry-Perot or Whispering Gallery resonances, forming electrical fields near the SiO 2 surfaces much stronger than that of the incident light (Ausman and Schatz, 2008). Varying the size of the SiO 2 particles tunes the resonant wavelengths and the enhancement of electrical fields near the particle surfaces. Decreasing the size of silica spheres down to the sub-micrometer scale still supports surface scattering resonances despite the broadness of the resonance peaks. The light scattering resonances on the surfaces of silica nanospheres (SiO x NSs) significantly enhance the optical absorbance and change the spectral profile of absorption in quantum-sized Pt nanocrystals (with size <10 nm) that are attached to the surface of the SiO x NSs (Zhang et al., 2016). An aqueous dispersion of well-dispersed Pt nanocrystals with an average size of 3.2 nm exhibits a brown color and a peakless absorption spectrum in the range of 300-800 nm (Figure 1A). The Pt nanocrystals have negatively charged surfaces, which allows them to attach to the surfaces of positively charged SiO x NSs through the strong electrostatic attraction. This hybridization process results in a uniform distribution of the Pt nanocrystals on the SiO x NSs, forming SiO x -NS/Pt composite particles ( Figure S1). A powder of the SiO x -NS/Pt particles is greenish (inset, Figure 1B), corresponding to the strong light absorption around 530 nm. Figure 1B compares the SiO x -NS/Pt particles and the SiO x NSs with regard to the DRS spectra, which exclude the contribution of scattering and are only sensitive to the absorption. The negligible signal of the SiO x NSs indicates that the absorption signal of the SiO x -NS/Pt particles originates only from the Pt nanocrystals. Different from the freestanding Pt nanocrystals dispersed in an aqueous solution, the SiO x -NS/Pt particles exhibit well-defined peaks, which are consistent with the light scattering resonances on the surface of the SiO x NSs with the highest geometric symmetry (R 3 ) (Zhang et al., 2016). The DRS spectrum of the SiO x -NS/Pt particles also exhibits a nonzero baseline, more likely independent of the wavelength, which agrees with the feature of random scattering. The comparisons indicate that both resonant scattering and random scattering of the dielectric SiO x NSs are responsible for enhancing the light absorption in the quantum-sized Pt nanocrystals. The respective contribution of resonant scattering and random scattering to the enhanced light absorption is not distinguished.
Theoretical modeling and calculations indicate that the spherical geometry of dielectric particles favors resonant scattering and reducing the geometrical symmetry of the dielectric particles weakens resonant scattering (Gupta et al., 2018). Figure 2A highlights that the geometrical symmetry of silica nanoparticles can be lowered by tuning the concentration of CTAB in the sol-gel reaction as described elsewhere (Figure S2) (Hao et al., 2015;Rahmani et al., 2017). Highly symmetrical SiO x NSs are formed by using the modified Stöber method (Figure S1). Adopting a different synthetic procedure with addition of CTAB yields SiO x with varied morphologies. Addition of a low concentration (4.9 mM) of CTAB to the reaction solution slightly truncates the two poles of SiO x NSs, forming silica nanoparticles (labeled as t-SiO x NSs shown in Figure 2B) with a lower geometric symmetry than the SiO x NSs. Adding more CTAB further truncates the silica nanoparticles and silica nanoparticles with an ellipsoidal shape (labeled as e-SiO x NPs, Figure 2C) are formed with an extremely high concentration (9.8 mM) of CTAB. Same as the SiO x NSs, both the t-SiO x NSs and e-SiO x NPs also exhibit negligible absorption signal in the corresponding DRS spectra (purple and pink curves, Figure 2D). In contrast, the Pt nanocrystals on the t-SiO x NSs and e-SiO x NPs exhibit strong light absorption (red and blue curves, Figure 2D), indicating that both the t-SiO x NSs and e-SiO x NPs are still capable of enhancing light absorption of the Pt nanocrystals due to light scattering of the silica nanoparticles. The corresponding DRS spectra become essentially peakless, implying the absence of resonant light scattering on the t-SiO x NSs and e-SiO x NPs with lowered geometric symmetry of D ∞h . The spectral difference of the Pt nanocrystals on the differently shaped silica nanoparticles highlights the importance of geometric symmetry of the silica nanoparticles on determining their light scattering mode. Resonant scattering, which is responsible for the appearance of well-defined intense absorption peaks in the DSR spectra, is very sensitive to the geometric symmetry of silica nanoparticles. Only the SiO x NSs with the highest geometrical symmetry of R 3 supports strong surface scattering resonances. Lowering the geometric symmetry of the silica nanoparticles drastically suppresses the resonant scattering while random scattering is barely influenced (Figure 2D). The difference in the DRS intensity shows that the resonant scattering and random scattering of SiO x NSs make approximately equal contributions to enhance the light absorption in the Pt nanocrystals at the resonance frequencies. At the non-resonance frequencies, the random scattering dominates the enhancement of light absorption in the Pt nanocrystals.
Light scattering on the SiO x NSs can enhance optical absorption in any nanoparticles attached to the SiO x NSs (Zhang et al., 2016; Rasamani et al., 2017, 2018; Dai et al., 2018b). For example, quantum-sized Rh nanocrystals with sizes of ∼3 nm exhibit a peakless absorption spectrum while strong light absorption with well-defined peaks is observed from the Rh nanocrystals loaded to the SiO x NSs (with an average diameter of 356 nm, Figure S3). To further verify the importance of spherical symmetry of the silica nanoparticles on resonant scattering, another set of silica nanoparticles with reduced geometrical symmetry are synthesized by controlling the aspect ratio of the rodlike particles. The silica nanoparticles start nucleation at nanosized water-rich emulsion droplets followed by anisotropic growth along the direction away from emulsion droplets, forming rodlike nanoparticles with one flat end and one rounded end (Figure 3A, Figure S4). These rodlike silica nanoparticles (r-SiO x NPs) exhibit a geometric symmetry of C ∞V and their geometric aspect ratio is determined by the lateral dimensions shown in Figure 3A. Similar to the SiO x NSs, these r-SiO x NPs are also feasible to attract the quantum-sized Rh nanocrystals to their surfaces, forming SiO x /Rh composites. Figures 3B,C show the SEM images of bare silica and composite samples (insets) formed from the r-SiO x NPs with aspect ratios of 0.8 and 1.4, respectively. The corresponding powders of these composite particles exhibit strong optical absorption although their DRS spectra are different from that of the SiO x -NS/Rh composite particles (Figure 3D). The spectral difference shows that the intensity of the absorption peaks decreases as the aspect ratio of the r-SiO x NPs deviates from unity (i.e., 1), which corresponds to the aspect ratio of the SiO x NSs. This relationship can be quantitatively compared by integrating these DRS spectra in the range of 300-800 nm, and the integrated values are shown in Figure S5 as a function of aspect ratio of the supporting SiO x NPs. The volcano shape with a maximum at the aspect ratio of 1 (corresponding to spherical SiO x NPs) again highlights that the SiO x NSs are more effective in enhancing light absorption in the Rh nanocrystals than the rodlike silica nanoparticles regardless of the aspect ratios >1 or <1. A larger deviation of the aspect ratio weakens the absorption peaks more, further confirming that high geometric symmetry of the SiO x NSs is crucial to support strong resonant scattering on the dielectric silica nanoparticles. Regardless of the aspect ratio of the silica nanoparticles, the featureless baselines of these DRS spectra remain essentially consistent, indicating that the random scattering of the silica nanoparticles makes an approximately constant contribution to enhance the light absorption in the Rh nanocrystals. At the resonance frequencies, the light absorption in the Rh nanocrystals enhanced by the resonant scattering of the SiO x NSs is comparable to the absorption enhanced by the random scattering. The light absorption is dominated by the enhancement originating from the random scattering at the non-resonance frequencies. The DRS spectra shown in Figures 2D, 3D consistently highlight that both resonant scattering and random scattering of the silica nanoparticles are capable of enhancing the light absorption in the quantum-sized metal nanocrystals attached to the silica nanoparticles. The occurrence of resonant scattering strongly depends on the geometric symmetry of the silica nanoparticles. The SiO x NSs with the highest geometric symmetry support the strongest resonant scattering while the resonant scattering weakens with decrease of their geometric symmetry. In contrast, the random scattering is independent of the geometry of the silica nanoparticles.
FIGURE 4 | (A) DRS spectra of aqueous dispersions of well-dispersed Rh nanocrystals (green dotted) and SiO x /Rh composite particles formed from SiO x NSs (black), r-0.8 SiO x NPs (blue), and r-1.4 SiO x NPs (red). The intensity of these DRS spectra were normalized against the mass concentration of Rh nanocrystals. (B) Calculated enhancement spectra for the SiO x NSs (black), r-0.8 SiO x NPs (blue), and r-1.4 SiO x NPs (red).
FIGURE 5 | Schematic illustration highlighting the difference of light scattering on silica nanoparticles with different geometric symmetries. (A) A SiO x NS with the highest geometric symmetry of R 3 supports both strong resonant scattering, which generates new standing waves with stronger electric fields near the surface of the SiO x NS, and random scattering, which scatters the incident light off the surface of the SiO x NS in directions different from its original propagation direction. (B) A SiO x NP with a non-spherical shape, corresponding to a geometric symmetry lower than R 3, merely supports random scattering. The influence of geometry of SiO x NPs on the light scattering mode can be transferred to the enhanced optical absorption of quantum-sized metal nanocrystals attached to the SiO x NPs as presented in Figures 1-4. The red and blue colors denote high and low electric field intensity, respectively.
The light scattering efficiency of the dielectric silica nanoparticles also relies on the refractive index (n) of the surrounding environment (Mishchenko et al., 2002;Zhu et al., 2010). A large difference of refractive indexes between the surround medium and the silica nanoparticles (n silica ≈1.4-1.45 for the sol-gel silica) promotes light scattering. Therefore, the silica nanoparticles dispersed in water (n water = 1.33 at room temperature) exhibit a much lower light scattering efficiency compared to the dry powder of the silica nanoparticles in air (n air = 1). The variation of refractive index of the surround medium also influences the scattering resonance frequencies of the SiO x NSs. Figure 4A compares the DRS spectra of aqueous dispersions of freestanding Rh nanocrystals and SiO x /Rh composite particles formed with varying silica nanoparticles, which exhibit profiles different from the DRS spectra of the corresponding SiO x /Rh dry powders ( Figure 3D). However, the light absorption in the Rh nanocrystals is always enhanced when the Rh nanocrystals are attached to the silica nanoparticles. The good dispersion of the Rh nanocrystals in water and on the surfaces of the silica nanoparticles ensures that the measured DRS signals represent the accumulation of optical absorption of individual Rh nanocrystals with exclusion of possible interparticular coupling between the adjacent Rh nanocrystals. As a result, the enhancement spectrum, i.e., enhancement factor as a function of wavelength, enabled by the silica nanoparticles can be calculated by dividing the DRS spectrum of a dispersion of the corresponding SiO x /Rh composite particles against the DRS spectrum of the dispersion of the freestanding Rh nanocrystals (green dotted curve, Figure 4A). Figure 4B presents the calculated enhancement spectra enabled by the SiO x NSs and the r-SiO x NPs with aspect ratios of 0.8 and 1.4, showing that the maximum enhancement factor can reach ∼12 around 750 nm for the SiO x NSs. In the visible spectral region (i.e., at λ > 450 nm), the enhancement factor is always higher than 7 for the SiO x NSs although it is lower for the r-SiO x NPs. Since the light scattering efficiency of the dry SiO x NSs in air is much higher than that of the wet SiO x NSs in solvents (e.g., ethanol, water, etc.), the optical absorption power of the Rh nanocrystals attached to the SiO x NSs could be enhanced by a higher factor when SiO x -NS/Rh composite particles are used in gas atmospheres. In order to achieve the maximum scattering-enhanced light absorption in quantum-sized metal nanoparticles, the synthesis of SiO x NSs has to be carefully controlled to ensure the refractive index of the SiO x NSs to reach the possible maximum value (Li et al., 2014;Zulfiqar et al., 2016;Dai et al., 2018a).
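The enhancement spectrum described above is simply the point-by-point ratio of the composite DRS spectrum to that of the freestanding nanocrystals (both normalized to the same Rh mass concentration); a minimal sketch follows, in which the two-column data files and the interpolation onto a common wavelength grid are assumptions rather than details from the paper.

    # Sketch: wavelength-dependent enhancement factor from two DRS spectra.
    import numpy as np

    def enhancement_spectrum(wl_comp, drs_comp, wl_free, drs_free):
        # Interpolate the freestanding-nanocrystal spectrum onto the composite grid.
        drs_free_interp = np.interp(wl_comp, wl_free, drs_free)
        return drs_comp / drs_free_interp

    def integrated_drs(wl, signal, lo=300.0, hi=800.0):
        # Integrated DRS intensity over 300-800 nm, as used for the aspect-ratio comparison.
        mask = (wl >= lo) & (wl <= hi)
        return np.trapz(signal[mask], wl[mask])

    # Hypothetical file names; both spectra assumed normalized to the same Rh concentration.
    wl_c, drs_c = np.loadtxt("SiOxNS_Rh_dispersion.txt", unpack=True)
    wl_f, drs_f = np.loadtxt("free_Rh_dispersion.txt", unpack=True)
    ef = enhancement_spectrum(wl_c, drs_c, wl_f, drs_f)
    print("max enhancement:", ef.max(), "at", wl_c[ef.argmax()], "nm")
    print("integrated DRS (300-800 nm):", integrated_drs(wl_c, drs_c))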
CONCLUSION
In conclusion, silica nanoparticles with lateral dimensions of hundreds of nanometers and larger represent a class of dielectric antenna that does not absorb visible light, exhibiting strong light scattering in the visible spectral region with a minimum energy loss. Regardless of the geometry of the silica nanoparticle antenna, random scattering of the incident light is always observed while the wave function is not altered. When the geometry of the silica nanoparticles exhibits a high enough symmetry, the nanoparticle antenna can also support surface scattering resonances that generate new standing waves with wave functions different from the incident light. For example, light scattering resonances on the surface of SiO x NSs with the highest geometric symmetry result in the formation surface "hot spots, " at which the new electromagnetic waves exhibit higher power density than the incident light. Figure 5 schematically highlights the dependence of scattering mode on the geometry of the silica nanoparticles. The highly symmetric SiO x NSs support both resonant scattering and random scattering while the silica nanoparticles with low geometric symmetry (e.g., r-SiO x NPs with large aspect ratios shown in Figure S4) support merely random scattering. When light-absorbing species are attached to the surface of the silica nanoparticles, their optical absorption power can be enhanced by both random scattering and resonant scattering. The DRS spectra of the SiO x /Pt and SiO x /Rh composite particles shown in Figures 2D, 3D represent the typical examples highlighting that the optical absorption of quantum-sized metal nanocrystals is significantly enhanced by light scattering (including both random scattering and resonant scattering) on the silica nanoparticles. The random scattering of light from the silica nanoparticles propagates in directions different from that of the incident light, elongating the propagation pathway of light in a material block made of silica/metal composite particles. The longer light pathway allows more incident light to be absorbed by the metal nanocrystals attached on the silica nanoparticles, resulting in an enhancement of overall light absorption. It is worth pointing out that the random scattering does not change the spatial power density of the scattered light. In contrast, resonant scattering of the incident light creates "hot spots" on the surface of SiO x NSs to significantly increase the local electrical fields (corresponding to power density), which enhance the optical (energy) absorption in the metal nanocrystals attached to the surface of the SiO x NSs. The mechanistic difference in enhancing light absorption in quantum-sized metal nanocrystals for random scattering and resonant scattering might be more influential toward non-linear optical properties, for example, photo-excited hot electron generation in the quantum-sized metal nanocrystals (Wei et al., 2018). The understanding sheds light on designing composite materials with the dielectric silica nanoparticle antenna to promote performance of applications.
AUTHOR CONTRIBUTIONS
YS, XD, and KR: experimental design, data analysis and interpretation, manuscript writing, and manuscript revision; XD, KR, GH, and RM: material synthesis and characterizations, data acquisition.
ACKNOWLEDGMENTS
This work is supported by startup funds from Temple University.
Efficacy of stromal vascular fraction in the treatment of scars: A systematic review and meta‐analysis
Abstract Objective Our aim was to assess the effectiveness of stromal vascular fraction (SVF) in treating scars using the latest meta‐analysis. Methods We used PubMed, Embase, Cochrane, and Web of Science to search the studies used to evaluate the efficacy of SVF in scar treatment. At least one of the following outcome measures were reported: vascularity, pigmentation, thickness, relief, pliability, surface area, pain, itching and color. Results A total of four eligible articles comprising 145 patients (64 SVF patients and 81 non‐SVF patients) were included. The findings of this meta‐analysis indicated that SVF had significant therapeutic effects in terms of vascularity (SMD/MD, 95% CI: ‐1.12, ‐0.02; p = 0.04), itching (SMD/MD, 95% CI: ‐0.61, ‐0.13; p = 0.002), POSAS (SMD/MD, 95% CI: ‐5.93, ‐1.47; p = 0.001), and thickness (SMD/MD, 95% CI: ‐1.04, ‐0.35; p < 0.001). In terms of OSAS (SMD/MD, 95% CI: ‐9.14, 0.59; p = 0.09), pigmentation (SMD/MD, 95% CI: ‐1.02, 0.06; p = 0.08), relief (SMD/MD, 95% CI: ‐1.14, 0.16; p = 0.14), surface area (SMD/MD, 95% CI: ‐0.91, 0.26; p = 0.27), PSAS (SMD/MD, 95% CI: ‐7.20, 0.49; p = 0.09), pain (SMD/MD, 95% CI: ‐0.87, 0.07; p = 0.10), pliability (SMD/MD, 95% CI: ‐0.57, 0.01; p = 0.06), and color (SMD/MD, 95% CI: ‐1.78, 0.48; p = 0.26), there were no significant statistical differences. Conclusion In view of the heterogeneity and potential selective bias, further large‐scale, prospective, and multicenter clinical trials are needed to confirm the efficacy and reliability of SVF in the treatment of scars.
the high recurrence rate, which can be as high as 70% for pathological scar surgical revision.7,8 [10,11] SVF is a cell extract composed of heterogeneous cell groups. Its regeneration stems from the potential differentiation and paracrine action of adipose-derived stem cells (ADSCs) and various growth factors like vascular endothelial growth factor.12 [14,15] At present, research is being carried out to determine the mechanism that plays a role at the cellular level. Epidermal growth factor (EGF) plays a pivotal role in skin wound healing and epithelialization, making it one of the most critical growth factors in this process. EGF induces the migration and growth of keratinocytes and fibroblasts, and accelerates wound healing. Moreover, the expressions of wound healing genes such as vascular endothelial growth factor A, epidermal growth factor, fibroblast growth factor-2, and connective tissue growth factor precursor in SVF are higher than those in adipose-derived mesenchymal stem cells. Under the action of these factors, SVF implantation greatly accelerated wound closure and increased cellularization and re-epithelization.16 Besides its acknowledged importance in regenerative medicine, SVF has quickly gained attention in the aesthetics field.18-21 These studies confirmed that SVF improved the scores of the patient and observer scar assessment scale (POSAS), especially the Patient Scale of the POSAS, skin quality, pigmentation, and flexibility. Many studies have shown that this method is effective for managing scars in a large number of patients.22,23 Previous studies show that the application of SVF can promote tissue healing and maintain the volume of fat transplantation. Li M et al. used SVF-gel prepared by physical methods for the first time to fill and improve mature striae distensae (SD). The color and depth of the SD were alleviated, and a relatively satisfactory clinical effect was achieved.24 However, for the specific efficacy of SVF in scar treatment, the evidence is currently limited to clinical research, and there is no evidence-based medicine evidence. Therefore, this study finally included four original studies published in 2020-2023 on the treatment of scars with SVF, and performed a meta-analysis to provide further evidence for the treatment of scars with SVF.
Inclusion criteria
The inclusion criteria included (1)
Data extraction
From the included studies, we extracted the following data: first author, study period, study country, retrospective or prospective, intervention measures, control subjects, number and age of people who used and did not use SVF, and follow-up period.[27]
Literature quality evaluation
Among the four articles included in this paper, three were prospective randomized controlled studies and one was a retrospective case-control study. The NOS scale was used to evaluate the quality of observational case-control studies, with a total score of 9 points. A study with a final score of ≥ 7 points was considered high-quality. The quality assessment of eligible RCTs was performed in accordance with the Cochrane Handbook for Systematic Reviews of Interventions 5.1.0.
This assessment was based on seven criteria: random sequence generation, allocation concealment, blinding of outcome assessment, blinding of participants and personnel, selective reporting, incomplete outcome data, and other potential sources of bias.28 Each aspect of the studies was evaluated and categorized into three outcomes: low risk, high risk, or unclear risk. Studies with a low risk of bias were considered to be of higher quality. Two researchers independently evaluated the quality and evidence level of qualified studies and resolved any differences through discussion.
FIGURE 1 Document screening process and results.
Statistical analysis
Review Manager 5.4 was used for meta-analysis and forest plotting, and all indicators were reported with 95% confidence intervals (CI). A p value less than 0.05 or I² greater than 50% indicated the presence of heterogeneity, in which case the random effects model was selected for meta-analysis; otherwise, the hypothesis of heterogeneity was rejected, there was homogeneity among the included studies, and the fixed effects model was selected. Publication bias was evaluated by performing Egger's test in Stata 15.1. A p value less than 0.05 indicated the presence of a statistical publication bias.
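To illustrate the model-selection rule above, the sketch below computes an inverse-variance pooled mean difference and I² for a set of studies and switches to a random-effects (DerSimonian-Laird) estimate when I² exceeds 50%; the per-study values are placeholders, not the data analyzed in this review.

    # Sketch: fixed- vs random-effects pooling with an I^2-based switch (placeholder data).
    import numpy as np

    md = np.array([-1.2, -0.4, -0.9])    # per-study mean differences (placeholders)
    se = np.array([0.35, 0.30, 0.40])    # their standard errors (placeholders)

    w = 1 / se**2                        # fixed-effect (inverse-variance) weights
    md_fixed = np.sum(w * md) / np.sum(w)
    Q = np.sum(w * (md - md_fixed)**2)
    df = len(md) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

    if I2 > 50:                          # heterogeneity present -> random effects
        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1 / (se**2 + tau2)
        pooled = np.sum(w_re * md) / np.sum(w_re)
    else:
        pooled = md_fixed
    print(f"I^2 = {I2:.0f}%, pooled MD = {pooled:.2f}")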
Literature search results
According to the search terms, 302 related articles were retrieved in PubMed (n = 65), Embase (n = 44), Cochrane (n = 14), and Web of Science (n = 179). The articles were screened according to the pre-established inclusion and exclusion criteria. The literature screening process and results are shown in Figure 1. Finally, four articles were included. Among these articles, there were three prospective studies and one retrospective study. Of the four included articles, the studies by Behrangi 29 and Kwon 30 were retrieved from Cochrane; the study by Tae Koo Tee 12 was retrieved from Web of Science; the study by Moon 28 was retrieved from Embase.
Literature quality evaluation
The basic information on the included studies is shown in Table 1.
The four studies included a total of 145 patients with scars. Their age ranged from 18 to 72.9 years. The follow-up time varied from 3 to 12 months. The quality of three prospective 12,29,30 RCTs was assessed using the RCT scale (Table S1). One study 12 on 32 scar patients did not specify how the patients were randomly included, and was rated as having an unclear risk of bias. The remaining items in this article and the other two articles were all at a low risk of bias (Figures 2 and 3).
The quality of the included retrospective case-control study 28 was evaluated using the NOS scale, with a score of 8 (Table S2).
Observer scale of the POSAS
A total of two articles 28,30 reported the data on Observer Scale of the POSAS (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 92%, and a random effects model was applied. The meta-analysis results indicated that no significant statistical difference was observed in Observer Scale of the POSAS between the two groups (MD, 95% CI: −9.14, 0.59; p = 0.09) (Figure 4A). However, according to the original data in the included studies, the original POSAS Observer Scale in the SVF group was significantly lower than that in the non-SVF group.
FIGURE 2 Risk of bias graph: review authors' judgments about each risk of bias item presented as percentages across all included studies.
Vascularity
A total of two articles 12,30 reported the data on vascularity (27 SVF cases vs. 27 non-SVF cases, a total of 54 patients). I² was 35%, and a fixed effects model was applied. The meta-analysis results indicated that there was a significant statistical difference between the two groups (SMD, 95% CI: −1.12, −0.02; p = 0.04) (Figure 4B). The vascularity of the SVF group was significantly lower than that of the non-SVF group. Vascularity is a key factor in promoting scar hyperplasia, and its index decreases obviously under the treatment of SVF, which shows that SVF has a certain effect on inhibiting scar hyperplasia.
Pigmentation
Two articles 12,30 reported the data on pigmentation (27 SVF cases vs. 27 non-SVF cases, a total of 54 patients). I² was 0%, and a fixed effects model was applied. The meta-analysis results indicated that no significant statistical difference was observed in pigmentation between the two groups (SMD, 95% CI: −1.02, 0.06; p = 0.08) (Figure 4C).
Relief
A total of two articles 28,30 reported the data on relief (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 72%, and a random effects model was applied. The meta-analysis results indicated that no significant statistical difference was observed in relief between the two groups (MD, 95% CI: −1.14, 0.16; p = 0.14) (Figure 4D).
Surface area
A total of two articles 29,30 reported the data on surface area (23 SVF cases vs. 23 non-SVF cases, a total of 46 patients). I² was 0%, and a fixed effects model was applied. The meta-analysis results indicated that no significant statistical difference was observed in surface area between the two groups (SMD, 95% CI: −0.91, 0.26; p = 0.27) (Figure 4E).
FIGURE 3 Risk of bias summary: review authors' judgments about each risk of bias item for each included study.
Patient scale of the POSAS
A total of two articles 28,30 reported the data on Patient Scale of the POSAS (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 91%, and a random effects model was applied. The meta-analysis results indicated no significant statistical difference was observed in Patient Scale of the POSAS between the two groups (MD, 95% CI: −7.20, 0.49; p = 0.09) (Figure 5A). However, according to the original data in the included studies, the original Patient Scale of the POSAS in the SVF group was significantly lower than that in the non-SVF group.
Pain
A total of two articles 28,30 reported the data of pain (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 56%, and a random effects model was applied. The meta-analysis results indicated no significant statistical difference was observed in pain between the two groups (MD, 95% CI: −0.87, 0.07; p = 0.10) (Figure 5B).
Itching
A total of two articles 28,30 reported the data of itching (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 17%, and a fixed effects model was applied. The meta-analysis results indicated that there were significant statistical differences in the itching score (MD, 95% CI: −0.61, −0.13; p = 0.002) (Figure 5C). The itching score in the SVF group was significantly lower than that in the non-SVF group. Itching caused by a scar may make patients feel strong discomfort, and its index is obviously decreased under the treatment of SVF, which shows that SVF has a significant effect on scar treatment.
Color
A total of two articles 28,30 reported the data of color (46 SVF cases vs. 63 non-SVF cases, a total of 109 patients). I² was 91%, and a random effects model was applied. The meta-analysis results indicated that no significant statistical difference was observed in color between the two groups (MD, 95% CI: −1.78, 0.48; p = 0.26) (Figure 5D).
Patient and observer scar assessment scale (POSAS)
A total of two articles 28,30 reported the data of POSAS. Observer Scale of the POSAS was assessed for 106 patients (46 SVF cases vs. 63 non-SVF cases) and Patient Scale of the POSAS was assessed for 106 patients (46 SVF cases vs. 63 non-SVF cases). I² was 88%, and a random effects model was applied. The meta-analysis results indicated that there were significant statistical differences (MD, 95% CI: −5.93, −1.47; p = 0.001) (Figure 6). Whether it was Observer Scale of the POSAS or Patient Scale of the POSAS, most of the evaluation indicators were abnormal, such as thickness, color, and pain, which gradually decreased with the effective treatment. The analysis indicated that POSAS in the SVF group was significantly lower than that in the non-SVF group, suggesting that SVF played a more obvious role in scar treatment. However, Egger's test (p = 0.008) (Table S3) and sensitivity analysis (−8.02, −0.76) (Figure 7) showed that there was a certain publication bias.
Thickness
A total of three articles 28-30 reported the data on thickness (Figure 8), and the thickness of the SVF group was significantly lower than that of the non-SVF group.
The thickness was very intuitive in evaluating the curative effect in the treatment of scars. This analysis showed that the thickness of scars was significantly reduced under SVF treatment, that is, SVF was effective in the treatment of scars. No statistically significant publication bias was detected by Egger's test (p = 0.714) (Table S4) and sensitivity analysis (−1.78, −0.13) (Figure 9). In particular, one article did not use POSAS to evaluate the curative effect in the treatment of scars, so its measurement was considered an Observer Scale of the POSAS index by default; the same applies below.
Pliability
A total of three articles 12,28,30 reported the data of pliability. The pliability in Observer Scale of the POSAS was assessed in 54 patients (27 SVF cases vs. 27 non-SVF cases) and the pliability in Patient Scale of the POSAS was assessed for 109 patients (46 SVF cases and 63 non-SVF cases). I² was 0%, and a fixed effects model was applied. The results of the meta-analysis suggested that there was no significant statistical difference in pliability in different assessment scales (MD, 95% CI: −0.57, 0.01; p = 0.06) (Figure 10). After Egger's test (p = 0.787) (Table S5) and sensitivity analysis (−1.03, 0.13) (Figure 11), no statistically significant publication bias was detected, but the result was not stable.
DISCUSSION
In this study, we conducted a meta-analysis on 145 patients from 4 articles in PubMed, Embase, Cochrane, and Web of Science to evaluate the efficacy of SVF in scar treatment. The POSAS was widely used for measurements of scar quality, which encompasses visual, tactile, and sensory characteristics of the scar.31 A statistical analysis was conducted on specific indicators from these scales, and indicators with data from only one study were excluded. We found that SVF had a significant effect on Vascularity, Itching, and Thickness, while there was no significant statistical difference in POSAS Observer Scale, Pigmentation, Relief, Surface area, POSAS Patient Scale, Pain, Pliability, and Color. Furthermore, this study revealed that SVF had an obvious therapeutic effect in the whole scar treatment, no matter from the observer's or the patient's point of view. Interestingly, when Patient Scale of the POSAS or Observer Scale of the POSAS was analyzed alone, there was no significant statistical difference. Nevertheless, when they were evaluated as a whole, there were significant statistical differences.
The inconsistent findings might be because the evaluation scales are subjective and a single evaluation scale is not accurate enough.
In addition, the results from one of the included articles were different from the findings in this study. Hee Tae Koo et al. indicated that it was impossible to prove that SVF had any clinical and histological influence on scar quality.12 However, as a preliminary study, their study has obvious limitations in the number of participants, the number of injections, and the evaluators. This different conclusion warns us that SVF is an innovative but controversial method for the treatment of scars.
FIGURE 11 Sensitivity analysis of pliability.
Hence, further research is needed to investigate the efficacy of SVF in scar treatment.
Finally, this study has several limitations. First of all, our meta-analysis mainly includes three prospective randomized studies and one retrospective case-control study. The sample size is limited. Second, although the studies included in this article mostly employ a prospective cohort design, subgroup analysis is not performed due to the limited number of included studies and the restricted analysis methods. Additionally, certain indicators display significant heterogeneity, so caution is advised when interpreting outcomes with lower credibility. Although this study has the above limitations, it is the first meta-analysis to study the application of SVF in scar treatment. The conclusion of this study is consistent with most published data. SVF is effective in scar treatment, providing the latest evidence-based medical evidence for the clinical application of SVF in scar treatment and offering theoretical support for further prospective large-scale clinical trials.
CONCLUSION
Our statistical analysis shows that SVF has an obvious effect on improving vascularity, itching, and thickness in patients during scar treatment.There are also obvious statistical differences in POSAS.
However, in view of the lack of included studies, heterogeneity, and potential selective bias, further large-scale, prospective, and multi-center clinical trials are needed to clarify the efficacy of SVF in the treatment of scars.
FIGURE 4 Meta-analysis forest plot: (A) Observer Scale of the POSAS. (B) Vascularity. (C) Pigmentation. (D) Relief. (E) Surface area.
FIGURE 5 Meta-analysis forest plot: (A) Patient Scale of the POSAS. (B) Pain. (C) Itching. (D) Color.
FIGURE 6 Meta-analysis forest plot of POSAS.
FIGURE 7 Sensitivity analysis of POSAS.
FIGURE 8 Meta-analysis forest plot of thickness.
FIGURE 9 Sensitivity analysis of thickness.
FIGURE 10 Meta-analysis forest plot of pliability.
Our analysis provides clues for SVF in treating scars. Scar tissue replaces the original normal tissue structure, which has a great influence on the appearance, function, and even psychology of patients, and will make patients feel inferior, anxious, and even depressed. The impact of pathological scars is obvious; they are commonly managed using lasers, scar revision, and/or steroid injection, and the risk of recurrence exists for any therapeutic modality. Current clinical research suggests that incorporating SVF cells into artificial dermis can decrease wound contraction, which is crucial for minimizing scar contracture. This effect might be related to the role of myofibroblasts in reducing wound contraction. During the inflammatory stage, SVF cells could potentially reduce mast cell and myofibroblast levels through immunosuppressive and anti-inflammatory mechanisms. Additionally, SVF cells might hinder fibroblast migration from the wound, offering another explanation for their ability to reduce wound contraction.32 Lee et al. proved the clinical efficacy of SVF in the surgical treatment of contracted and sunken scars, and explained their findings by applying these characteristics to different stages of wound healing.29 However, its exact curative effect has not been recognized at home and abroad. SVF can be obtained with no need for any cell separation or culturing conditions. The commonly used technique for isolating SVF from lipoaspirate involves the digestion of the fatty tissue in the lipoaspirate with collagenase, resulting in two distinct contents: a floating fraction comprising mature adipocytes and a lower aqueous fraction containing the desired cellular components.30,33 The separation can be enhanced by centrifugation. The centrifugation of the aqueous fraction results in a reddish pellet that contains SVF cells, which can be lysed to obtain a purer population of SVF cells. If SVF has a definite effect in the treatment of scars, progress would be made in the treatment of scars. In this case, we performed a meta-analysis of four comparative studies, which played a certain role in guiding the research of SVF in scar treatment. First, it was found in this article that SVF has an obvious therapeutic effect on scar thickness. The research of Elham Behrangi et al. confirmed that SVF had a favorable effect on wound healing.21 The apparent variables of the wound on the treatment side were observed through injection of SVF. The volume, area, and thickness were significantly improved during a month, but this improvement did not occur on the face control side. The research of Hyeokjae Kwon et al. also showed that subcutaneous injection of SVF had an obvious and significant effect in the thickness aspect. Second, our study revealed that SVF was effective in relieving itching caused by a scar. The study of Hyeokjae Kwon et al.
demonstrated that SVF significantly improved pain, itching, color, stiffness, and thickness in POSAS but did not ameliorate pliability.27
TABLE 1 Basic characteristics of included studies.
Enhanced Heat Resistance of Acrylic Pressure-Sensitive Adhesive by Incorporating Silicone Blocks Using Silicone-Based Macro-Azo-Initiator
To improve the heat resistance of acrylic-based pressure-sensitive adhesive (PSA), silicone-block-containing acrylic PSAs (SPSAs) were synthesized using a polydimethylsiloxane (PDMS)-based macro-azo-initiator (MAI). To evaluate the heat resistance of the PSA films, the probe tack and 90° peel strength were measured at different temperatures. The tack curves of the acrylic PSA changed from balanced debonding at 25 °C to cohesive debonding at 50 °C, and the tack value exhibited a sharp decrease. However, in the case of SPSA containing 20 wt% MAI (MAI20), the balanced debonding was maintained at 75 °C, and its tack value hardly changed with temperature. As the MAI content increased, the peel strength at 25 °C decreased due to the microphase separation between PDMS- and acryl-blocks in SPSA, but the shear adhesion failure temperature (SAFT) increased almost linearly from 41.3 to 122.8 °C. Unlike on the stainless steel substrate, SPSA showed improved peel strength on a polypropylene substrate due to the low surface energy imparted by the PDMS block. Owing to the addition of 20 wt% silicone-urethane dimethacrylate oligomer and a 200 mJ/cm² UV irradiation dose, MAI20 showed significantly increased 90° peel strength at 25 °C (548.3 vs. 322.4 gf/25 mm for pristine MAI20). Its heat resistance under shear stress, assessed by the shear adhesion failure test, showed an increase in failure temperature to 177.3 °C compared with the non-irradiated sample.
Introduction
Pressure-sensitive adhesive (PSA) is a semisolid material that adheres to various substrates at room temperature without additional chemical reactions and does not leave a residue after removal. Owing to these unique characteristics, it has been used in various industrial fields, such as packaging, labeling, medical, pharmaceutical, automotive, and electronics [1][2][3][4]. In general, PSA can be divided into three types: acrylic copolymers, polysiloxane, and rubber. Among these three types, acrylic PSA is the most widely used because it has advantages such as low cost, good resistance to light and oxygen, and optical clarity. However, acrylic PSA also has disadvantages, such as low adhesion to low surface-energy substrates and low heat resistance. In particular, the low heat resistance of acrylic PSA is an obstacle to expanding its application [4,5].
Synthesis of Silicone Block-Containing Acrylic PSA (SPSA)
SPSA was synthesized by solution polymerization using MAI (Scheme 1). The weight ratio of the acrylic monomers was fixed at 2-EHA:AA:IBA = 75:5:20, and the composition was changed according to the amount of initiator (see Table 1). Control PSA was synthesized using AIBN, a general thermal initiator. Acrylic monomers (2-EHA, AA, IBA) were dissolved in EA to an overall concentration of 40 wt% and added to a 2-neck round bottom flask equipped with a reflux condenser. Then, the temperature was raised to 80 °C, and the mixture was stirred for 5 min under N2 atmosphere. The initiator was diluted in solvent (40% concentration) and added dropwise. After 12 h, to complete the reaction, the mixture was cooled to room temperature, and solvent was added to adjust the solid content to 40%. In Table 1, the sample codes denote SPSAs synthesized with different amounts of MAI (e.g., MAI5 is the SPSA synthesized with 5 wt% of MAI).
Scheme 1. Synthesis of silicone block-containing acrylic pressure-sensitive adhesive (PSA) (SPSA).
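To make the recipe above concrete, the following minimal Python sketch computes the monomer, initiator, and solvent masses for a 40 wt% solution with the fixed 2-EHA:AA:IBA = 75:5:20 weight ratio. It is illustrative only: the 100 g monomer basis is an assumed batch size, and the MAI loading is interpreted as wt% relative to the total monomer mass, which is a reading of the sample codes rather than a detail stated explicitly in the text.

```python
# Illustrative batch calculation for SPSA solution polymerization.
# Assumptions (not stated explicitly in the text): 100 g total monomer basis,
# MAI loading expressed as wt% relative to total monomer mass,
# and a final solid content of 40 wt% in ethyl acetate (EA).

def spsa_batch(monomer_total_g=100.0, mai_wt_pct=20.0, solid_content=0.40):
    ratio = {"2-EHA": 75, "AA": 5, "IBA": 20}            # fixed weight ratio
    total_ratio = sum(ratio.values())
    monomers = {m: monomer_total_g * r / total_ratio for m, r in ratio.items()}
    mai_g = monomer_total_g * mai_wt_pct / 100.0          # macro-azo-initiator
    solids_g = monomer_total_g + mai_g                    # polymerizable solids
    solution_g = solids_g / solid_content                 # mass of 40 wt% solution
    ea_g = solution_g - solids_g                          # ethyl acetate solvent
    return monomers, mai_g, ea_g

monomers, mai_g, ea_g = spsa_batch(mai_wt_pct=20.0)       # e.g., MAI20
print(monomers, f"MAI: {mai_g:.1f} g", f"EA: {ea_g:.1f} g")
```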
Synthesis of Silicone Urethane Dimethacrylate (SiUDMA)
SiUDMA was synthesized based on our previous work [22], and its chemical structure is given in Figure 2. IPDI (17.83 g, 80 mmol) and DBTDL (0.07 g, 0.1 wt% of total mass) were added to a 250-mL 2-neck round bottom flask and stirred for 15 min at 50 °C under N2 atmosphere. Reactive PDMS, KF6000 (50 g, 53 mmol), was slowly added dropwise over 20 min, followed by stirring for 40 min. To introduce UV cross-linkable functionality, HEMA (3.48 g, 27 mmol) was added dropwise over 20 min, stirred for 40 min, and then cooled to room temperature to complete the reaction. The number- and weight-average molecular weights of SiUDMA were 4800 and 7500, respectively. The solution viscosity was 3199 ± 212 mPa·s (50 wt% in EA).
Preparation of PSA Tape Samples
A photoinitiator (Irgacure 907) was added at 0.1 part to the PSA solutions. To prepare PSA films, solutions of the control PSA and SPSA were spread onto corona-treated polyethylene terephthalate (PET) films using a Baker applicator, and the films were placed in a convection oven at 80 °C for 30 min to remove the residual solvents. After drying, the film thickness measured with a vernier caliper was approximately 50 μm.
To modify the adhesion properties of SPSA, UV irradiation was conducted with SiUDMA oligomer. When SiUDMA was added at 10, 20, 30, and 40 wt% relative to the solid content of MAI20, the highest 180° peel strength was obtained at 20 wt% (see Figure S1 in Supplementary Materials). Accordingly, the SiUDMA content was fixed at 20 wt%, and UV irradiation (200 and 1000 mJ/cm²) was applied to induce a crosslinking reaction after drying. The compositions of the SiUDMA-modified SPSA samples are presented in Table 2 (note: the solid content of MAI20 is 40 wt%; EA is 60 wt%).
Measurements
The number- and weight-average molecular weights (Mn and Mw) and polydispersity index (PDI) for each of the PSAs and SiUDMA were measured using gel permeation chromatography (GPC, PL-GPC 220, Agilent Technologies, Santa Clara, CA, USA). The samples were dissolved in a tetrahydrofuran solution at a concentration of 5 wt%, and the operation temperature was 30 °C.
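As a brief aside, the quantities reported by GPC can be illustrated with a short sketch of how Mn, Mw, and PDI are defined from a molar-mass distribution. The distribution used below is a placeholder, not data from this work.

```python
# Number- and weight-average molecular weights and PDI from a
# hypothetical discrete molar-mass distribution (placeholder values).

def averages(distribution):
    """distribution: list of (n_i, M_i) pairs (moles, molar mass in g/mol)."""
    total_n = sum(n for n, _ in distribution)
    total_nm = sum(n * m for n, m in distribution)
    total_nm2 = sum(n * m * m for n, m in distribution)
    mn = total_nm / total_n          # number-average molecular weight
    mw = total_nm2 / total_nm        # weight-average molecular weight
    return mn, mw, mw / mn           # PDI = Mw / Mn

# Placeholder distribution, purely for illustration.
mn, mw, pdi = averages([(1.0, 50_000), (2.0, 150_000), (1.0, 400_000)])
print(f"Mn = {mn:.0f} g/mol, Mw = {mw:.0f} g/mol, PDI = {pdi:.2f}")
```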
Fourier transform infrared (FTIR) spectra of PSAs were recorded using a NICOLET 6700 (Thermo Fisher Scientific, Waltham, MA, USA). PSA solution was coated on a PET film and then dried at 80 °C for 1 h to form a 50-μm film. Attenuated total reflection (ATR) mode was used with a diamond prism and a 45° incident angle. The measurement range was 650-4000 cm −1 , and the resolution was 4 cm −1 .
Contact angle measurement was performed using a contact angle goniometer (Phoenix 300, Surface & Electro-Optics, Suwon, Korea). Distilled water and diiodomethane were dropped on the PSA film, and the contact angle was recorded after 5 s. The process was repeated three times at 23 ± 2 °C. Based on the measured contact angle, the surface energy of PSAs was calculated by the Owens-Wendt method [23].
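For context, a minimal sketch of the Owens-Wendt calculation is given below. It solves the two-liquid system for the dispersive and polar components of the solid surface energy; the liquid surface-tension components are commonly cited literature values for water and diiodomethane and are assumptions here, not values taken from this paper, and the contact angles in the example are hypothetical.

```python
import math

# Owens-Wendt two-liquid method: a minimal sketch.
# Liquid surface-tension components (mN/m) are commonly cited literature
# values and are assumptions here, not values reported in this paper.
LIQUIDS = {
    "water":         {"total": 72.8, "d": 21.8, "p": 51.0},
    "diiodomethane": {"total": 50.8, "d": 50.8, "p": 0.0},
}

def owens_wendt(theta_water_deg, theta_dim_deg):
    """Return (dispersive, polar, total) surface energy of the solid in mN/m."""
    rows = []
    for name, theta in (("water", theta_water_deg), ("diiodomethane", theta_dim_deg)):
        liq = LIQUIDS[name]
        # gamma_L (1 + cos theta) / 2 = sqrt(gd_S * gd_L) + sqrt(gp_S * gp_L)
        rhs = liq["total"] * (1.0 + math.cos(math.radians(theta))) / 2.0
        rows.append((math.sqrt(liq["d"]), math.sqrt(liq["p"]), rhs))
    (a1, b1, c1), (a2, b2, c2) = rows   # a*x + b*y = c with x=sqrt(gd_S), y=sqrt(gp_S)
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    gd, gp = x * x, y * y
    return gd, gp, gd + gp

# Hypothetical contact angles (degrees), purely for illustration.
print(owens_wendt(theta_water_deg=95.0, theta_dim_deg=65.0))
```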
The solution viscosities of the PSAs were measured using a Brookfield DV-II (Middleboro, MA, USA) at 25 °C. The solid contents of all samples were fixed at 40 wt%. For temperature stabilization, the samples were placed in a viscometer for 20 min. Then, the viscosities of the samples were measured by selecting the suitable rpm and spindle.
The storage modulus (G') and loss tangent of PSAs were measured using a rheometer (MCR 102, Anton Paar, Graz, Austria). The samples were mounted on an 8-mm-diameter round plate, and the gap between the plates was 0.5 mm. The plates were twisted under conditions of 1% strain and 1-Hz frequency. The temperature range was from −40 to 80 °C with a 10 °C/min heating rate.
The gel fraction of each PSA was obtained by an extraction method using metallic paper (20 mesh). The dried PSA sample (1 g) was wrapped with metallic paper, then put in a 40 °C EA solution and stirred for 24 h. The remaining solid in the metallic paper was filtered and dried in an 80 °C vacuum oven until a constant weight was obtained. The gel fraction was calculated by comparing the extracted sample weight to the initial sample weight.
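The gel-fraction arithmetic described above is a simple mass ratio; the sketch below illustrates it. The 1 g initial sample mass comes from the procedure, while the dried-residue value is a placeholder.

```python
# Gel fraction = (insoluble residue after extraction) / (initial sample mass) * 100
def gel_fraction(initial_g, dried_residue_g):
    return 100.0 * dried_residue_g / initial_g

# 1 g initial sample as in the procedure; the residue mass is a placeholder.
print(f"Gel fraction: {gel_fraction(1.0, 0.16):.1f}%")   # -> 16.0%
```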
The haze of each PSA film was measured using the HZ-V3 haze meter (Suga, Japan). After the baseline was measured using an uncoated PET film, the hazes of the PSA films were measured.
The probe tacks, loop tacks, and 90° and 180° peel strengths of the PSA films were measured using a SurTA system (ChemiLab, Suwon, Korea). For probe tack measurement, a SUS (type 304) plate coated with a 60-µm-thick PSA layer was mounted on the bottom grip of the instrument. The temperature of the plate was set at 25, 50, 75, and 100 °C. The probe diameter was 1/8 inch. The debonding speed was 0.5 mm/s, and the maximum debonding load value was recorded.
For 90° peel strength measurement, PSA films were prepared by cutting into a size of 25 mm × 60 mm. The samples were attached to the SUS, rolled twice using a 2-kg rubber roller, and left for 20 min. After that, the sample was mounted on the instrument, and the temperature was set at 25, 50, 75, and 100 °C. The peeling speed was 300 mm/min.
The loop tack of the PSA film was evaluated by the ASTM D6195 method. The dried PSA film was prepared in a size of 25 mm × 100 mm. The sample was inserted into the upper grip in a loop shape to expose the adhesive side, and the substrates were inserted into the bottom grip. SUS and PP were used as substrates, the surface energies of which were 46.1 and 32.5 mN/m, respectively. When the contact area between the film and the substrate reached 25 mm × 25 mm, the looped PSA film was raised at a speed of 0.5 mm/s, and the maximum force was recorded.
To measure the 180° peel strength, a PSA film was prepared with a size of 25 mm × 60 mm, attached to the substrates, and rolled twice with a 2-kg rubber roller. The peeling speed was 300 mm/min.
The shear adhesion failure test (SAFT) was performed to evaluate the heat resistance and shear strength of the PSA films. Each specimen (25 mm × 25 mm) was attached to the SUS and left at 25 °C for 24 h. A load was applied using a 1-kg weight, and each sample was placed in an oven and heated from 25 to 200 °C at a rate of 0.5 °C/min.
Synthesis and Characterization of SPSAs
The SPSAs were synthesized using MAI, and their basic properties are displayed in Table 3. As the MAI content increased, the molecular weight of SPSA decreased, and its PDI value increased. This is well understood by considering that the molecular weight of acrylic PSA decreases and its PDI value increases with the concentration of azo-initiator [1,20,24]. In particular, MAI30 showed a rapid increase in solution viscosity during polymerization. This is due to the gelation caused by the excessive reaction heat at high MAI concentration. The gel fraction of MAI30 was calculated to be 15.84%. As a result of gel formation, the SPSA becomes non-homogeneous, and its molecular mass is most likely determined only for the smaller polymer molecules dissolved in solvent, not for the polymer in the gel-like state dispersed in solvent. Meanwhile, MAI5 showed a higher molecular weight and viscosity than control PSA, although the two samples have the same concentration of azo groups. This is because MAI is a macro-azo-initiator whose average molecular weight is 82,300 g/mol, which is much larger than that of AIBN.
Adhesion Property and Thermal Stability of SPSA
Figure 4a shows the probe tack of various SPSAs as a function of temperature. The probe tack of control PSA decreased rapidly with temperature. However, the probe tack of SPSAs decreased less steeply with temperature compared to control PSA. Moreover, it changed much less with temperature if the MAI content was more than 20 wt% (MAI20 and MAI30). Figure 4b,c shows the unloading behavior of the probe tack of the control and MAI20 PSAs at different temperatures. The control PSA showed a plateau after debonding at 25 °C, the load rapidly decreased to 0, and no residue was found on the probe. However, as the temperature increased to 50 °C, liquid-like behavior began to appear, and high elongation without a plateau was observed; residue was left on the probe. According to Gdalin et al., tack curves are largely divided into three categories, balanced, cohesive, and solid-like debonding, depending on the balance between the cohesive force and free volume of the PSA [25].

The behavior of control PSA at 25 °C is "balanced debonding," which occurs when the cohesive force and free volume are balanced. The behavior above 50 °C is expressed as "cohesive debonding," which occurs in a liquid-like adhesive with low cohesive force. This means that the cohesive force of the control PSA decreases sharply while the free volume increases with temperature. In addition, at 50 °C or above, the load does not become zero, and elongation continues. This means that, although the cohesive force of the PSA decreases, its adhesion strength increases due to the improved surface wetting resulting from the viscosity lowered by temperature (see the inset of Figure 4b). In contrast, MAI20 showed similar probe tack curves at 50 °C or above, which was different from the control PSA. As the temperature increased, the maximum force value decreased somewhat, but a plateau still remained, and the final load became zero (see the inset of Figure 4c). This means that MAI20 belongs to optimized adhesion (balanced debonding) and has superior heat resistance to control PSA.

Figure 5 shows the 90° peel strength of SPSAs as a function of temperature. The trend was similar to that of the probe tack. In the MAI20 and MAI30 samples, cohesive failure did not appear until 100 °C, and their peel strength did not change significantly with temperature, although their initial values were relatively small. To explain this, the viscoelastic properties of the control PSA and MAI20 were examined and are displayed in Figure 6. Figure 6 shows that the storage modulus (G') of control PSA decreased sharply with temperature, and the slope changed and became almost constant at approximately 100 °C, indicating that melting occurred. In contrast, MAI20 showed relatively lower storage modulus variation with temperature compared to control PSA (25 °C/100 °C = 86,841/12,280 vs. 25 °C/100 °C = 45,992/2397). This helps to understand the relatively small changes in the probe tack and 90° peel strength of MAI20 with temperature. In the case of tan δ (see Figure 6b), the glass transition temperature (Tg) of MAI20 was lower than that of control PSA (−13.86 °C vs. −9.86 °C), which was due to the contribution of the higher flexibility of PDMS to the MAI20 flexibility at the molecular level.
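The storage-modulus comparison above can also be expressed as a retention ratio. The short sketch below does that arithmetic with the G' values quoted in the text; assigning the first 25 °C/100 °C pair to MAI20 and the second to the control PSA follows the order of the comparison in the sentence and is an interpretation, not an explicitly labeled assignment.

```python
# G' retention between 25 °C and 100 °C, using the values quoted in the text.
# Assignment of the two value pairs (first = MAI20, second = control PSA) follows
# the order of the comparison in the sentence and is an interpretation.

def retention(g25, g100):
    return 100.0 * g100 / g25          # % of room-temperature modulus retained

samples = {"MAI20": (86_841, 12_280), "control PSA": (45_992, 2_397)}
for name, (g25, g100) in samples.items():
    print(f"{name}: G' drops by a factor of {g25 / g100:.1f} "
          f"({retention(g25, g100):.1f}% retained at 100 °C)")
```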
In the case of the 90° peel strength at 25 °C, the values decreased significantly with MAI content (see Figure 5). This is due to the microphase separation between the PDMS and acrylic blocks in the SPSA polymer chains. This microphase separation can be estimated by measuring the haze of SPSAs, as shown in Figure 7. The haze of SPSAs increased steeply with the MAI content, which means that microphase separation was induced by MAI. The effect of phase separation on the adhesion properties of PSA was well studied by Kim et al. [26][27][28]. They found that the tack, peel strength, and shear strength of adhesives decrease sharply with phase separation. Therefore, the compatibility of SPSAs should be improved to overcome this problem, which will be discussed in Section 3.4.
A shear adhesion failure test (SAFT) was also performed to evaluate the shear strength and heat resistance characteristics of the SPSA (Figure 8). The failure temperature increased linearly according to the PDMS content in the adhesive. Although the molecular weight of SPSA decreased slightly with MAI content, the improvement in heat resistance provided by PDMS prevailed.

Figure 9 shows the water and diiodomethane contact angles and surface energies of SPSAs synthesized with different amounts of MAI. SPSAs showed a high contact angle and low surface energy compared with control PSA. However, the changes in contact angle and surface energy with increasing MAI content were not large. This means that most of the surface of the SPSA film is saturated with the PDMS chain even with a small amount of MAI. This result can be understood by referencing the work of Inoue et al. [29]. They reported that the water contact angle of a poly(dimethylsiloxane-b-methyl methacrylate) block copolymer was higher than 100° even with 1-2% PDMS content. They found that, if the molecular weight of the PDMS block is over 2000, the block copolymer has a PDMS-rich surface even at a small amount of PDMS. The PDMS chain of MAI is long enough (Mn = approximately 10,000) for this phenomenon to occur.
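Returning to the SAFT trend noted above, only the end-point values (41.3 and 122.8 °C) are quoted in the abstract, but the reported near-linear rise with MAI content can be illustrated with a simple interpolation. This is purely illustrative: it assumes the end points correspond to the control (0 wt% MAI) and MAI30 (30 wt% MAI), and the intermediate values it prints are estimates under that linearity assumption, not measured data.

```python
# Illustrative linear interpolation of SAFT failure temperature vs. MAI content.
# Assumes the quoted end points (41.3 C at 0 wt% MAI for the control and
# 122.8 C at 30 wt% MAI for MAI30) and the reported near-linear trend.
# Intermediate values are estimates only, not measurements.

def saft_estimate(mai_wt_pct, t0=41.3, t30=122.8):
    slope = (t30 - t0) / 30.0           # ~2.7 C per wt% MAI under linearity
    return t0 + slope * mai_wt_pct

for pct in (0, 5, 20, 30):
    print(f"MAI {pct:>2} wt%: ~{saft_estimate(pct):.1f} °C (estimate)")
```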
Adhesion Property of SPSA on SUS and PP
Owing to the low surface energy, SPSA showed improved adhesion properties on the low-surface-energy substrate. Figure 10 shows the loop tack and 180° peel strength of SPSAs measured on SUS and PP. Compared with control PSA, the SPSA series showed remarkably higher peel strength on PP. The control PSA showed cohesive failure on SUS, whereas it showed stick-slip behavior on PP. Stick-slip is known to occur if the strength of the adhesive bond (σa) is similar to the cohesive strength of the PSA (σc) [30]. Because the surface energy of control PSA is higher than that of PP (36.9 vs. 32.5 mN/m), σa of control PSA on PP will decrease and become close to σc. Meanwhile, the tack and peel strength of SPSA decreased with MAI content. This is due to the microphase separation between the two incompatible blocks (PDMS and acrylic blocks) in MAI, as explained previously.
Adhesion Property Modification of SPSA by SiUDMA
As displayed in Figure 5, MAI20 and MAI30 had overly low 90° peel strength at 25 °C, although this level was maintained over the entire temperature range. To improve the adhesion properties of these SPSAs, SiUDMA oligomer was added, and UV irradiation was conducted. SiUDMA was added to MAI20, which showed high thermal stability, and the detailed properties of SiUDMA-modified MAI20 are given in Table 4. By the addition of SiUDMA, the solution viscosity (40 wt% solid in EA, 25 °C) of MAI20 was reduced from 58,300 mPa·s to 25,500 mPa·s owing to the low viscosity of SiUDMA, and the film haze also decreased from 34.65% to 18.34% due to partial suppression of phase separation [22].
Figure 11 shows the unloading behavior of the tack of SiUDMA-modified MAI20. In the case of MAI20/20, the tack increased significantly, and a plateau appeared after the maximum peak (Figure 11a,b). SiUDMA induced balanced debonding by reducing the cohesive force and increasing the adhesive property. UV irradiation decreased the tack and plateau regions by crosslinking. As the temperature increased to 100 °C (Figure 11c), high elongation was observed in MAI20/20, and the tack did not immediately drop to zero, which means that MAI20/20 behaved like a liquid-like adhesive, as explained in Section 3.2. The UV-crosslinked samples exhibited balanced or solid-like debonding tack behavior at 100 °C depending on the UV dose, as exemplified for the two dose values.
Figure 11. Probe tack of SiUDMA-modified MAI20 as a function of temperature (a), probe tack curves at 25 °C (b), and 100 °C (c).
Figure 12 shows the 90° peel strength of the SiUDMA-modified MAI20 series. MAI20/20 showed improved peel strength compared to MAI20 (892.9 vs. 322.4 gf/25 mm), but the failure mode changed from adhesive to cohesive and the peel strength decreased steeply with temperature. With the 200-mJ/cm² UV dose (MAI20/20U1), the peel strength at 25 °C decreased but was still higher than that of MAI20 (548.3 vs. 322.4 gf/25 mm). MAI20/20U1 showed thermally stable peel strength until 100 °C. However, a UV dose of 1000 mJ/cm² reduced the peel strength too much but improved the thermal stability.
The SAFT of the SiUDMA-modified MAI20 series (Figure 13) showed that the failure temperature of MAI20/20 decreased because of the plasticizing effect due to the low molecular weight of SiUDMA. However, the UV-irradiated sample significantly improved the failure temperature via the formation of a semi-IPN structure. As displayed in Table 4, depending on the UV dose, the gel fraction increased from 0% to 28.43% (200 mJ/cm²) and 88.21% (1000 mJ/cm²). This indicates that a crosslinking reaction occurred in this system.
Conclusions
Silicone block-containing acrylic PSAs (SPSAs) were synthesized using a PDMS-based macro-azo-initiator (MAI) to improve the adhesion properties of acrylic PSA at an elevated temperature, and the following conclusions were obtained.
SPSAs can be easily synthesized by incorporating PDMS blocks into the acrylic PSA using MAI. SPSA shows a higher viscosity than PSA synthesized using AIBN containing the same concentration of azo group. As the content of MAI increases, the molecular weight of SPSAs decreases, and the molecular weight distribution becomes broader.
Due to the PDMS blocks, SPSAs exhibit relatively small changes in tack and peel strength with temperature compared to acrylic PSA because the storage modulus of SPSA gradually decreases with temperature. Furthermore, SPSA with a high MAI content maintains balanced debonding even at a relatively high temperature.
The surface of SPSA is almost saturated with PDMS chains even at a small amount of MAI because of the large size of PDMS in MAI. Accordingly, the surface energy of SPSA changes slightly with MAI content. Owing to its low surface energy, SPSA shows improved adhesion properties on PP, a low-surface-energy substrate.
However, SPSAs have lower tack and peel strength than acrylic PSA at 25 °C due to the microphase separation between PDMS and acrylic blocks. The microphase separation in SPSA is alleviated by the addition of SiUDMA oligomer, and the 90° peel strength at 25 °C increases, with the failure mode changing from adhesive to cohesive. However, the tack and peel strength of SPSA decrease rapidly with temperature due to the plasticizing effect of SiUDMA. Through the formation of a semi-IPN structure by an appropriate UV irradiation dose, it is possible to minimize the reduction in tack and peel strength and improve thermal stability.
Conflicts of Interest:
The authors declare no conflict of interest.
Antibody Afucosylation Augments CD16-Mediated Serial Killing and IFNγ Secretion by Human Natural Killer Cells
One mechanism by which monoclonal antibodies (mAb) help treat cancer or autoimmune disease is through triggering antibody-dependent cellular cytotoxicity (ADCC) via CD16 on Natural Killer (NK) cells. Afucosylation is known to increase the affinity of mAbs for CD16 on NK cells and here, we set out to assess how mAb afucosylation affects the dynamics of NK cell interactions, receptor expression and effector functions. An IgG1 version of a clinically important anti-CD20 mAb was compared to its afucosylated counterpart (anti-CD20-AF). Opsonization of CD20-expressing target cells, 721.221 or Daudi, with anti-CD20-AF increased NK cell cytotoxicity and IFNγ secretion, compared to anti-CD20. The afucosylated mAb also caused a more rapid and greater loss of CD16 from NK cell surfaces. Loss of CD16 has recently been shown to be important for NK cell detachment and sequential engagement of multiple target cells. Here, live-cell time-lapse microscopy of individual cell-cell interactions in an aqueous environment and a three-dimensional matrix, revealed that anti-CD20-AF induced more rapid killing of opsonized target cells. In addition, NK cells detached more quickly from target cells opsonized with anti-CD20-AF compared to anti-CD20, which increased engagement of multiple targets and enabled a greater proportion of NK cells to perform serial killing. Inhibition of CD16 shedding with TAPI-0 led to reduced detachment and serial killing. Thus, disassembly of the immune synapse caused by loss of cell surface CD16 is a factor determining the efficiency of ADCC and antibody afucosylation alters the dynamics of intercellular interactions to boost serial killing.
INTRODUCTION
Natural Killer (NK) cells are key players in immune defense against cancerous or virally infected cells. NK cell activation is controlled by the balance of signals from a variety of germline encoded activating and inhibitory receptors (1). Upon activation, NK cells can directly kill diseased cells by secretion of lytic granules that typically contain pore-forming perforin and lytic granzymes (2,3). NK cells also augment the immune response by secreting immuno-stimulatory cytokines and chemokines including pro-inflammatory interferon gamma (IFNγ) and tumor necrosis factor alpha (TNFα) (4,5).
Assembly of the immune synapse has been widely studied (6,7), but how activating signals are terminated and how NK cells dissociate from target cells are understudied elements of the overall process (8). An inability for NK cells to detach from target cells leads to prolonged engagement and increased cytokine production (9). Detachment after lysis allows NK cells to move onto additional target cells (10) and serial killing by NK cells has been revealed by in-vitro live microscopy (11,12). However, while it is clear that NK cell detachment is important for effective NK cell killing, very few specific mechanisms have been described.
Fc receptors allow NK cells to recognize and kill antibody-opsonized target cells independent of other co-stimulatory signals (13); a process called antibody dependent cellular cytotoxicity (ADCC). The ability of NK cells to perform ADCC is critical in targeted immunotherapies based on monoclonal antibodies (mAbs) (14). NK cells respond to immunotherapies through their low affinity immunoglobulin gamma Fc region receptor III (FcγRIII), also known as CD16 (15). Ligand- or cytokine-induced activation triggers rapid and irreversible shedding of CD16 (16)(17)(18), which can serve as a regulatory mechanism that inhibits the activation of NK cells and prevents excessive inflammatory responses (19). However, we have recently reported that loss of CD16 from NK cell surfaces can allow NK cells to detach from their targets to enhance serial killing (20).
Cancer cell elimination via mAbs involves at least four different mechanisms; ADCC, complement dependent cellular cytotoxicity (CDC), antibody dependent cellular phagocytosis (ADCP) and direct signaling induced cell death (21). ADCC is established to be clinically important and the majority of mAbs approved for treatment in oncology trigger ADCC (22). Specifically, engagement of CD16 has been shown to be important in B-cell chronic lymphocytic leukemia (CLL) and non-Hodgkin's lymphoma, which are treated with anti-CD20 mAbs (23,24). Most anti-CD20 mAbs used clinically are IgG1 (25,26), although various Fc modifications such as glycosylation and afucosylation have been tested in attempts to improve immunotherapy efficacy (27). N-glycans that are located on the Fc portion of human IgG are normally highly fucosylated (∼90%) (28). ADCC assays have shown that lower Fc fucosylation leads to increased CD16-mediated killing (29,30). Removing fucose from asparagine 297 of the antibody heavy chain increases the Fc binding affinity to CD16 and improves Fc-triggered effector functions (31,32). Studies have shown that afucosylated anti-CD20 mAbs triggered more efficient malignant B-cell depletion in vitro (33,34). Afucosylation has also been shown to trigger ADCP of opsonized targets by macrophages at lower mAb concentrations than their unmodified equivalents (35). However, in contrast, Fc afucosylation reduced the binding affinity to complement, and thus CDC associated killing (33,35). Obinutuzumab (GA101), a highly glycosylated and afucosylated anti-CD20 mAb was developed and shown to have superior cytotoxic activity in vitro, and anti-tumor efficacy in vivo compared to its native IgG1 form (36). Despite advances in understanding of mAb afucosylation, little is known about how afucosylation affects NK cell contacts with opsonized targets and the killing kinetics of these interactions.
Here, we investigated how afucosylation of an IgG1 anti-CD20 mAb affects primary NK cell lytic responses. We found that an afucosylated anti-CD20 mAb (anti-CD20-AF) induced a more efficient cytolytic response and improved target-specific lysis. In addition, anti-CD20-AF increased CD16 shedding which led to faster killing and increased sequential killing of opsonized targets in comparison to anti-CD20 mAb. We also demonstrated that inhibition of CD16 shedding abrogated the ability of NK cells to kill multiple anti-CD20-AF-opsonized targets. Taken together, enhanced serial killing and efficient target cell elimination indicate that mAb afucosylation can augment NK cell lytic potential and consequently improve the effectiveness of immunotherapies.
RESULTS
Afucosylated Anti-CD20 mAb Triggers More Killing and IFNγ Release but Downmodulates CD16

Ligation of the Fc receptor, CD16, on NK cells triggers cytolytic activity and the secretion of pro-inflammatory cytokines (5). Here, we used an afucosylated version of anti-CD20 mAb (anti-CD20-AF) that was afucosylated using POTELLIGENT technology (BioWa).
To investigate how afucosylation affected NK cell functional responses, NK cells were incubated with target cells opsonized with anti-CD20 or anti-CD20-AF mAb; either the transformed B cell line, 721.221 (hereafter referred to as 221), or the Burkitt's lymphoma cell line, Daudi (37,38). It may be important to note that neither of the cell lines tested expresses classical class I MHC proteins, and both are particularly susceptible to NK cell-mediated lysis. To assess opsonization, 221 cells were incubated with increasing concentrations of each mAb. Binding of the mAbs was detected using a fluorescent secondary antibody, with maximal opsonization reached at 10 µg/mL (Supplementary Figure 1A). Opsonization of 221 target cells with anti-CD20 and anti-CD20-AF partially reduced surface CD20 availability to fluorescently labeled anti-CD20 added after opsonization, indicating that loading was equivalent between the two antibodies but that not all surface CD20 was bound by the mAbs when opsonizing targets (Supplementary Figure 1B).
NK cell cytolytic responses can be assessed by quantification of surface CD107a, a marker of lytic granules, that transiently appears on the surface of degranulating NK cells (42). To assess how afucosylation affects NK cell degranulation, NK cells were co-incubated with target cells and the percentage of CD107a + NK cells was measured. In the absence of target cells, a small fraction of NK cells exhibited some level of staining with an anti-CD107a mAb (4.4 ± 0.8%). However, in the presence of 221 and Daudi target cells 25.5 ± 9.3% and 15.4 ± 6.5% of NK cells degranulated, respectively (Figures 1D,E). Opsonization of 221 cells with anti-CD20 mAb significantly increased the amount of degranulating NK cells (32.4 ± 8.1%), while anti-CD20-AF opsonization further increased degranulating NK cells (50.0 ± 6.8%). Similarly, anti-CD20 opsonized Daudi cells triggered 23.5 ± 7.4% of NK cells to degranulate, while anti-CD20-AF triggered 36.9 ± 7.2% ( Figure 1E). Together these data are consistent with the higher affinity anti-CD20-AF mAb triggering more NK cell degranulation.
In parallel, CD16 surface expression was quantified relative to unstimulated NK cells. Co-culture with non-opsonized 221 and Daudi target cells induced significant decreases in CD16 surface expression (Figures 1F,G). This is consistent with previous observations that NK cell activation leads to a loss of CD16 from the cell surface, even in the absence of specific ligation of CD16 (17). Opsonization of 221 target cells with anti-CD20 mAb significantly reduced surface CD16 (54.2 ± 12.7% reduction), whereas targets opsonized with anti-CD20-AF reduced surface CD16 expression to an even greater extent (78.8 ± 5.5% reduction; Figure 1F). Similarly, opsonization of Daudi target cells with anti-CD20 significantly reduced surface expression of CD16 by 59.6 ± 8.8%, and anti-CD20-AF reduced this by 73.9 ± 6.8% (Figure 1G). Across both cell lines, degranulation and CD16 surface expression were inversely correlated (221, R² = 0.55 and Daudi, R² = 0.57; Figures 1H,I), implying that both happen concurrently with NK cell activation. The expression of other NK cell receptors and of the integrin LFA-1 was also compared on NK cells co-incubated with Daudi or Daudi cells opsonized with anti-CD20 or anti-CD20-AF. We found that surface expression of the activating receptors NKG2D, NKp30, NKp44, and NKp46, or the integrin LFA-1, was not altered (Supplementary Figure 2). However, NKG2D and NKp30 expression was reduced when NK cells were co-incubated with target cells expressing their respective ligands, Daudi-MICA (43) or K562 (44). Overall, anti-CD20-AF mAb triggers potent NK cell activation beyond that seen with anti-CD20 mAb, resulting in strongly reduced expression of surface CD16 with no effect on other activating receptors.
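As a side note, the two read-outs discussed above, the percent loss of surface CD16 relative to unstimulated NK cells and its inverse relationship with the fraction of degranulating cells, reduce to simple arithmetic. The sketch below illustrates that arithmetic on hypothetical placeholder values, not on the actual data from this study.

```python
# Percent reduction of CD16 surface expression relative to unstimulated NK cells,
# and the correlation strength (R^2) between CD16 level and % CD107a+ NK cells.
# All numbers below are hypothetical placeholders, not data from this study.

def pct_reduction(stimulated_mfi, unstimulated_mfi):
    return 100.0 * (1.0 - stimulated_mfi / unstimulated_mfi)

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

print(pct_reduction(stimulated_mfi=250.0, unstimulated_mfi=1000.0))  # 75.0% loss

cd16_remaining = [80.0, 55.0, 40.0, 20.0]   # hypothetical % of baseline CD16
cd107a_positive = [10.0, 25.0, 35.0, 50.0]  # hypothetical % degranulating NK cells
print(r_squared(cd16_remaining, cd107a_positive))
```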
Anti-CD20-AF mAb Augments Rapid Serial Killing of Opsonized Targets
To investigate how mAb afucosylation impacts the killing kinetics of NK cells, we performed live-cell time-lapse microscopy in fibronectin-coated microwells (450 × 450 µm) (45). 221 or Daudi cells were used as target cells and stained with one dye, Calcein Green, before co-incubation with NK cells stained with another dye, Calcein Red-Orange, at an E:T ratio of 1:3. Target cells were also stained with another dye, To-Pro-3, and cell death was indicated by its release (Figures 2A,B). This system enabled detailed quantification of NK cell-target cell interactions, with total contact time, time to kill and time to detachment after kill all assessed ( Figure 2C). Example interactions are shown in Figure 2B and Supplementary Videos 1-5. In some cases, target cells were killed very quickly, within 12 min (Supplementary Video 1), while other targets required interactions of almost 3 h to induce lysis (Supplementary Video 2). Detachment is a crucial step in NK cell serial killing but occasionally NK cells can kill a second target without prior detachment from the previous target (Supplementary Video 3). Interestingly, some NK cells killed two targets within a short time span (Supplementary Video 4), and some killed several targets sequentially ( Figure 2B and Supplementary Video 5).
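The per-contact metrics described above (total contact time, time to kill, time to detach after the kill) reduce to differences between annotated time points. The sketch below is a hypothetical illustration of that bookkeeping; the data structure and the event times are invented for the example and are not taken from the imaging data in this study.

```python
# Per-contact kinetics from annotated NK cell-target interactions.
# Each event records the frame times (in minutes) at which the NK cell made
# contact, the target died (if it did), and the NK cell detached (if it did).
# The example events below are hypothetical, not measurements from this study.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    t_contact: float            # min, start of conjugation
    t_kill: Optional[float]     # min, target death (To-Pro-3 influx), or None
    t_detach: Optional[float]   # min, NK cell detachment, or None

def summarize(c: Contact):
    time_to_kill = None if c.t_kill is None else c.t_kill - c.t_contact
    detach_after_kill = (None if c.t_kill is None or c.t_detach is None
                         else c.t_detach - c.t_kill)
    total_contact = None if c.t_detach is None else c.t_detach - c.t_contact
    return time_to_kill, detach_after_kill, total_contact

# Hypothetical serial-killing NK cell: two sequential contacts.
for contact in (Contact(0.0, 24.0, 36.0), Contact(40.0, 67.0, 78.0)):
    print(summarize(contact))
```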
Detailed analysis of NK cell-target interactions was performed, using either opsonized 221 or Daudi cells. First contacts between NK cells and anti-CD20 mAb opsonized 221 cells that led to kills resulted in less frequent detachment (53.3 ± 3.9%) than anti-CD20-AF-opsonized 221 cells (71.9 ± 2.6%; Figure 2D), whilst NK cells which contacted and killed a second opsonized target cell were equally likely to detach with either mAb. We then focused on contacts between NK cells and opsonized 221 cells that resulted in kills and examined the duration of lysis and total contact time. Opsonization with anti-CD20-AF significantly reduced the time required to kill target cells, 24 ± 10.2 min, in comparison to anti-CD20, 49.8 ± 3.3 min ( Figure 2E). The time taken to kill a second cell was also reduced for 221 targets opsonized with anti-CD20-AF (27.3 ± 7.9 min) compared to anti-CD20 mAb (54 ± 26.1 min). Importantly, detachment time following the first kill was significantly longer with anti-CD20-opsonized 221 cells (28.1 ± 12.8 min compared to anti-CD20-AF, 11.6 ± 2.5 min; Figure 2F). Similarly, detachment after a second kill was faster in the presence of anti-CD20-AF (10.8 ± 3.3 min compared to anti-CD20 mAb, 22.5 ± 2.7 min; Figure 2F). Taken together, both first and subsequent contacts between NK cells and anti-CD20-AF opsonized 221 cells that resulted in a kill were significantly shorter (1st, 42.7 ± 6.6 min and 2nd, 40.8 ± 13.4 min) than with anti-CD20-opsonized 221 cells (1st, 60 ± 8.8 min and 2nd, 81.7 ± 26.4 min; Figure 2G). Altogether, these data indicate that anti-CD20-AF facilitated faster killing and more efficient detachment than anti-CD20 mAb.
To test whether or not these results were generalizable to other target cells, we used Daudi as an alternative CD20-expressing cell. For Daudi, there was a significant difference in the number of cells which could detach after first (56.8 ± 3.1% for anti-CD20-AF compared with 44 ± 3.9% for anti-CD20 mAb) or second contacts (59.8 ± 3.3% for anti-CD20-AF compared with 45.2 ± 8.4% for anti-CD20; Figure 2H). The time taken for NK cells to kill the first target cell was significantly shorter for cells opsonized with anti-CD20-AF (21.7 ± 13.1 min compared to 76.5 ± 31.3 min for anti-CD20 mAb; Figure 2I). While the detachment time was unaffected for first or second kills (Figure 2J), the total duration of the second contact of anti-CD20-AF-opsonized Daudi was significantly shorter (43.1 ± 14.4 min compared to 92.6 ± 27 min for cells opsonized with anti-CD20 mAb; Figure 2K). Collectively, these data indicate that the higher affinity anti-CD20-AF mAb altered the dynamics of intercellular interactions and induced more efficient killing of multiple opsonized target cells.
Anti-CD20-AF mAb Augments Rapid Serial Killing of Opsonized Targets in 3D Environments
Intercellular dynamics are known to vary in 3D environments (46). Thus, to analyze NK cell killing kinetics in a more physiological environment, we next sought to investigate whether afucosylation affects NK cell serial killing in a 3D environment. NK cells were labeled with Calcein Red-Orange and co-incubated with 221 target cells stained with Calcein Green opsonized with anti-CD20 mAb or anti-CD20-AF in a 3D Matrigel for 6 h, while dead cells were indicated by To-Pro-3 ( Figure 3A). The density of target cells here was higher, compared to microwells tested above, therefore more NK cells were serial killers. In some cases, NK cells killed up to 5 targets sequentially by properly detaching (Figure 3B and Supplementary Video 6), whereas in other cases NK cells killed multiple available neighboring targets without prior detachment (Supplementary Videos 7, 8). Occasionally, NK cells could kill multiple targets within a particularly short time span (Supplementary Video 9). Interestingly, detachment from first contacts that led to kills was not clearly affected by the mAb used for opsonization but a significantly greater number of NK cells detached from target cells opsonized with anti-CD20-AF and went on to kill a second time (62.4 ± 9.5% compared to 39.8 ± 7.5% for anti-CD20 mAb; Figure 3C). The speed of first kills was slightly, but not significantly faster in the presence of anti-CD20-AF opsonized 221 target cells (25.1 ± 2.8 min compared to 39.3 ± 15.8 min for anti-CD20; Figure 3D). However, there was no difference in the time taken for a second kill. Strikingly, anti-CD20-AF caused significantly faster detachment after second kills (15.3 ± 4.3 min for anti-CD20-AF compared to 28.8 ± 6.1 min for anti-CD20 mAb; Figure 3E). Although the total contact time for first kills was not significantly affected by the afucosylated mAb, second kills of anti-CD20-AF-opsonized target cells were significantly shorter, 39.7 ± 8.7 min, compared to anti-CD20 mAb, 64.5 ± 9.4 min ( Figure 3F). Thus, mAb afucosylation boosted NK cell killing kinetics in a 3D microenvironment.
Notably, in a 3D environment, a greater fraction of NK cells engaged in serial interactions with target cells, irrespective of the mAb they were opsonized with. In microwells, 75.5 ± 3.7% and 60.9 ± 7.5% of NK cells were restricted to a single kill for target cells opsonized with anti-CD20 mAb or anti-CD20-AF, respectively. However, this reduced to 65.8 ± 9.1% and 43.1 ± 9.6% in a 3D environment (Figure 3G). In addition, there was a significant increase in the number of NK cells killing three or more target cells opsonized with anti-CD20-AF (18.9 ± 5.9% compared to 7.4 ± 3.6% with anti-CD20; Figure 3G). Collectively, these data establish that the higher affinity anti-CD20-AF mAb triggered more rapid and efficient killing of opsonized targets, and augmented serial killing of multiple targets in microwells and a 3D environment.
TAPI-0 Inhibitor Triggers IFNγ Secretion, but Blocks Serial Killing
Shedding of CD16 on activated NK cells via the metalloproteinase ADAM17 has been described to aid serial engagement of multiple opsonized targets (20). Using TAPI-0, we investigated how inhibition of CD16 shedding affected NK cell function. NK cells were incubated alone or with 221 targets or 221 targets opsonized with anti-CD20 or anti-CD20-AF for 5 h in the presence of TAPI-0 inhibitor after which we measured surface expression of CD16 and the marker of NK cell degranulation, CD107a. While CD16 expression was not affected on unstimulated NK cells with varying doses of TAPI-0 ( Figure 4A), CD16 shedding following co-incubation with 221 cells was inhibited in all conditions in the presence of TAPI-0 (5 µM). TAPI-0 completely inhibited CD16 shedding and even resulted in increased CD16 expression on NK cells co-incubated with non-opsonized or anti-CD20 opsonized 221 cells. However, CD16 levels were still reduced by 20.2 ± 12% in the presence of anti-CD20-AF opsonized 221 cells despite inhibition with TAPI-0 ( Figure 4B and Supplementary Figure 3). TAPI-0 did not affect the fraction of NK cells expressing CD107a, indicating that degranulation was unaffected (Figure 4C and Supplementary Figure 3). Accordingly, TAPI-0 caused a significant increase in the number of CD107a + CD16 + NK cells, indicating that degranulating NK cells retained surface CD16 (Figures 4D,E). In addition, TAPI-0 increased the secretion of IFNγ across all conditions, although due to donor variability this difference was not statistically significant except for anti-CD20-AF opsonized target cells, where the presence of TAPI-0 inhibitor increased IFNγ secretion from 161 ± 199 to 524 ± 378 pg/mL (Figure 4F and Supplementary Figure 3). In contrast, TAPI-0 did not have any significant effect on the secretion of TNFα (Figure 4G). Thus, inhibition of CD16 shedding with TAPI-0 sustained expression of CD16 and boosted IFNγ secretion especially in the presence of 221 target cells opsonized with anti-CD20-AF.
To investigate whether inhibition of CD16 shedding impacts NK cell-mediated lysis, TAPI-0 inhibitor was added to NK cells co-incubated with target cells (E:T = 1:3) for 2 h. In the presence of non-opsonized Daudi and 221 cells, specific lysis was slightly increased by TAPI-0 for all donors (221: 23.0 ± 10.4% vs. 28.0 ± 8.6%; Daudi: 21.2 ± 15.4% vs. 23.7 ± 15.5%). TAPI-0 had no effect on specific lysis of anti-CD20-opsonized 221 target cells, but caused a slight but significant reduction in lysis of Daudi cells (Figures 4H-J). As before, there was a significant increase in specific lysis of both target cell lines when opsonized with anti-CD20-AF, compared with anti-CD20. However, this difference was eliminated in the presence of TAPI-0, suggesting that CD16 cleavage plays a role in the increased cytotoxicity observed with the afucosylated mAb. These changes in cytotoxicity are small, but measuring the effects of serial killing in bulk assays is challenging, so we next assessed the impact of TAPI-0 on the killing of anti-CD20-AF-opsonized targets using time-lapse microscopy.
Shedding of CD16 has been shown to be one factor which can control the disassembly of immune synapses (20), and we next investigated the effect of TAPI-0 on the dynamics of interactions between NK cells and opsonized 221 target cells. As previously, NK cells and 221 target cells were stained with differently colored Calcein dyes, co-incubated in fibronectin-coated microwells, and imaged by time-lapse microscopy (Figures 5A,B). Cells were either left alone or treated during acquisition with 5 µM TAPI-0 inhibitor to block CD16 shedding. TAPI-0 dramatically reduced the percentage of NK cells which detached following both the first (63.4 ± 10% without TAPI-0 and 38.8 ± 12.8% with TAPI-0) and second contacts (60.6 ± 17.2% without TAPI-0 and 26.9 ± 15% with TAPI-0; Figure 5C) that resulted in kills of 221 cells opsonized with anti-CD20-AF. It is possible that sustained interaction with target cells in the presence of TAPI-0 is directly responsible for increasing IFNγ secretion, although this is hard to test directly. An example of TAPI-0 preventing an NK cell from detaching is shown in Supplementary Video 10.
In the absence of TAPI-0, NK cells normally detach, as shown in Supplementary Video 11. Of the contacts which resulted in kills and detachment, the presence of TAPI-0 significantly impacted the time for the first kill but not the time for the second kill (Figure 5D), the time taken to detach after killing (Figure 5E), or the total time of contact (Figure 5F). It is important to emphasize that the data shown in Figures 5E,F represent only interactions where cells detached. As shown in Figure 5C, most cells remained attached in the presence of TAPI-0, and therefore the overall times for cells to detach after killing and the total contact times are understated by the values shown. Accordingly, the number of NK cells able to kill three or more anti-CD20-AF-opsonized target cells (18.9 ± 9.1%) was dramatically reduced in the presence of TAPI-0 (5 ± 3.5%; Figure 5G and Supplementary Video 12). Thus, inhibition of CD16 shedding by TAPI-0 causes less efficient NK cell detachment and inhibits serial killing of target cells opsonized with anti-CD20-AF.
DISCUSSION
ADCC by NK cells is a significant contributor to both clinical and in vivo anti-tumor activity of anti-CD20 mAbs (47). The Fc portion of mAbs is responsible for ligation to the activating Fc receptor CD16 on NK cells, and therefore dictates their ADCC response. Here, we investigated how afucosylation of an anti-CD20 mAb induced potent NK cell activation resulting in more efficient target cell killing and IFNγ secretion. Critically, we observed an increase in the serial killing of multiple NK cell susceptible targets with an afucosylated mAb, anti-CD20-AF. Anti-CD20-AF triggered strong shedding of surface CD16, increasing the fraction of NK cells able to detach effectively from target cells and engage in sequential killing.
Clinically, humanized IgG1 mAbs that have moderate affinity with CD16 are predominantly used (48). However, by Fc engineering of mAbs (49,50), many variants of anti-CD20 mAbs have been produced and tested (51). Here we compared an afucosylated version of anti-CD20 with its native form. We found that afucosylation led to more efficient elimination of opsonized B cells compared to the native anti-CD20 mAb. This is consistent with previous work showing that afucosylated Obinutuzumab increased specific lysis of opsonized neoplastic B cells (36). Stimulation with anti-CD20-AF also induced greater NK cell degranulation, confirming a previous study where afucosylated mAb opsonized Raji B cells induced strong degranulation (52). This increased degranulation has been shown to be a result of increased downstream phosphorylation (34). Moreover, our results agree with findings that showed afucosylated Obinutuzumab induced superior IFNγ secretion (52). In contrast, anti-CD20-AF did not induce more TNFα secretion compared to the native antibody. Overall, this implies that an afucosylated version of anti-CD20 mAbs can be beneficial clinically.
It has been established that surface CD16 is downregulated on activated NK cells. CD16 is shed after cleavage by ADAM17 (17,18) or MMP25 (16). Indeed, here, anti-CD20-AF triggered greater reduction of CD16 surface expression on activated NK cells compared to the native anti-CD20 mAb. Shedding of CD16 has been suggested to function as a regulatory mechanism of activated NK cells (53). However, we recently described that CD16 shedding can boost serial killing (20) and in this study, we found that effective serial killing still occurred alongside significant downregulation of CD16 expression. Previously, Srpan et al. found that even a 60% downregulation of surface CD16 did not prevent NK cells secreting cytotoxic perforin in response to further CD16 stimulations (20). Here, we observe even greater shedding of CD16 following NK cell stimulation with high-affinity anti-CD20-AF but still see effective serial killing. Whether or not further reduction in CD16 expression during subsequent interactions can be detrimental to prolonged serial killing is unclear. Altogether, it seems likely that effective long-term serial killing would involve stimulation through more than one NK cell activating receptor.
Inhibition of ADAM17 with TAPI-0 could not completely restore surface CD16 on activated NK cells in the presence of anti-CD20-AF opsonized target cells. This higher affinity mAb may trigger loss of surface CD16 through an additional mechanism alongside shedding, which would not be prevented by TAPI-0, such as endocytosis. Indeed, strong stimulation with an anti-CD16 mAb (clone; 3G8) has been shown to lead to CD16 endocytosis (18) and co-incubation with Raji cells opsonized with the afucosylated mAb, Obinutuzumab, resulted in targeted lysosomal degradation of CD16-associated signaling elements (52). Therefore, higher affinity anti-CD20-AF may trigger even greater reductions of CD16 than the native antibody by employing additional mechanisms to reduce CD16 surface expression and enhance NK cell killing kinetics.
NK cells are well-established as being able to eliminate multiple targets sequentially. Here, we found that anti-CD20-AF facilitated faster killing and shorter contacts with opsonized targets as well as efficient serial killing. Previously, an anti-CD33 leukemic mAb was Fc engineered to increase affinity to CD16 using a triple mutation Ser293Asp/Ala330Leu/Ile332Glu that resulted in improved killing speed and boosted serial killing (54). Another study used a microscopy-based cytotoxicity assay to observe that the afucosylated mAb Obinutuzumab, increased the number of NK cell serial killing events of a CD20-expressing cell line (WIL-2S) alongside a decreased target cell lysis time (34). When we used a more physiologically relevant 3D model system, anti-CD20-AF caused a significant increase in multiple serial kills compared to the native anti-CD20 mAb. Collectively, these data support that Fc afucosylation boosted the ability of NK cells to rapidly engage and kill several opsonized target cells sequentially.
Serial killing of multiple target cells partially relies on efficient detachment (20). Inhibition of CD16 shedding with TAPI-0 had a dramatic effect on the ability of NK cells to detach from anti-CD20-AF-opsonized target cells. In turn, efficient detachment supported more efficient serial killing. Preventing CD16 shedding by ADAM17 inhibition with either an anti-ADAM17 mAb (MEDI3622) or a specific inhibitor (BMS566394) also elevated the amount of secreted IFNγ, which could lead to excessive immune activation (17,55). CD16 shedding may, in some cases, not be essential for NK cell detachment and serial killing of targets. Indeed, Zhu et al. demonstrated that noncleavable CD16 did not affect NK cell detachment from adherent epithelial cells in vitro (56).
Anecdotal observations of cellular interactions in the presence of afucosylated anti-CD20 mAb raised several interesting questions. For example, one question is whether or not a newly available target cell in close proximity augments NK cell detachment and serial engagement (Supplementary Videos 1, 6-9, 12) (12), perhaps by triggering the relocation of activating receptors or by generating forces to move away. In addition, some NK cells quickly kill the first target cell and then more slowly kill a second target cell, which raises the question as to whether or not the mechanism of killing can vary from one interaction to another. This would be consistent with previous work showing that an initial kill may occur through granzyme B release, while further lytic interactions may happen via receptor-mediated killing (57).
In summary, the anti-CD20-AF mAb increased the ability of NK cells to lyse opsonized targets in comparison to anti-CD20. Crucially, anti-CD20-AF induced strong CD16 downregulation, which allowed NK cells to detach more efficiently following a kill and thus altered the dynamics of interactions to enhance the sequential engagement of target cells.
Human Primary NK Cells
Primary human NK cells were isolated from the peripheral blood of healthy donors from the National Blood Service under ethics license REC 05/Q0401/108 (University of Manchester, Manchester, UK). In brief, peripheral blood mononuclear cells (PBMCs) were purified by density gradient centrifugation using Ficoll-Paque (GE Healthcare, Life Sciences). Primary human NK cells were subsequently isolated from PBMCs by negative magnetic selection, using the human NK cell isolation kit (Miltenyi Biotec). Isolated cells were cultured at 37°C and 5% CO2 at 10^6 cells/mL in clone media [Dulbecco's Modified Eagle Medium (DMEM) containing 30% Ham's F12 nutrient mixture, 10% human serum, 1% non-essential amino acids, 1 mM sodium pyruvate (all Sigma-Aldrich), 2 mM L-glutamine, 50 U/mL penicillin/streptomycin, and 50 µM 2-mercaptoethanol (all Gibco)]. Primary NK cells were also stimulated with 200 U/mL IL-2 (Roche) and rested for 6 days prior to experiments. NK cells rested in low-dose IL-2 had comparable CD16 expression to freshly isolated NK cells (Supplementary Figure 4).
Cell Line Culture
Daudi (Burkitt's lymphoma) and 721.221 (B lymphoblastoid) cell lines were used as NK cell-susceptible targets and were purchased from ATCC. Cells were cultured at 37°C and 5% CO2 in RPMI 1640 medium supplemented with 10% FBS, 1% non-essential amino acids, 1 mM sodium pyruvate (1%), 2 mM L-glutamine (1%), and 50 U/mL penicillin/streptomycin (1%). All cell lines were routinely tested for mycoplasma infection using a PCR-based kit (PromoCell) that utilizes specific primers designed from DNA sequences coding for highly conserved ribosomal RNAs (16S rRNA). All cells tested were negative.
Monoclonal Antibody Production and Target Opsonization
Human IgG1 anti-CD20 monoclonal antibodies were generated based on publicly available sequences (DrugBank Accession: DB00073). Anti-CD20 was produced using HEK293-6E cells and anti-CD20-AF was produced with POTELLIGENT Technology licensed from BioWa, Inc. This technology reduces the amount of fucose in the carbohydrate structure of an antibody by using a proprietary fucosyltransferase-knockout CHO cell line as the production cell; evidence shows that this technology can enhance the ADCC activity of an antibody in vitro (58,59). Opsonization was performed by incubating target cells with mAbs for 45 min at 37°C. To assess target cell opsonization, cells were washed with 1% FBS/PBS, fixed with 4% PFA at room temperature for 10 min, and blocked with 2% human serum/PBS (Sigma-Aldrich) for 10 min at 4°C. Following target cell opsonization, mAbs were labeled with Alexa Fluor 488-conjugated goat anti-human IgG F(ab')2 fragment-specific antibody (Jackson ImmunoResearch), or a second anti-CD20 mAb was added (anti-CD20 PE; clone 2H7, BioLegend) for 20 min at 4°C. Unless indicated otherwise, all isotype mAbs were used at 10 µg/mL based on experiments that defined the saturating concentration for mAb target opsonization.
Cytokine Detection by Enzyme-Linked Immunosorbent Assay
Primary NK cells were co-incubated with opsonized target cells, and TAPI-0 (5 µM) where indicated, in polystyrene flat-bottom 48-well plates (Nunc) for 24 h at 37°C. Where indicated, TAPI-0 in DMSO (stock solution concentration 10 mg/mL) was added such that the final concentration of the inhibitor was 5 µM. Cell supernatants were collected and centrifuged at 500 × g for 5 min at 4°C to remove cell debris. Secretion of IFNγ and TNFα was quantified in the supernatants by ELISA (DuoSet, R&D Systems) according to the manufacturer's instructions.
Live Time-Lapse Imaging in Microwells or in 3D Matrigel
Sterile microwells were coated with 10 µg/mL fibronectin (Sigma). NK cells were stained with 1 µM Calcein Red-Orange (Invitrogen) and opsonized targets were stained with 1 µM Calcein Green (Invitrogen). To stain with calcein dyes, cells were first washed three times with 10 mL RPMI and then incubated with the dye solution (1 µM) in RPMI for 15 min at 37°C. Labeled NK cells were then added to opsonized target cells at E:T = 1:3 in clone media supplemented with 1 µL To-Pro-3 per well (Thermo Fisher) to discriminate dead cells.
Where indicated, 5 µM TAPI-0 (Santa Cruz Biotechnology) was added just before acquisition; the absence of the inhibitor was used as a control for Figure 4, while a vehicle-only (DMSO) control was used in Supplementary Figure 3. For imaging in microwells (450 × 450 × 300 µm), stained cells were mixed and immediately added. For imaging in 3D Matrigel (Corning Matrigel Matrix), cells were resuspended in a cold Matrigel solution, diluted 1:4 with clone media, and added to 8-well chamber slides (1.5 Lab-Tek II; Nunc). Time-lapse imaging was performed at 37°C and 5% CO2 for 6 or 8 h, with an image acquired every 3 min. Imaging was performed using a Leica TCS SP8 inverted confocal microscope with a 20x/0.75 NA objective (SP8, Leica Biosystems) using a white light laser. Fluorescent and bright-field images were merged and cell-cell interactions were analyzed manually using ImageJ. Microwells were produced in-house and used as previously described (10).
Statistical Analysis
For each dataset, a D'Agostino and Pearson omnibus test or Shapiro-Wilk normality test was used to evaluate whether the obtained values were normally distributed. The statistical significance between two groups of data with normal distribution was examined using a two-tailed Student's t-test. The statistical significance between three or more conditions was assessed by one-way ANOVA. Differences were defined as non-significant where P ≥ 0.05 and as statistically significant where *P < 0.05; **P < 0.01; ***P < 0.001; and ****P < 0.0001. All statistical analyses were performed using Prism v8.0 (GraphPad).
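For readers who want to reproduce this decision flow outside Prism, a minimal Python sketch is given below; the numerical values are invented for illustration, and the choices mirror the description above (a normality test per group, a two-tailed t-test for two groups, one-way ANOVA for three or more conditions).

import numpy as np
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    # Normality check per group: D'Agostino-Pearson needs n >= 8, otherwise fall back to Shapiro-Wilk
    for g in groups:
        p_norm = stats.normaltest(g).pvalue if len(g) >= 8 else stats.shapiro(g).pvalue
        assert p_norm > alpha, "data not normally distributed; a non-parametric test would be needed"
    if len(groups) == 2:
        # Two-tailed Student's t-test between two normally distributed groups
        return stats.ttest_ind(groups[0], groups[1]).pvalue
    # One-way ANOVA across three or more conditions
    return stats.f_oneway(*groups).pvalue

# Invented example: percentage of NK cells detaching after a first kill, three donors per condition
anti_cd20 = np.array([39.8, 47.3, 32.4])
anti_cd20_af = np.array([62.4, 71.8, 53.1])
print(compare_groups(anti_cd20, anti_cd20_af))  # P < 0.05 would be annotated as *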
DATA AVAILABILITY STATEMENT
The original contributions generated in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by REC 05/Q0401/108. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AK, PB, CR, and AA: acquisition, analysis, and interpretation of data. AK, DR, AA, and DD: wrote the manuscript. All authors: discussed, reviewed and edited the manuscript, contributed to the concept of this work, and approved the submitted version.
The Chevalley group G_{2}(2) of order 12096 and the octonionic root system of E_{7}
The octonionic root system of the exceptional Lie algebra E_8 has been constructed from the quaternionic roots of F_4 using the Cayley-Dickson doubling procedure, where the roots of E_7 correspond to the imaginary octonions. It is proven that the automorphism group of the octonionic root system of E_7 is the adjoint Chevalley group G_2(2) of order 12096. One of the four maximal subgroups of G_2(2), of order 192, preserves the quaternion subalgebra of the E_7 root system. The other three maximal subgroups, of orders 432, 192 and 336, are the automorphism groups of the root systems of the maximal Lie algebras E_6 × U(1), SU(2) × SO(12), and SU(8) respectively. The 7-dimensional manifolds built with the use of these discrete groups could be of potential interest for the compactification of M-theory in 11 dimensions.
INTRODUCTION
The Chevalley groups are the automorphism groups of the Lie algebras defined over the finite fields [1]. The group G 2 (2) is the automorphism group of the Lie algebra g 2 defined over the finite field F 2 which is one of the finite subgroups of the Lie group G 2 [2]. Here we prove that it is the automorphism group of the octonionic root system of the exceptional Lie group E 7 .
The exceptional Lie groups are fascinating symmetries arising as groups of invariants of many physical models suggested for fundamental interactions. In the sequel of grand unified theories(GUT's) after SU (5) ≈ E 4 [3], SO(10) ≈ E 5 [4] the exceptional group E 6 [5] has been suggested as the largest GUT for a single family of quarks and leptons. The 11-dimensional supergravity theory admits an invariance of the non-compact version of E 7 [E 7(−7) ] with a compact subgroup SU (8) as a global symmetry [6]. The largest exceptional group E 8 , originally proposed as a grand unified theory [7] allowing a three family interaction of E 6 , has naturally appeared in the heterotic string theory as the E 8 × E 8 gauge symmetry [8].
The infinite tower of the spin representations of SO (9) , the little group of the 11-dimensional M-theory, seems to be unified in the representations of the exceptional group F 4 [9]. Moreover, it has been recently shown that the root system of F 4 can be represented with discrete quaternions whose automorphism group is the direct product of two binary octahedral groups of order 48 × 48 = 2304 [10].
The smallest exceptional group G_2, the automorphism group of the octonion algebra, turned out to be the best candidate as a holonomy group of the 7-dimensional manifold for the compactification of M-theory [11]. For a "topological M-theory" [12] one may need a crystallographic structure in 7 dimensions. In this context the root lattices of the Lie algebras of rank 7 may play some role, such as those of SU(8), E_7 and the other rank-7 Lie algebras. SU(8) is a maximal subgroup of E_7, and it is therefore tempting to study the E_7 root lattice. Here a miracle happens! The root system of E_7 can be described by the imaginary discrete octonions [13]. The Weyl group W(E_7) is isomorphic to a finite subgroup of O(7) which is the direct product Z_2 × SO_7(2), where the latter group is the adjoint Chevalley group of order 2^9 · 3^4 · 5 · 7 [14]. However, the Weyl group W(E_7) does not preserve the octonion algebra. When one imposes the invariance of the octonion algebra on the transformations of the E_7 roots, one obtains a finite subgroup of G_2, as expected: the adjoint Chevalley group G_2(2) of order 12096 [13,15]. A G_2 holonomy group of a 7-dimensional manifold admitting the discrete symmetry G_2(2) may turn out to be useful, since E_7(−7) is related to the 11-dimensional supergravity theory.
In what follows we discuss the mathematical structure of the adjoint Chevalley group G_2(2), using the 126 non-zero octonionic roots of E_7, without referring to its matrix representation [16].
In section 2 we construct the octonionic roots of E_8 [13,17] using two sets of quaternionic roots of F_4, following the magic square structure [18], where imaginary octonions represent the roots of E_7. First we build up a maximal subgroup of G_2(2) of order 192 which preserves the quaternionic decomposition of the octonionic roots of E_7; it is a finite subgroup of SO(4). Section 3 is devoted to a discussion of the embeddings of the group of order 192 in G_2(2). In section 4 we study the maximal subgroups of G_2(2) and their relevance to the root systems of the maximal Lie algebras of E_7. Finally, in section 5, we discuss the use of our method in physical applications and elaborate on the various geometrical structures.
OCTONIONIC ROOT SYSTEM OF E8
In the reference [13] we have shown that the octonionic root system of E_8 can be constructed by doubling two sets of quaternionic roots of F_4 [10] via the Cayley-Dickson procedure. Symbolically, the E_8 roots are obtained from the pair of two F_4 root systems, where the short roots of the first copy of F_4 match with the short roots of the second copy and the long roots match with the zero roots; this pairing, equation (1), follows from the magic square given by Table 1. The quaternionic scaled roots of F_4 can be given in terms of T ⊕ T′, the sets of elements of the binary octahedral group. More explicitly, these sets are built from the quaternions (1/2)(±1 ± e_1 ± e_2 ± e_3) with an even number of (+) signs and (1/2)(±1 ± e_1 ± e_2 ± e_3) with an even number of (−) signs (V̄_+ denotes the quaternionic conjugate of V_+), where e_i (i = 1, 2, 3) are the imaginary quaternionic units.
Here T is the set of quaternionic elements of the binary tetrahedral group, which represents the root system of SO(8), and T′√2 represents the weights of the three 8-dimensional representations of SO(8); equivalently, T and T′√2 represent the long and short roots of F_4 respectively. The geometrical meaning of these vectors is also interesting [19]. Each of the sets V_0, V_+, V_− represents a hyperoctahedron in 4-dimensional Euclidean space. The set T is also known as the polytope {3, 4, 3}, called the 24-cell [20]. Its dual polytope is T′, where the V_i (i = 1, 2, 3) are the duals of the octahedra in T. Any two of the sets V_0, V_+, V_− form a hypercube in 4 dimensions. Using the Cayley-Dickson doubling procedure one can construct the octonionic roots of E_8 in (6), where e_1, e_2 and e_7 are the basic imaginary units used to construct the other units of the octonions: 1, e_1, e_2, e_3 = e_1 e_2, e_4 = e_7 e_1, e_5 = e_7 e_2, e_6 = e_7 e_3. They satisfy the algebra e_i e_j = −δ_ij + φ_ijk e_k, where φ_ijk is totally antisymmetric under the interchange of the indices i, j, k and takes the value +1 for the index triples 123, 246, 435, 367, 651, 572, 741 [21]. The set of E_8 roots in (6) can also be written compactly as sets of octonions: ±e_i (i = 1, 2, ..., 7) together with the short roots. From the magic square one can also write the roots of E_7 in the form (SP(3), F_4), consisting of only imaginary octonions, which can further be put in the form (10). The roots in (10) also follow from a Coxeter-Dynkin diagram of E_8 whose simple roots, represented by octonions, are depicted in Figure 1. As we stated in the introduction, the automorphism group of the octonionic root system of E_7 is the adjoint Chevalley group G_2(2), a maximal subgroup of the Chevalley group SO_7(2). Below we give a proof of this assertion and show how one can construct the explicit elements of G_2(2) without any reference to a computer calculation of the matrix representation. We start with a theorem [22] which states that the automorphisms of the octonions that take the quaternion subalgebra H to itself form a group of elements [p, q], isomorphic to SO(4) ≈ (SU(2) × SU(2))/Z_2, where p and q are unit quaternions.
In a different work [23] it was shown that [p, q] represents an SO(4) transformation preserving the norm r r̄ = r̄ r of the quaternion r. More explicitly, it has been shown in [22] that the group element [p, q] acts on the Cayley-Dickson double of a quaternion as in (12). We now use this theorem to prove that the transformations on the root system of E_7 in (10) preserving the quaternion subalgebra form a finite subgroup of SO(4) of order 192. In reference [10] we have shown that the maximal finite subgroup of SO(3) which preserves the set of quaternions V′_0 = {±e_1, ±e_2, ±e_3}, representing the long roots of SP(3) as well as the vertices of an octahedron, is the octahedral group, written in the form [t, t] ⊕ [t′, t′] with t ∈ T and t′ ∈ T′. On the other hand, e_7 T is left invariant under the transformations [p, q] ⊕ [p′, q′], (p, q ∈ T; p′, q′ ∈ T′). Therefore the largest group preserving the structure (V′_0, 0) = V′_0, (0, T) = e_7 T is a finite subgroup of SO(4) of order 576. We will see that we actually need a subgroup of this group, because it should also preserve the set of roots as well as keep the form of (12) invariant. A multiplication table for the elements of the binary octahedral group, shown in Table 2 [19], will be useful to follow the further discussion. Equation (12) states that the transformation p H p̄ fixes the scalar part of the quaternion H. Therefore the transformation in (12) acting on the root system of E_8 in (6) will yield the same result as when (12) acts on the roots of E_7 in (10). We now check the transformation (12) acting on the roots in (13) and seek the form of [p, q] which preserves (13); more explicitly, we look for the invariance conditions in (14). Checking all pairs, one sees that only the set of elements in (14) survives. Just to see why [V_+, V_+], for example, does not work, let us apply it to one of the sets of roots in (13); using Table 2 we obtain elements which do not belong to the set of roots of E_7. Similar considerations eliminate all but one of the subsets of elements in [T, T]; the surviving subset permutes the three sets of roots of E_7 in (13). Now we study the action of [T′, T′] on the roots in (13). We can easily prove that the set of elements [V_1, V_1] does the job, and we can check easily that the sets of elements [V_2, V_1] and [V_3, V_1] also satisfy the requirements. Note that any one of these sets of elements, while preserving one set of roots in (13), exchanges the other two.
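As a reminder of how such pairs act (this is the standard quaternionic convention, assumed here for orientation; the source's own equation (12) may place the conjugations differently):

[p, q] : r → p r q̄,  r ∈ H,  with  N(p r q̄) = N(p) N(r) N(q) = N(r)  for unit quaternions p, q,

which realizes SO(4) ≈ (SU(2) × SU(2))/Z_2, while the diagonal elements [p, p] act by conjugation, H → p H p̄, fixing the scalar part of H and generating the SO(3) rotations of the imaginary quaternions.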
We conclude that the subset of elements of the group [p, q] ⊕ [p′, q′], (p, q ∈ T; p′, q′ ∈ T′), which preserves the root system of E_7 is the group of order 192 described above. It is also interesting to note that the group [T, T] ⊕ [T′, T′] has another maximal subgroup of order 192, with 13 conjugacy classes, whose elements can be written as in (16). This group does not preserve the root system of E_7; however, it preserves the quaternion algebra in the set of imaginary octonions ±e_i (i = 1, 2, ..., 7). This is also an interesting group, which turns out to be maximal in another finite subgroup of G_2(2), of order 1344 [24]. The group in (16) can also be written as the semi-direct product of [V_0, V_0] and S_3; however, the two groups are not isomorphic, because the symmetric group S_3 is generated differently here than in the previous case.
EMBEDDINGS OF THE QUATERNION PRESERVING GROUP IN THE CHEVALLEY GROUP
We go back to equation (6) and note that the binary tetrahedral group T = V_0 + V_+ + V_− played an important role in the above analysis, for it represents the root system of SO(8). Any one of the quaternionic elements of the hypercube V_+ + V_− = (1/2){±1 ± e_1 ± e_2 ± e_3} satisfies the relation p^3 = ±1. Actually we have 112 octonionic elements of this type among the roots of E_8.
We have proven in an earlier paper [23,28] that the transformation b → a b ā (17), where a^3 = ±1, is an associative product of octonions which preserves the octonion algebra. More explicitly, when the e_i (i = 1, 2, ..., 7) represent the imaginary octonions, the transformation preserves the octonion algebra given above. Working with octonionic root systems is made difficult by non-associativity; however, the following theorem [25] proves to be useful. Let p be any one of those 112 roots and let q be any root of E_8, and consider the transformations of q constructed from p. It was proven in [25] that q_1, q_2, q_3 form an associative triad, (q_1 q_2)q_3 = q_1(q_2 q_3), satisfying certain relations; for q_i · p = 0 there are 42 such triads. Actually this decomposition of the E_8 roots is the same as its branching under SU(2) × E_7, where the non-zero roots decompose as 240 = 126 + 2 + (2, 56). The first 42 triads are the 126 non-zero roots of E_7 and ±p are those of SU(2). The remaining 36 × 3 = 108 roots, together with ±1 and ±p, constitute the 112 roots of the coset space. In general one can show that 24 triads, out of the 42 triads, corresponding to the roots of E_6 are imaginary octonions, and the remaining 18 triads are those with non-zero scalar parts. The 9 triads of those octonionic roots which satisfy the relation q_i · p = −1/2 are imaginary octonions, and their negatives satisfy the relation q_i · p = 1/2. When ±1 represent the roots of SU(2), then all the roots of E_7 are pure imaginary, as depicted in Figure 1. For a given octonion p with non-zero real part one can classify the imaginary roots of E_7 as follows: (i) 72 imaginary octonions which are grouped into 24 triads satisfying the relation q_i · p = 0; (ii) 27 imaginary roots, classified into 9 associative triads whose products satisfy the relation q_1 q_2 q_3 = −1 (among them the quaternionic units); they represent the weights of the 27-dimensional representation of E_6.
(iii) The remaining 9 triads are the conjugates of those in (ii) and represent the weights of the representation 27* of E_6.
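The triad construction used above can be sketched as follows; this is an assumption consistent with the counting in the text rather than a reproduction of the original equations. Since p^3 = ±1, the map b → p b p̄ is an order-3 automorphism of the octonion algebra (it is of the form b → a b ā with a^3 = ±1), so each root q generates a three-element orbit

q_1 = p q p̄,  q_2 = p q_1 p̄,  q_3 = p q_2 p̄,  with  p q_3 p̄ = q_1,

and the 126 roots of E_7 then organize into 126/3 = 42 such triads, with ±p supplying the roots of SU(2).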
In the next section we will prove that the root system of E 8 in (6) and equivalently those of E 7 in (10) can be constructed 63 different ways preserving the octonion algebra so that the automorphism group of the octonionic root system of E 7 is the group G 2 (2) of order 192 × 63 = 12096.
We recall that we have 18 associative triads with non-zero scalar part, each being orthogonal to p̄. To distinguish them from the imaginary octonions, for which we keep the notation q_i, we denote the roots with non-zero scalar part by r_i, satisfying the relation r_i · p = 0, where r_i^3 = ±1 (i = 1, 2, 3). They are permuted cyclically as described below. The scalar product r_i · p = 0 can be written as in (20), and we can use (20) to show that r_1 r_2 = r_2 r_3 = r_3 r_1 = p, with conjugates r̄_2 r̄_1 = r̄_3 r̄_2 = r̄_1 r̄_3 = p̄. One can easily show that the octonions r_1, r_2 and r_3 are mutually orthogonal to each other: r_1 · r_2 = r_2 · r_3 = r_3 · r_1 = 0, that is, r_1 r̄_2 + r_2 r̄_1 = r_2 r̄_3 + r_3 r̄_2 = r_3 r̄_1 + r_1 r̄_3 = 0, which also implies that r_1 r̄_2, r_2 r̄_3, r_3 r̄_1 are imaginary octonions.
The orthogonality of r_1, r_2 and r_3 can be proven as follows. Consider the scalar product r_1 · r_2 and assume, without loss of generality, that p̄ = 1 − p and r̄_1 = 1 − r_1. Substituting p̄ = 1 − p and r̄_1 = 1 − r_1 in (23) and using (21) as well as the Moufang identity [22] (pq)(rp) = p(qr)p (24), one can show that r_1 · r_2 = 0. Similar considerations for the other octonions prove that the four octonions r_1, r_2, r_3 and p̄ are mutually orthogonal to each other, so that ±r_1, ±r_2, ±r_3 and ±p̄ form the vertices of a hyperoctahedron. Similarly their conjugates, forming an orthogonal quartet with their negatives, represent the vertices of another hyperoctahedron. The imaginary octonions r_1 r̄_2, r_2 r̄_3, r_3 r̄_1 are rotated cyclically into each other in the manner p(r_1 r̄_2)p̄ = r_2 r̄_3 (and cyclic permutations of 1, 2, 3), and they satisfy the relation (r_1 r̄_2) · p = 1/2, while the conjugate r_2 r̄_1 satisfies the relation (r_2 r̄_1) · p = −1/2. We denote these imaginary octonions by E_1 = r_3 r̄_2, E_2 = r_1 r̄_3 and E_3 = r_2 r̄_1. It is then easy to prove a set of identities involving p̄ and the E_i, from which it follows that the resulting set of 24 octonions are the quaternions forming the binary tetrahedral group and representing the roots of SO(8). Once this set of octonions is given we can construct the root system of F_4 and form the roots of E_8 in a manner similar to equation (5). It is obvious that for a given p (p̄) one can construct the elements of the binary tetrahedral group, in other words the SO(8) root system, in 9 different ways, as we have argued in the previous section. Since we have 112 roots of this type, and a choice of p always includes ±p and ±p̄, such a choice reduces to 112/4 = 28. This number further reduces to 28/4 = 7 because p̄, r_1, r_2, r_3 always come in quartets. It is not only p (p̄) that rotates r_1, r_2, r_3 in cyclic order; any one of them rotates the other three cyclically. One can show, for example, that r_1 p̄ r̄_1 = r_2, r_1 r_2 r̄_1 = r_3, and r_1 r_3 r̄_1 = p̄.
The others satisfy similar relations. Therefore the number of ways of choosing the elements of a binary tetrahedral group, or equivalently an F_4 root system, out of the octonions is 9 × 7 = 63. Since the group preserving the quaternion structure is of order 192, the overall group which preserves the octonionic root system of E_7 is a group of order 192 × 63 = 12096. It has to be a subgroup of G_2, and the group is certainly the Chevalley group G_2(2) [2].
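As a quick arithmetic check of the order obtained above:

|Aut(octonionic E_7 roots)| = 63 × 192 = 12096 = 2^6 · 3^3 · 7 = |G_2(2)|.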
MAXIMAL SUBGROUPS OF G2(2) AND THE MAXIMAL LIE ALGEBRAS OF E7
There are four regular maximal Lie algebras of E_7: E_6 × U(1), SU(2) × SO(12), SU(8) and SU(3) × SU(6); and there are four maximal subgroups of the Chevalley group G_2(2). It is interesting to see whether any relations exist between these groups and the octonionic root systems of these Lie algebras (see M. Koca and F. Karsch in reference [2]). There is a one-to-one correspondence between them, but with one exception. When one imposes the invariance of the octonion algebra on the root system of SU(3) × SU(6), one obtains a group which is not maximal in the Chevalley group G_2(2). Yet the maximal subgroup [T, V_0] ⊕ [T′, V_1] of order 192 (17) preserves the quaternion algebra of the magic square structure (SP(3), F_4). The other maximal subgroups of G_2(2), which are of orders 432 (14), 192 (14) and 336 (9), have one-to-one correspondences with the groups which preserve the octonionic root systems of E_6 × U(1), SU(2) × SO(12) and SU(8) respectively. In this section we will discuss the constructions of these three maximal subgroups of G_2(2) as the automorphism groups of the corresponding octonionic root systems. Their character tables and subgroup structures can be found in reference [27].
Octonionic root system of E_6 × U(1) and the group of order 432 (14)
(The number in brackets is the number of conjugacy classes and is used to distinguish groups having the same order.)
Since the U(1) factor is represented by the zero root, we are essentially looking at the roots of E_6 inside E_7. Either using the simple roots of E_8 in Figure 1 or those roots of E_7 already given in equation (8), we may decompose the roots of E_7 into those roots orthogonal to the vector (1/2)(1 − e_1 − e_2 − e_3), which constitute the 72 roots of E_6, and the ones having a scalar product ±1/2 with it, which will be the weights of the representations 27 + 27*. In explicit form they read:
Non-zero roots of E_6: ±e_4, ±e_5, ±e_6, (1/2)(±e_4 ± e_5 ± e_6 ± e_7), ±(1/2)(e_2 − e_3 ± e_4 ± e_7), ±(1/2)(e_2 − e_3 ± e_5 ± e_6),
Weights of 27 + 27* of E_6: ±(1/2)(e_2 + e_3 ± e_5 ± e_6), ±(1/2)(e_1 + e_2 ± e_5 ± e_7), ±(1/2)(e_1 + e_2 ± e_4 ± e_6). Now we are in a position to determine the subgroup of the group of order 192 (17) which preserves this decomposition. The magic square indicates that the root system of E_6 can be obtained by the Cayley-Dickson procedure as the pair (SU(3), F_4), which is clear from (30), where the roots of SU(3) are represented by the short roots ±(1/2)(e_2 − e_3), ±(1/2)(e_3 − e_1), ±(1/2)(e_1 − e_2). It can be shown that the subgroup of the group of order 192 (17) preserving this system of roots, with the imaginary unit e_7 left invariant, is a group generated by elements involving t, where t is given by t = (1/2)(1 + e_1 + e_2 + e_3). More explicitly, the elements of the group of interest can be written as sets of 8 elements each, and the group is of order 48. We recall that in the decomposition of the root system of E_7 in (30) and (31) under E_6, the quaternions ±t (±t̄), and thereby the quaternionic imaginary units e_1, e_2, e_3, are used. This implies that the sum (1/√3)(e_1 + e_2 + e_3) is left invariant under the transformations t q t̄, where q is any octonion. This proves that the group of concern is a finite subgroup of SU(3) acting in the 6-dimensional Euclidean subspace. The discussion through the relations (17)-(20) shows that one can construct the root system of E_6 in (30), and consequently the weights in (31), in 9 different ways, implying that the group preserving the root system of E_6 in (30) is a finite subgroup of SU(3) of order 48 × 9 = 432, with 14 conjugacy classes. The 6 × 6 irreducible matrix representation of this group as well as its character table can be found in reference [27].
The octonionic root system of SU(2) × SO(12) and the group of order 192 (14)
Existence of an automorphism group of order 192 is obvious, since the SU(2) roots are some imaginary octonion ±q which must be left invariant under any transformation. Since we have 126/2 = 63 choices for the SU(2) roots, the group of invariance has order 12096/63 = 192. The structure of this group is totally different from that of the previous group of order 192 (17), as we will discuss below.
The remaining roots transform as the weights of the representation (2, 32 ′ ) under SU (2) × SO (12). Since the root ±e 7 remains invariant under any transformation which preserves the decomposition of E 7 under SU (2) × SO(12) the group which we seek is a finite subgroup of SU (3). We recall from the previous discussions that the quaternionic root system of SP (3) which is also a subgroup of the octahedral group. Since we have V 0 and V 1 on the right of the pairs it should be Actually we can write all the group elements explicitly, (±1 ± e 1 ± e 2 ± e 3 ), 1 2 (±1 ± e 1 ± e 5 ± e 6 ), 1 2 (±1 ± e 2 ± e 4 ± e 5 ), 1 2 (±1 ± e 3 ± e 4 ± e 6 ).
One can prove that each set of 16 octonions in (38) will yield to 3 sets of quaternionic imaginary units not involving e 7 . Therefore there are 12 different quaternionic units to build the group structure Z 2 x(Z 4 : Z 2 ) and the number of overall elements of the group preserving the root system in (35) is 12 × 16 = 192. To give a nontrivial example let us choose p = 1 2 (1 + e 2 + e 4 + e 5 ) withp = 1 2 (1 − e 2 − e 4 − e 5 ). The following set of octonions chosen from(35) 1 2 (±e 1 + e 2 + e 4 ± e 6 ), 1 2 (±e 3 + e 2 + e 5 ± e 6 ), have scalar products q i .p = 0 where q i is one of those in (39). Under the rotation pq ip , for example, the quaternionic units are permuted and one can construct (35) with the set of octonions SO(12) roots: SU (2) roots: ±e 7 This is certainly invariant under the quaternion preserving automorphism group of order 16 as discussed above where the imaginary quaternionic units e 1 , e 2 , e 3 in (36-37 ) are replaced by E 1 , E 2 , E 3 in (40). One can proceed in the same manner and construct 12 different sets of quaternionic units by which one constructs the group Z 2 ×(Z 4 : Z 2 ).
First of all, we note that the roots of E 7 decompose under its maximal Lie algebra SU (8) as 126 = 56 + 70. Therefore those roots of E 7 in (8) not displayed in (42) are the weights of the 70 dimensional representation of SU (8).
To determine the automorphism group of the set in (42) we may follow the same method discussed above; however, here we choose a different way, since SU(8) does not appear in the magic square.
In an earlier paper [16] we constructed the 7-dimensional irreducible representation of the group PSL_2(7) : Z_2 of order 336 and proved that this group preserves the octonionic root system of SU(8). Below we give three matrix generators of Klein's group PSL_2(7), a simple group with 6 conjugacy classes, together with the relation they satisfy; the matrices A and B generate the octahedral subgroup of order 24 of Klein's group.
Any one of these subgroups, or the whole group G_2(2), could be used to construct the manifolds which may be useful for the compactification of a theory in 11 dimensions.
Diagnosis and Treatment of Patients With Acute Myeloid Leukemia With Myelodysplasia-Related Changes (AML-MRC)
Abstract
Objectives: Acute myeloid leukemia (AML) with myelodysplasia-related changes (AML-MRC) represents a high-risk and somewhat diverse subtype of AML, and substantial confusion exists about the pathologic evaluation needed for diagnosis, which can include the patient’s clinical history, cytogenetic analysis, mutational analysis, and/or morphologic evaluation. Treatment decisions based on incomplete or untimely pathology reports may result in the suboptimal treatment of patients with AML-MRC.
Methods: Using a PubMed search, diagnosis of and treatment options for AML-MRC were investigated.
Results: This article reviews the current diagnostic criteria for AML-MRC, provides guidance on assessments necessary for an AML-MRC diagnosis, summarizes clinical and prognostic features of AML-MRC, and discusses potential therapies for patients with AML-MRC. In addition to conventional chemotherapy, treatment options include CPX-351, a liposomal encapsulation of daunorubicin/cytarabine approved for treatment of adults with AML-MRC; targeted agents for patients with certain mutations/disease characteristics; and lower-intensity therapies for less fit patients.
Conclusions: Given the evolving and complex treatment landscape and the high-risk nature of the AML-MRC population, a clear understanding of the pathology information necessary for AML-MRC diagnosis has become increasingly important to help guide treatment decisions and thereby improve patient outcomes.
The classification of acute myeloid leukemia with myelodysplasia-related changes (AML-MRC) in general includes patients with acute myeloid leukemia (AML) that develops after myelodysplastic syndromes (MDSs) or MDS/myeloproliferative neoplasms (MPNs), AML with multilineage dysplasia, and de novo AML with certain MDS-related cytogenetic abnormalities. 1 The classification of AML-MRC overlaps somewhat with the traditional term "secondary AML," which includes patients with AML that develops from an antecedent hematologic disorder (including MDS and MDS/MPN), as well as those with therapy-related AML that develops after prior cytotoxic therapy, radiotherapy, or immunosuppressive therapy 2 ; however, therapy-related AML is not included in the AML-MRC category.
It has been estimated that AML-MRC represents up to 48% of all adult AML cases. 3,4 Outcomes for patients with AML-MRC, or more generally those with secondary AML, following conventional combination chemotherapy are poor compared with many other AML subtypes, with lower remission rates and shorter overall survival.
Diagnostic Criteria for AML-MRC
The prognostic significance of dysplastic changes in the nonblast cells of patients with AML was first described by Gahn et al 9 in 1996. In 2002, the World Health Organization (WHO) introduced the category of "AML with multilineage dysplasia," which applied to patients who had 20% or more blasts in the blood or bone marrow and dysplasia in 50% or more of cells in two or more hematopoietic cell lineages. 10 In 2008, the WHO expanded the category to include patients with a history of MDS or MDS/MPN and those with certain myelodysplasia-related cytogenetic abnormalities, thus creating the AML-MRC category. 11 According to the 2016 WHO Classification (Table 1), the current AML-MRC designation applies to patients with AML who have 20% or more blasts in the blood or bone marrow and who meet any of the following criteria: a history of MDS or MDS/MPN, such as chronic myelomonocytic leukemia (CMML); an MDS-related cytogenetic abnormality; or multilineage dysplasia in 50% or more of two or more cell lineages (ie, dysgranulopoiesis, dyserythropoiesis, or dysmegakaryopoiesis; Image 1) in the absence of NPM1 or biallelic CEBPA mutations (if the diagnosis is based on multilineage dysplasia alone). 1 AML-MRC includes a variety of cytogenetic abnormalities, including complex karyotypes (defined as three or more unrelated abnormalities, not including core binding factor rearrangements and the PML/RARA rearrangement) and other specified unbalanced and balanced abnormalities 1 (Table 1). Dysgranulopoiesis includes hypogranularity, nuclear hyposegmentation of granulocytes (ie, Pelger-Huët-like anomaly), abnormal granularity (pseudo-Chédiak-Higashi granules), and abnormally segmented nuclei. 3,4 Dyserythropoiesis includes megaloblastosis, nuclear budding, irregular nuclear contours, nuclear fragmentation, multinucleation, karyorrhexis, nuclear bridging, ring sideroblasts, and cytoplasmic vacuolization. 3,4 Dysmegakaryopoiesis includes small size, nuclear hypolobation, nuclear hypersegmentation, and separated nuclear lobes. 3,4 The AML-MRC diagnosis excludes cases of therapy-related AML and AML with cytogenetic abnormalities qualifying for a diagnosis of AML with recurrent genetic abnormalities, such as t(8;21), inv(3), and t(6;9), the latter two of which may have multilineage dysplasia. 1,12 Although not part of the disease definition, various gene mutations are more commonly associated with AML-MRC, including mutations of ASXL1, TP53, and U2AF1, and may have prognostic significance within this group. [13][14][15] The patient's clinical history, cytogenetic analysis, mutational analysis, and morphologic evaluation are all important for the diagnosis and prognosis of AML-MRC, as well as for informing treatment decisions. Since there are now newer initial treatment options for this subset of patients, it is critical for the pathologist to offer the diagnosis of AML-MRC as soon as possible, which may require amending reports after receipt of cytogenetic and molecular genetic results. Patients with a known history of MDS or MDS/MPN are the easiest to diagnose, as they can be diagnosed based on clinical history. However, diagnosis on the basis of karyotype requires a longer period of time to complete the necessary assessments.
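The way these criteria combine can be condensed into the short sketch below; the class and field names are hypothetical and serve only to illustrate how the published criteria interact, not to act as a diagnostic tool (an actual diagnosis requires integrated review of morphology, cytogenetics, molecular results, and clinical history, as discussed in the following sections).

from dataclasses import dataclass

@dataclass
class AmlCase:
    blast_percent: float                 # blasts in blood or bone marrow (%)
    prior_cytotoxic_therapy: bool        # prior cytotoxic therapy/radiotherapy (therapy-related AML)
    recurrent_genetic_abnormality: bool  # e.g., t(8;21), inv(16), t(15;17)
    history_of_mds_or_mds_mpn: bool      # documented antecedent MDS or MDS/MPN (e.g., CMML)
    mds_related_cytogenetics: bool       # e.g., complex karyotype, -7/del(7q), -5/del(5q)
    multilineage_dysplasia: bool         # >=50% dysplastic cells in >=2 lineages
    npm1_mutated: bool
    biallelic_cebpa_mutated: bool

def meets_aml_mrc_criteria(case: AmlCase) -> bool:
    # Prerequisite: AML requires >=20% blasts
    if case.blast_percent < 20:
        return False
    # Exclusions: therapy-related AML and AML with recurrent genetic abnormalities take precedence
    if case.prior_cytotoxic_therapy or case.recurrent_genetic_abnormality:
        return False
    # Criterion 1: history of MDS or MDS/MPN
    if case.history_of_mds_or_mds_mpn:
        return True
    # Criterion 2: MDS-related cytogenetic abnormality
    if case.mds_related_cytogenetics:
        return True
    # Criterion 3: multilineage dysplasia alone counts only without NPM1 or biallelic CEBPA mutations
    if case.multilineage_dysplasia and not (case.npm1_mutated or case.biallelic_cebpa_mutated):
        return True
    return False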
Assessment of multilineage dysplasia requires a skilled hematopathologist comfortable with evaluation of dysplastic features, as well as adequate aspirate samples to judge morphologic changes and sufficient residual hematopoietic precursors to confidently comment on dysplastic features in 50% or more of the cells. If AML-MRC is diagnosed based on only multilineage dysplasia, then mutational analysis is also required to exclude patients with NPM1 and biallelic CEBPA mutations, and this information is typically not immediately available. Some, 16 but not all, 17,18 early studies suggested that multilineage dysplasia alone was not predictive of a worse prognosis in patients with intermediate-risk cytogenetics who lacked a history of MDS or MDS/MPN. However, subsequent studies have shown this is only the case in the presence of NPM1 and CEBPA mutations. AML cases with these mutations may have multilineage dysplasia, which is not prognostically significant. In the absence of such mutations, however, multilineage dysplasia remains a predictor of poor prognosis 19 and a criterion for the diagnosis of AML-MRC. 1 The pathologist must integrate all of this information into the final report as quickly as possible to allow the clinician to make timely treatment decisions. Because an accurate diagnosis of AML-MRC requires critical clinical information, as well as integration of morphology, cytogenetics, and, at times, molecular genetic studies, such a diagnosis creates reporting challenges. Timing of the analyses to accurately diagnose patients with AML-MRC also represents a significant challenge to the optimal treatment of patients. Such challenges in the reporting of AML specimens have been summarized in more detail elsewhere, [20][21][22] but all reports need to record information related to prior MDS or MDS/MPN and morphologic descriptions with quantification of dysplasia in nonblast cells, when present. Subsequent cytogenetic and molecular genetic findings must be integrated into a final report to ensure an accurate diagnosis. Ultimately, changes in the diagnostic and therapeutic pathways may be needed to provide optimal treatment of patients with AML-MRC.
Assessments Necessary for AML-MRC Diagnosis
The 2017 guidelines from the College of American Pathologists and the American Society of Hematology specify that the following assessments and testing methods should be employed to accurately identify different subtypes of AML, including AML-MRC. 20 These assessments can help to differentiate AML-MRC from other subtypes of AML (Figure 1).
Image 1 (caption): Morphologic features of multilineage dysplasia. A, Dysplastic changes are usually most prominent on the bone marrow aspirate or peripheral blood smear. In this case, erythroid precursors show irregular nuclear contours (white arrow), granulocytes demonstrate clumped nuclear chromatin without complete segmentation and hypogranular cytoplasm (black arrows), and a small, hypolobated megakaryocyte is present (white arrowhead). Some blast cells are small (black arrowheads) and may be mistaken for lymphocytes (Wright-Giemsa, ×600). B, The bone marrow biopsy specimen is hypercellular with a heterogeneous cellular composition. Dysplastic small and large megakaryocytes, with detached nuclear lobes, are easily identified on the biopsy specimen (some marked with arrows) (H&E, ×400).
A thorough patient history and relevant clinical data, including a physical examination, imaging findings, and blood laboratory values, should be obtained. 20 Patient medical history is important for identifying and excluding individuals with therapy-related AML. 3 According to the current WHO classification, patients with a history of prior cytotoxic therapy should be diagnosed as having therapy-related AML even if they also have features of AML-MRC (eg, antecedent therapy-related MDS that develops into AML). 1 Metaphase cytogenetic analysis, fluorescent in situ hybridization (FISH) testing, and/or reverse transcriptase-polymerase chain reaction (RT-PCR) should be performed to identify cytogenetic abnormalities and differentiate AML-MRC from the WHO classification category of AML with recurrent cytogenetic abnormalities. FISH analysis (as opposed to metaphase karyotype) of de novo AML may be able to more rapidly identify patients with AML-MRC based on MDS-related cytogenetic abnormalities. AML FISH panels typically include probes for −5, del(5q), −7, and del(7q), which may aid in identifying patients with AML-MRC. Furthermore, probes for other translocations could identify deletions on other chromosomes; if three or more abnormalities are detected, a diagnosis of AML-MRC could be considered. Finally, FISH panels will exclude t(8;21), inv (16), and t(15;17), which are excluded from AML-MRC regardless of the complexity of the karyotype. 1 However, it should be noted that the WHO classification is based on karyotype and not FISH findings, and the significance of an abnormal FISH result in the setting of a normal karyotype with 20 metaphases remains to be determined. A study from the University of Pennsylvania compared results of rapid FISH testing (MDS panel; turnaround time of <6 hours) with those of metaphase chromosome analysis in 31 adults thought to potentially have therapyrelated AML or AML-MRC. Fifteen (48%) patients were identified as having MDS-related cytogenetics by metaphase chromosome analysis; of these, 12 (80%) patients were also identified by rapid FISH analysis and one additional patient was known to have a history of MDS, demonstrating the feasibility of rapid FISH analysis in combination with clinical history for identifying patients with AML-MRC. 23 While one could argue that rapid FISH testing is not necessary when an adequate karyotype is available, such testing may allow patients to receive specialized therapy for AML-MRC earlier, which might justify the added expense. Furthermore, the karyotypic analysis may fail due to lack of metaphases; the FISH analysis could provide valuable clinical information in this setting as well.
A fresh bone marrow aspirate smear in conjunction with a bone marrow trephine core biopsy specimen, bone marrow trephine touch preparations, and/or marrow clots should undergo morphologic evaluation by a hematopathologist. There are a few specific situations in which bone marrow may have some of the features of AML-MRC but cannot be classified as such. For example, AML not otherwise specified (AML-NOS) categories, including acute megakaryoblastic leukemia, may have dyspoiesis in one cell line but cannot be considered AML-MRC based on the presence of unilineage dysplasia alone. Cases of pure erythroid leukemia and cases previously diagnosed as acute erythroleukemia (erythroid/ myeloid type) may have myelodysplasia-related cytogenetic abnormalities and dyspoiesis in multiple cell lines, but these cases do not have 20% or more myeloblasts. 1,3,20 Cases meeting prior criteria of the erythroid/myeloid type of erythroleukemia are now classified as MDS. 1,24 Finally, mutational analysis should minimally be performed for FLT3, NPM1, CEBPA, RUNX1, IDH1, and IDH2 based on various guidance 1,20,25,26 ; although these mutational analyses are currently insufficient for a diagnosis of AML-MRC by themselves, additional mutational analyses can be useful for estimating prognosis and informing treatment decisions. While TP53 mutations are commonly associated with a complex karyotype in AML and therefore a diagnosis of either AML-MRC or therapy-related AML, 13 such testing is not included in most guidelines since other features are present in these cases to determine the diagnosis. Immunohistochemical analysis of p53 shows an increase in some patients with AML-MRC, 27 but this is generally related to a complex karyotype and other features of AML-MRC and is not usually needed for diagnosis.
Clinical and Prognostic Features of AML-MRC
AML-MRC represents up to 48% of all adult AML cases. 3,4 It occurs primarily in elderly patients (median age, 68 years) and is uncommon in children. 3,4,14,15 AML-MRC occurs more often as de novo AML with MDS-related cytogenetic changes or multilineage dysplasia than as AML arising secondarily from prior documented MDS or MDS/MPN. 4,12 By definition, patients with AML-MRC have a high frequency of adverse cytogenetics, including complex karyotype, and they often present with severe pancytopenia. 3,4,7,14,15 AML-MRC is also characterized by a relatively high frequency of ASXL1 mutations (35% of patients) and low frequencies of FLT3 and DNMT3A mutations. 14 A recent study that evaluated mutation frequencies also found that patients with de novo AML-MRC tended to have a higher frequency of TP53 mutations, and those with antecedent MDS or MDS/MPN had a higher frequency of SETBP1, RUNX1, and SRSF2 mutations compared with the other evaluated groups; meanwhile, patients with AML-MRC tended to have a lower frequency of SF3B1 mutations than patients with MDS, and lower frequencies of NPM1, FLT3-ITD, and NRAS mutations than patients with AML other than AML-MRC. 28 Characteristics of 262 patients with AML-MRC underscored the challenging nature of this group: 57% were 75 years or older, 53% had poor-risk cytogenetics, and approximately one-third were reported to have had antecedent MDS. 29 In general, patients with AML-MRC have a worse prognosis, with lower remission rates and shorter OS, compared with patients who have most other AML subtypes. In a cohort of 100 patients with AML, those with AML-MRC had significantly shorter median OS and progression-free survival, as well as a lower complete remission (CR) rate, than those with AML-NOS. AML-MRC was identified as a predictor of poor OS independent of age or cytogenetic risk. 17 Similarly, in a cohort of 85 patients with AML, those with AML-MRC had a significantly lower CR rate (48% vs 78%) than those with other AML subtypes, although the 2-year OS rates were similar for the two groups. 14 Results from a larger-scale retrospective analysis of a cohort of 449 patients with AML indicated that those with AML-MRC had significantly shorter median OS (10 vs 16 months) and disease-free survival (5 vs 12 months), as well as a lower CR rate (61% vs 78%), compared with patients with AML-NOS. 7 Different multivariate analyses have reported conflicting results regarding factors associated with a poorer prognosis among patients with AML-MRC. Results from one study identified older age (≥60 years), adverse cytogenetics, and antecedent MDS or MDS/MPN as independent factors associated with shorter OS and disease-free survival in patients with AML-MRC. 7 A separate study also identified antecedent MDS or MDS/MPN and de novo AML with MDS-related cytogenetics as conferring a worse prognosis compared with patients with AML-MRC who had a diagnosis based on multilineage dysplasia (median OS of 5.3 and 6.3 vs 20.4 months). 8 In contrast, results from a second analysis indicated that MDS-related cytogenetics, antecedent MDS, and multilineage dysplasia did not influence OS in patients with AML-MRC. Of note, this analysis did identify ASXL1 and TP53 mutations as independent factors associated with shorter OS. 15 Another study found that patients with AML-MRC who had RUNX1 mutations had shorter OS compared with those who had any AML with RUNX1 mutations or AML-NOS with wild-type RUNX1 (11 vs 19 months and not reached, respectively), suggesting that AML-MRC with a RUNX1 mutation represents a poor-prognosis group. 30
Current Treatment Options for Patients With AML-MRC
The selection of treatment for patients with AML can be influenced by multiple factors, including age, cytogenetic risk, performance status, and others, 29 but there has been very limited evaluation of variables that might influence selection of therapy for patients with AML-MRC. Physicians must carefully consider the goals of therapy (curative vs palliative) prior to finalizing a treatment plan. Traditional combination chemotherapy and CPX-351 (Vyxeos; Jazz Pharmaceuticals; daunorubicin and cytarabine liposome for injection) are the most common intensive induction therapies for patients with AML-MRC, although patients who also have certain mutations and clinical features may benefit from targeted therapy, and patients who are frail may be appropriate for less-intensive treatment approaches.
Conventional Chemotherapy
Combination chemotherapy regimens are the historical standard of care for intensive induction in AML, including AML-MRC, and commonly consist of continuous cytarabine infusion for 7 days plus 3 days of an anthracycline ("7 + 3" regimen). 25 However, this approach has not provided satisfactory OS in patients with AML-MRC. 2,5-7 A retrospective analysis of results for 449 adults with AML-MRC (n = 115) or AML-NOS who were treated with conventional chemotherapy indicated a lower median OS of 10 months and CR rate of 52% among patients with AML-MRC, compared with 16 months and 77% for those with AML-NOS. 7
CPX-351
CPX-351 is a dual-drug liposomal encapsulation of daunorubicin and cytarabine at a synergistic 1:5 molar drug ratio. The liposomal-based carrier system maintains the synergistic drug ratio for over 24 hours after administration, resulting in longer drug exposure; in vitro studies also demonstrated preferential uptake of CPX-351 by leukemic blasts compared with normal cells in the bone marrow. Together, these properties contribute to increased antileukemic activity. 31,32 CPX-351 is approved by the US Food and Drug Administration (FDA) and the European Medicines Agency for the treatment of adults with newly diagnosed AML-MRC and therapy-related AML. 33,34 In addition, the National Comprehensive Cancer Network guidelines for AML recommend CPX-351 for patients who have antecedent MDS/CMML, cytogenetic changes consistent with MDS, or therapy-related AML (other than core binding factor/acute promyelocytic leukemia) and who are candidates for intensive therapy, as a category 2B recommendation for adults younger than 60 years and a category 1 recommendation for adults 60 years or older. 25 The approval of CPX-351 for patients with AML-MRC was based on results from a multicenter, randomized, open-label, phase 3 clinical study of CPX-351 vs conventional 7 + 3 chemotherapy in 309 patients aged 60 to 75 years with newly diagnosed high-risk or secondary AML. 35 Hematopoietic cell transplantation (HCT) could be received at the discretion of the treating physician and was reported for 33.3% of patients in the CPX-351 arm vs 24.4% of patients in the 7 + 3 arm (OR, 1.53; 95% CI, 0.86-2.74). Median OS landmarked from the date of HCT was not reached for CPX-351 vs 10.68 months for 7 + 3 (HR, 0.48; 95% CI, 0.24-0.96). The safety profile of CPX-351 in patients with AML-MRC was generally consistent with that for conventional 7 + 3, except that the time to neutrophil and platelet count recovery was longer for patients achieving CR + CRi following CPX-351 compared with 7 + 3. However, the early mortality rates for CPX-351 and 7 + 3, respectively, were 5% and 9% within 30 days and 14% and 20% within 60 days. Grade 5 treatment-emergent adverse events occurring in more than one patient in a treatment arm included sepsis (2.4% and 0.8%), disease progression (1.6% and 3.4%), multiorgan failure (0.8% and 1.7%), and respiratory failure (0.8% and 1.7%). 36
Targeted Agents
No targeted therapies are specifically approved or recommended for patients with AML-MRC, but some patients with AML-MRC may also have mutations or clinical features that make them candidates for treatment with these agents.
Gemtuzumab Ozogamicin
Studies showing a higher expression of CD33 on granulocytic cells from individuals with AML-MRC vs AML-NOS 37 and a high proportion (69%) of CD33-positive AML-MRC cases 38 suggest CD33 might be a therapeutic target for some patients with AML-MRC. Gemtuzumab ozogamicin (Mylotarg; Pfizer) is approved by the FDA for the treatment of adults with newly diagnosed CD33-positive AML and adults or pediatric patients 2 years or older with relapsed/refractory CD33-positive AML. Approval of gemtuzumab ozogamicin was based on the ALFA-0701 study, which was a multicenter, open-label, phase 3 study of 280 patients aged 50 to 70 years with newly diagnosed, de novo AML who were randomized to receive 7 + 3 with or without the addition of gemtuzumab ozogamicin. 39,40 However, at present there is no information regarding the efficacy or safety of gemtuzumab ozogamicin in patients with AML-MRC. The ALFA-0701 study excluded patients with prior MDS or MDS/MPN, and there has not been an analysis of the effect of multilineage dysplasia on outcomes in the ALFA-0701 study. However, patients with poor-risk karyotype (such as those seen in de novo AML-MRC) did not benefit from the addition of gemtuzumab ozogamicin. 40
Midostaurin and Gilteritinib
FLT3-internal tandem duplication (ITD) mutations are relatively common in AML, reported in approximately 25% to 30% of AML cases overall 41,42 and in 13.5% of patients with AML-MRC. 7 Midostaurin (Rydapt; Novartis Pharmaceuticals) and gilteritinib (Xospata; Astellas Pharma) are small-molecule FLT3 inhibitors that are approved for the treatment of adults with newly diagnosed FLT3-mutated AML in combination with conventional cytarabine/daunorubicin or adults with relapsed/refractory FLT3-mutated AML, respectively. Although these agents are not approved specifically in patients with AML-MRC, they could be appropriate for some patients who have an FLT3 mutation. The approval of midostaurin was based on the results from the phase 3 RATIFY trial (CALGB 10603; n = 717), a randomized, placebo-controlled study carried out to determine whether the addition of midostaurin to standard 7 + 3 chemotherapy would improve the OS of patients (aged 18-59 years) with FLT3-mutated AML. 43 The approval of gilteritinib was based on an interim analysis of results from the ADMIRAL trial, which included 138 adults with relapsed/refractory AML who had an FLT3-ITD, FLT3-D835, or FLT3-I836 mutation. 44 Studies of both agents showed promising outcomes in their overall study populations 43,45 ; however, no subanalyses of patients with AML-MRC have been reported.
Ivosidenib and Enasidenib
IDH1 and IDH2 mutations have been reported at frequencies of approximately 4% and 21%, respectively, among patients with AML-MRC. 14 Ivosidenib (Tibsovo; Agios Pharmaceuticals) and enasidenib (Idhifa; Celgene) are approved for the treatment of adults with relapsed/ refractory AML with susceptible IDH1 and IDH2 mutations, respectively. Ivosidenib monotherapy (500 mg/d for ≥6 months) was assessed in a phase 1 study of patients with relapsed/refractory, IDH1-mutated AML (n = 125 evaluable). 46 The efficacy of enasidenib was assessed in an open-label, single-arm study that included 199 adult patients with relapsed/refractory IDH2-mutated AML. 47 Although promising efficacy was observed with both agents, neither of the studies evaluated outcomes in the subgroup of patients with AML-MRC.
Lower-Intensity Therapies
AML-MRC is primarily diagnosed in older adults, 4 and some may not be considered healthy enough to receive intensive induction chemotherapy because of the presence of significant comorbidities. These patients may be appropriate candidates for lower-intensity therapies, including hypomethylating agents (HMAs; azacitidine or decitabine) with or without venetoclax, or low-dose cytarabine (LDAC) with or without either venetoclax or glasdegib.
A subanalysis of results from a phase 3 study that compared clinical outcomes for 262 patients with AML-MRC who were treated with azacitidine or conventional regimens (primarily LDAC) indicated that the median OS was significantly prolonged with azacitidine vs conventional care (8.9 vs 4.9 months; HR, 0.74; 95% CI, 0.57-0.97). 29 Venetoclax has been approved in combination with either HMAs or LDAC for the treatment of newly diagnosed AML in patients who are 75 years or older or have comorbidities that preclude the use of intensive induction chemotherapy. A phase 1b study evaluated outcomes for venetoclax in combination with HMAs in 145 patients 65 years or older with untreated AML who were considered ineligible for intensive chemotherapy; the study included 36 (25%) patients with secondary AML (none with prior HMA therapy). The CR + CRi rate was 67% for patients with either de novo or secondary AML. Median OS was 12.5 months (95% CI, 10.3-24.4) for patients with de novo AML and was not reached (95% CI, 14.6 to not reached) for those with secondary AML. 48 A phase 1/2 study of venetoclax plus LDAC in 82 adults aged 60 years or older with untreated AML who were ineligible for intensive chemotherapy included 40 (49%) patients with secondary AML (24 with prior HMA exposure). CR and CR + CRi rates were lower for patients with secondary AML (CR, 5%; CR + CRi, 35%) vs de novo AML (CR, 45%; CR + CRi, 71%). Median OS was also shorter for patients with secondary AML (4.1 months; 95% CI, 2.9-10.1) vs de novo AML (13.5 months; 95% CI, 7.0-18.4). Outcomes for the 24 patients with prior HMA exposure were similar to those for the overall secondary AML subgroup, with a CR + CRi rate of 33%, including 4% who achieved CR, and a median OS of 4.1 months (95% CI, 2.9-10.1). 49 A subsequent randomized, phase 3 study in a similar population failed to meet its primary end point of improved median OS for venetoclax plus LDAC vs placebo plus LDAC (7.2 vs 4.1 months; HR, 0.75; 95% CI, 0.52-1.07) in the overall study population, which included 38% with secondary AML (primarily prior hematologic disorder) and 20% with prior HMA exposure. 50 Higher remission rates and longer median OS with the addition of venetoclax to LDAC were noted across patient subgroups, but specific data were not included with the online manuscript publication. However, a multivariable Cox regression analysis identified de novo vs secondary AML as significantly correlated with OS (HR, 0.59; 95% CI, 0.41-0.85). 50 Glasdegib is also approved in combination with LDAC for the treatment of patients with newly diagnosed AML who are 75 years or older or have comorbidities that precluded the use of intensive induction chemotherapy. This approval was based on results from the BRIGHT AML 1003, a randomized trial of LDAC with or without glasdegib in 115 patients. The addition of glasdegib improved median OS in the overall study population, 51 but no analysis of patients with AML-MRC has been performed.
Discussion
AML-MRC accounts for a substantial proportion of AML cases and includes patients with antecedent hematologic malignancies (eg, MDS) as well as those with de novo AML who have multilineage dysplasia and/or MDS-related cytogenetic abnormalities. AML-MRC primarily occurs in elderly patients and is associated with an increased probability of adverse cytogenetics and worse clinical outcomes. Therefore, there is a need for rapid karyotype analysis or, if not possible, the use of FISH panels to quickly identify the majority of patients with AML-MRC who do not have a history of MDS or multilineage dysplasia. The extra expense of such testing should be balanced with the treatment benefit afforded to the patient by making a diagnosis quickly. NPM1 and CEBPA, along with FLT3, should be evaluated by RT-PCR, as these results typically are obtained more quickly than next-generation sequencing panels, and the results are important for the diagnosis of AML-MRC and for informing treatment decisions.
If therapy is pursued with curative intent in patients with AML-MRC, then allogeneic HCT should be considered in the first-line treatment plan. The use of regimens capable of inducing remission in this population is critical, since patients are typically not considered eligible for allogeneic HCT unless in complete remission. Furthermore, these remissions ideally should be durable and deep, as well as not lead to excessive toxicity so as not to preclude subsequent allogeneic HCT. Recently approved chemotherapeutic regimens have been shown to induce such remissions in this patient population. Therefore, the identification of patients with AML-MRC at the time of initial diagnosis is critical to the optimal treatment of these patients.
CPX-351 was the first agent to demonstrate improved outcomes, including a higher HCT rate compared with conventional chemotherapy (7 + 3 regimen) in patients with AML-MRC, and it is currently the only agent specifically approved for this high-risk population. It is not yet understood how this regimen may compare with less-intensive strategies, such as venetoclax in combination with HMAs, or more intensive regimens such as FLAG-IDA (fludarabine, cytarabine, granulocyte colony-stimulating factor, and idarubicin). The combination of CPX-351 chemotherapy with targeted agents has also not yet been evaluated. Given the suboptimal outcomes for patients with AML-MRC, clinicians may choose to combine targeted agents with active chemotherapy regimens, such as CPX-351, instead of waiting for safety and efficacy data from combination studies. Therefore, clinical studies of these combinations are needed urgently; until such data are available, clinicians should exercise great caution when combining agents, as they may have overlapping toxicities, particularly myelosuppression. The goal of therapy (ie, curative approach vs palliative treatment) may help to inform regimen selection until randomized trial data are available in specific subpopulations of patients, including AML-MRC.
Genomics, epigenetics, and proteomics may ultimately identify more precise definitional markers in AML and eliminate the need for the current category of AML-MRC. As understanding of the biologic drivers of AML improves, it is expected that therapy selection may transition to targeting relevant biologic drivers of different AML subgroups.
Constraints Faced by Panchayat Members in Relation to Agricultural Development Programs in Ujjain District (M.P.)
In India, Panchayati Raj now functions as a system of governance in which Gram Panchayats are the basic units of local administration. The system has three levels: Gram Panchayat (village level), Mandal Parishad or Block Samiti or Panchayat Samiti (block level), and Zila Parishad (district level). Panchayat Raj has come to be associated with two broad images: first, as a government in itself, and second, as an agency of the state government. In the integrated exercise of planning for social and economic development, the present set-up is a three-tier representative structure of government in which administrators, elected leaders, and the local population play coordinated roles in the developmental effort. Panchayat members face many problems in this regard; this study examines the constraints faced by members and their suggestions to overcome them, based on a sample of 120 respondents from Ujjain district. The major finding of the study was that the most serious constraint perceived by Gram Panchayat Members during agricultural development programmes was insufficient use of communication media (85.83%), with their major suggestion being the proper and timely use of communication media.
Introduction
Information has an important role to play in all sectors of development, and in a fast-changing society that role has become even more important. Today, Gram Panchayat Members (GPMs) can access information for rural development from many sources, including print and electronic media, libraries, friends and colleagues, subscribed newspapers, TV channels, and the Internet through computers or mobile phones. The importance of information at the village level therefore cannot be denied.
The role of Panchayat Raj institutions as instruments of rural reconstruction and development needs no emphasis. They have been reorganized with wider powers and financial resources, not merely as institutions of political participation but as institutions of social and economic development. Panchayat Raj has come to be associated with two broad images: first, as a government in itself, and second, as an agency of the state government. In the integrated exercise of planning for social and economic development, the present set-up is a three-tier representative structure of government in which administrators, elected leaders, and the local population play coordinated roles in the developmental effort.
The Panchayat Raj system plays an essential role in rural development, and Gram Panchayats have taken on the responsibility of implementing various rural development programmes of both the state and central governments. A study is therefore needed to evaluate the role of the Panchayat and its impact on development.
Hence, "The study on role performance of elected Panchayat Members in Ujjain district of Madhya Pradesh" was undertaken with the specific objectives.
Specific Objectives:
To identify the constraints encountered by Gram Panchayat Members in performing their roles in relation to agricultural development programmes, and the suggestions given to overcome them.
Selection of the respondents:
Each Gram Panchayat consists of 15-20 members. Of these, 12 members were selected randomly from each Panchayat. In this manner, 120 members were considered as respondents for data collection in this study.
Constraints faced by elected Panchayat Members in agricultural development
The statements related to the role performance of Gram Panchayat Members were prepared with the help of experts and a review of the literature. The respondents were asked to rate their role performance in agricultural development on a three-point continuum, viz. completely agree, agree, and slightly agree, and the responses were recorded with scores of 3, 2, and 1 assigned to completely agree, agree, and slightly agree, respectively. On this basis, an attempt was made to categorise the role performance of the respondents; a brief sketch of how such scores can be aggregated is given below.
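As a purely illustrative sketch of how the 3/2/1 scoring above can be aggregated into the kind of percentage index reported later (e.g., 85.83% for insufficient use of communication media), the following Python snippet may help; the function name, variable names, and sample data are hypothetical and are not taken from the study.

```python
# Hypothetical illustration of aggregating the 3/2/1 scoring described above.
# Each respondent rates a constraint: 3 = completely agree, 2 = agree, 1 = slightly agree.

SCORES = {"completely agree": 3, "agree": 2, "slightly agree": 1}

def constraint_index(responses):
    """Return the constraint score as a percentage of the maximum possible score."""
    total = sum(SCORES[r] for r in responses)
    max_total = 3 * len(responses)  # if every respondent had answered "completely agree"
    return 100.0 * total / max_total

# Made-up responses for 120 respondents (not the study's actual data)
responses = ["completely agree"] * 79 + ["agree"] * 31 + ["slightly agree"] * 10
print(round(constraint_index(responses), 2))  # 85.83 for this hypothetical distribution
```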
Results and Discussion
Constraints encountered by Gram Panchayat Members in performing their roles in relation to agricultural development programmes, and suggestions given to overcome them
Conclusion
The most serious constraint perceived by Gram Panchayat Members during agricultural development programmes was the insufficient use of communication media (85.83%), which was perceived to the highest extent among the listed constraints.
de Sitter extremal surfaces
We study extremal surfaces in de Sitter space in the Poincare slicing in the upper patch, anchored on spatial subregions at the future boundary ${\cal I}^+$, restricted to constant boundary Euclidean time slices (focussing on strip subregions). We find real extremal surfaces of minimal area as the boundaries of past lightcone wedges of the subregions in question: these are null surfaces with vanishing area. We also find complex extremal surfaces as complex extrema of the area functional, and the area is not always real-valued. In $dS_4$ the area is real. The area has structural resemblance with entanglement entropy in a dual $CFT$. There are parallels with analytic continuation from the Ryu-Takayanagi expressions for holographic entanglement entropy in $AdS$. We also discuss extremal surfaces in the $dS$ black brane and the de Sitter"bluewall"studied previously. The $dS_4$ black brane complex surfaces exhibit a real finite cutoff-independent extensive piece. In the bluewall geometry, there are real surfaces that go from one asymptotic universe to the other through the Cauchy horizons.
1 Introduction
de Sitter space is fascinating for many reasons, in particular for holographic explorations towards addressing questions of cosmology and time. In this regard, some versions of dS/CF T duality [1,2,3] associate to de Sitter space a dual Euclidean CFT on the future timelike infinity I + boundary (in the Poincare slicing). A concrete realization in the context of higher spin theories appears in [4]. Further work on dS/CF T appears in e.g. [5,6,7,8,9,10,11,12,13].
In AdS/CF T , there has been considerable interest in understanding information theoretic notions in terms of geometric quantities via holography, in particular stemming from the Ryu-Takayanagi prescription [14,15] (see [16,17] for reviews) for calculating holographic entanglement entropy of a subsystem in the strongly coupled boundary field theory. This is the area of a bulk minimal surface (in Planck units) anchored at the subsystem interface and dipping inwards upto a certain maximal depth typically called the turning point. A different way to think about this appears in [18]. More generally, these are extremal surfaces [19]. In this light, one might speculate that the bulk subregion enclosed by the entangling surface and the boundary subsystem in some sense encodes bulk physics corresponding to that part of the boundary theory contained in the subsystem, although a detailed understanding of the hologram (and bulk locality) would seem more intricate.
It is interesting to consider these questions in the context of de Sitter space and dS/CF T . Assuming there is translation invariance with respect to a boundary Euclidean time direction, imagine constructing a subregion on a Euclidean time slice of the future boundary I + . Tracing out the complement of this subregion would lead to some loss of information and thereby give some associated entropy, which one might attribute to the subregion being entangled with the complement. In the bulk, intuition from the Ryu-Takayanagi prescription in AdS/CF T suggests that we study extremal surfaces in de Sitter space (in the Poincare slicing) on a constant boundary Euclidean time slice, defined as anchored on the subregion on the future (spacelike) boundary and dipping inwards (i.e. in the bulk time direction, towards the past). We find (sec. 2) that the bulk extremization problem exhibits some crucial sign differences from the AdS case. Focussing first on real surfaces, there are correspondingly some technical differences such as the absence of a natural turning point (where the surface stops dipping inward). For sufficiently symmetric subregions such as strips (with an axis of symmetry), extremal surfaces can be defined as the union of two half-extremal-surfaces joined continuously but with a sharp cusp. Upon requiring that we choose minimal area, the extremal surfaces become null surfaces with zero area. In fact these are simply the boundaries of the past lightcone wedges of the subregion in question, and are thus the analogs of causal wedges associated with causal holographic information [20] (note that these bulk causal wedges and the corresponding causal holographic information in general do not coincide with the bulk entangling subregion, and entanglement entropy). This answer -past lightcone wedges -is well-defined for arbitrary boundary subregions, even without sufficiently high symmetry, and gives vanishing area.
It is interesting to look for complex saddle points of the extremization problem, motivated by considerations in dS/CF T . Focussing again on strip subregions, we indeed find these complex extremal surfaces: they exhibit "turning points" in the interior. They should be thought of as living in some auxiliary space, and are distinct from the bulk past lightcone wedges (which define real subregions in bulk dS 4 ). The area of these surfaces is in general not real-valued. In dS 4 , we find that x(τ ) parametrizing the strip width being real-valued suggests that the bulk time τ parametrizes a complex path τ = iT . The area (in Planck units) of these surfaces is real-valued and has structural resemblance with entanglement entropy in a dual (non-unitary) 3-dim CFT, with central charge C ∼ − R 2 dS G 4 , with a leading area law divergence, which is negative, and subleading terms. There are parallels with analytic continuation from the Ryu-Takayanagi holographic entanglement expressions from AdS. In dS d+1 with d even, the nature of these complex surfaces is different and the area is pure imaginary. From the point of view of the dual Euclidean CFT living on the future boundary I + , one might formally associate a density matrix w.r.t. boundary Euclidean time evolution and a reduced density matrix to the subregion obtained by tracing out the complement. It would be interesting to explore this further, perhaps in dS/CF T as entanglement entropy in the dual Euclidean CFT.
We then discuss (sec. 3) an asymptotically de Sitter space [13] -the dS black branewhere subleading normalizable metric components are turned on: in dS 4 /CF T 3 , they have the interpretation of saddle points representing the Euclidean CFT with uniform energy-momentum density expectation value. The corresponding extremal surfaces in the dS 4 black brane exhibit a finite cutoff-independent real-valued extensive piece (again negative) with some resemblance to a thermal entropy. Finally we discuss (real) extremal surfaces in the closely related dS "bluewall" geometry, which are not obtained by analytic continuation: here there are real extremal surfaces which cross from one asymptotic universe to the other through the Cauchy horizons.
2 Extremal surfaces in de Sitter space

de Sitter space $dS_{d+1}$ in the Poincare slicing or planar coordinate foliation is given by the metric below, where half of the spacetime, e.g. the upper patch, has ${\cal I}^+$ at $\tau = 0$ and a coordinate horizon at $\tau = -\infty$. This may be obtained by analytic continuation of a Poincare slicing of AdS, where $w$ is akin to boundary Euclidean time, continued from time in AdS.
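The displayed line element and the continuation map are not reproduced in the text above; as a hedged reconstruction, assuming the standard planar (upper-patch) slicing and the continuation quoted verbatim in sec. 2.2 below, they read

\[ ds^2 \;=\; \frac{R_{dS}^2}{\tau^2}\Big( -d\tau^2 + dw^2 + \sum_i dx_i^2 \Big), \qquad -\infty < \tau < 0\,, \]
\[ r \;\to\; -i\tau\,, \qquad t \;\to\; -i w\,, \qquad R_{AdS} \;\to\; -i R_{dS}\,. \]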
The dual Euclidean CFT is taken as living on the future $\tau = 0$ boundary ${\cal I}^+$. We assume translation invariance with respect to a boundary Euclidean time direction, say $w$, and consider a subregion on a $w = const$ slice of ${\cal I}^+$. One might imagine that tracing out the complement of this subregion then gives entropy in some sense stemming from the information lost. In the bulk, we study de Sitter extremal surfaces on the $w = const$ slice, analogous to the Ryu-Takayanagi prescription in AdS/CFT. Operationally these extremal surfaces begin at the interface of the subsystem (or subregion) and dip inwards (towards the past, in the bulk time direction). For simplicity, consider a strip on the $w = const$ surface (i.e. a constant boundary Euclidean time surface): this bulk $d$-dim subspace carries the metric obtained by setting $w = const$ in the planar metric above. This is not a spacelike subspace in the bulk and it might seem that the extremal surfaces are timelike in general: however we will find that this is not the case.
Real extremal surfaces
Let us consider a strip subregion with width direction say x, the remaining x i being labelled y i . A bulk surface on the w = const slice bounding this subregion and dipping inward (towards the past) is bulk codim-2: its area functional in Planck units is We consider extremizing the action to find extremal surfaces with minimal area, along the lines of the Ryu-Takayanagi prescription for entanglement entropy in AdS. The S dS extremization gives a conserved quantity (ẋ ≡ dx dτ ) We see thatẋ 2 → 0 near the boundary τ → 0. Assuming the conserved constant satisfies B 2 > 0 makes all the expressions real-valued and meansẋ 2 > 0, withẋ 2 → 1 in the deep interior for large |τ |. For B 2 > 0, these are timelike surfaces 1 . This gives the solution (upto boundary conditions) and corresponding area integral The main difference between this case and the minimal surface in AdS stems from B 2 > 0 implying that there is no smooth "turning point" whereẋ 2 = B 2 τ 2d−2 1+B 2 τ 2d−2 → ∞. In fact B 2 > 0 meansẋ 2 is bounded, with 0 ≤ẋ 2 ≤ 1. For any finite B 2 > 0, the extremal surface in this case begins to dip inwards from one boundary of the strip subregion and (rather than turning around as in AdS) continues indefinitely, eventually approachingẋ → ±1. With a view to associating a bulk subregion with the boundary subregion in question, let us artifically cut off the inward dipping surface at some interior location τ = τ 0 , the bulk subregion then defined by the interior of the boundary strip subregion and the joined surface. So consider the half-extremal-surfaces, This gives an extremal surface made of two half-extremal-surfaces joined continuously but with a sharp cusp at τ 0 (see Figure 1). This defines the corresponding wedge-like bulk subregion, enclosed by this extremal surface and the boundary subregion. These conditions do not determine the parameters B, τ 0 uniquely, given the subregion width l. Varying B gives different extremal surfaces. By comparison, in the AdS case, the turning point τ * = 1 B is fixed by the global nature of the entangling surface as the location whereẋ 2 → ∞, the surface turning around.
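For reference, the area functional and conserved quantity used in this subsection (whose displayed forms are not shown above) can be assembled from the quoted fragments; as a hedged sketch, for a strip profile $x(\tau)$ on the $w = const$ slice,

\[ S_{dS} \;=\; \frac{R_{dS}^{\,d-1}}{4 G_{d+1}} \int \frac{d\tau\, d^{\,d-2}y}{\tau^{\,d-1}}\, \sqrt{1-\dot x^2}\,, \qquad \dot x \equiv \frac{dx}{d\tau}\,, \]
whose extremization gives (up to a sign convention for $B$)
\[ \frac{\dot x}{\tau^{\,d-1}\sqrt{1-\dot x^2}} \;=\; B \qquad\Longrightarrow\qquad \dot x^2 \;=\; \frac{B^2\,\tau^{2d-2}}{1 + B^2\,\tau^{2d-2}}\,, \]
consistent with $\dot x^2 \to 0$ near the boundary $\tau \to 0$ and $\dot x^2 \to 1$ in the deep interior for $B^2 > 0$, as stated above.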
To follow the Ryu-Takayanagi prescription, we would want to identify those extremal surfaces that have minimal area 2 . From (6), we see that as B increases, the area S dS decreases. Furthermore, (5) shows that as B increases,ẋ 2 increases and eventually approachesẋ 2 → 1 as B → ∞. In this limit, x(τ ) → ±τ and S dS appears to vanish. In fact this is a sensible result: in hindsight, it should have been obvious from (4) that minimal area arises when the extremal surface becomes null. This null extremal surface is in fact simply the boundary of the past bulk lightcone of the subregion. 1 One might instead want to consider spacelike surfaces withẋ 2 > 1 and therefore take, instead of (4), the area functional as S dS = We will discuss this in the next subsection. 2 Note that surfaces with maximum area correspond to minimizing B: this gives B = 0, which are disconnected An alternative argument corroborating the above conclusion is the following. Physically, the shortest length (or time) scale here is τ U V = ǫ so that in (5) when Bǫ d−1 ≫ 1 we can approximateẋ 2 ∼ 1 and so x(τ ) ∼ ±τ giving l 2 ∼ τ 0 . Thus one might estimate (6) as where the second integral can be seen to vanish as τ → 0. Now the first integral scales as V d−2 ǫ d−2 while the second integral can be expressed in terms of the hypergeometric function 2 F 1 as , using the integral representations of 2 F 1 ). As B 2 increases, this second integral is seen to scale as − . Thus when B ∼ 1 ǫ d−1 this cancels the earlier contribution and we again see the leading S dS scaling to be vanishing. In 1 + B 2 τ 2 and the boundary Analysing these vindicates the conclusions above.
It is worth noting that our construction of joining two half-extremal-surfaces appears invalid unless the subsystem has sufficiently high symmetry (in particular an axis of symmetry). Relatedly, one might look askance at the entire extremization procedure here: if we allow nonsmooth surfaces (with cusps), one might wonder if more general surfaces need to be considered, e.g. a zigzag null surface formed by joining multiple partial surfaces with multiple cusps. This would be useful to systematise more rigorously. However the final answer, the past lightcone wedge, is well-defined for an arbitrary subregion, comprising two piecewise smooth extremal surfaces joined with just a single cusp (rather than multiple cusps). The past lightcone wedge boundary is however a complicated surface: it would be interesting to understand the shape dependence here. The resulting area is of course always zero for all these null surfaces, and does not reflect entanglement structure.
From the point of view of bulk de Sitter alone, one could consider volume subregions in the full d-dim boundary I + (at τ = 0), i.e. not on the constant boundary Euclidean time slice. These would give codim-1 surfaces. For a strip subregion with width direction say x, the remaining x i being labelled y i , analysing the area integral of the bulk surface for extremization gives This has volume scaling. Again S dS decreases with increasing B, withẋ 2 → 1: the resulting extremal surfaces are null surfaces defining the past lightcone wedges of the volume subregion, with vanishing area.
Complex extremal surfaces
For what follows, it is useful to recall the dS/CF T correspondence, for de Sitter space in the Poincare slicing (1), obtained by analytic continuation (2) of Poincare AdS. A version of dS/CF T [1, 2, 3] states that quantum gravity in de Sitter space is dual to a Euclidean CFT living on the boundary I + or I − . More specifically, the CFT partition function with specified sources φ i0 ( x) coupled to operators O i is identified with the wavefunctional of the bulk theory as a functional of the boundary values of the fields dual to O i given by φ i0 ( x). In the semiclassical regime this becomes where we need to impose regularity conditions on the past cosmological horizon τ → −∞: e.g. scalar modes satisfy φ k (τ ) ∼ e ikτ , which are Hartle-Hawking (or Bunch-Davies) initial conditions. Operationally, certain dS/CF T observables can be obtained by analytic continuation (2) from AdS (see e.g. [3], as well as [5]). The Bunch-Davies initial condition itself can be thought of as analytic continuation of regularity in the AdS interior. Complex saddle points thus appear in dS/CF T . From this point of view, it is natural to ask if there are additional (perhaps complex) extrema of the area functional that could be considered in de Sitter space, with possible dS/CF T interpretations 3 . With a view to considering spacelike surfaces withẋ 2 > 1, let us take, instead of (4), the dS d+1 area functional on a w = const slice as where we are considering strip subsystems with width along x. The second expression above is the conserved quantity obtained in the extremization. This is essentially the same as (5), but with B 2 = −A 2 < 0, and A being real. One might ask if this can be interpreted as a real surfaceẋ 2 = A 2 τ 2d−2 A 2 τ 2d−2 −1 . However we need to require that the surface reaches the boundary τ → 0 from where it drops down (inward): near τ → 0, we findẋ 2 ∼ −A 2 τ 2d−2 , so thaṫ this being a complex surface in some sense. Let us focus now on dS 4 /CF T 3 for concreteness, to understand this better. The extremal surface near τ → 0 in this case is x(τ ) ∼ ±iAτ 3 + x(0). We want x(τ ) to be real-valued since it parametrizes a space direction in the dual CF T 3 : this requires that τ takes imaginary values. In more detail, near τ → 0, we have x → ± l 2 and the two ends of the surface are parametrized as The global structure of the surface shows a "turning point" at τ 4 * A 2 = 1, whereẋ 2 → ∞, very similar to the situation in AdS. From the point of view of the discussion in the previous subsection, the two half-extremal-surfaces x L , x R in this case join smoothly at the turning point τ * as in AdS, with x L (τ * ) = 0 = x R (τ * ) anḋ x L ,ẋ R matching. This gives the width The reality of ∆x = l with A real again suggests that we parametrize the τ -integral over the path τ = iT in a complex τ -plane 4 : we have then rescaled T using A to make the integration variable dimensionless (and # = 1 0 The integral can be parametrized in terms of hypergeometric functions 2 F 1 . The extremal surface x(τ ) with τ imaginary does not correspond to any real bulk subregion in dS 4 enclosed by the surface, but really lives in some auxiliary space. In a sense, the structure here is very much like analytic continuation of the AdS 4 expressions a la Ryu-Takayanagi: we will discuss this more below. From that point of view, since the analytic continuation (2) faithfully maps AdS 4 ↔ dS 4 , this is a faithful map from the subsystem to the auxiliary bulk subregion. 
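Similarly, the complex-branch relations described in this paragraph can be summarized in the following hedged form (normalization as in the sketch above): with the spacelike-form functional $\sqrt{\dot x^2 - 1}$ in place of $\sqrt{1 - \dot x^2}$ and conserved quantity $A$,

\[ \dot x^2 \;=\; \frac{A^2\,\tau^{2d-2}}{A^2\,\tau^{2d-2} - 1}\,, \qquad \dot x^2 \;\sim\; -A^2\tau^{2d-2} \ \ (\tau \to 0)\,, \]
with a turning point where the denominator vanishes, $A^2 \tau_*^{2(d-1)} = 1$; for $dS_4$ ($d=3$) this is $\tau_*^4 A^2 = 1$, and reality of the width $\Delta x = l$ is what suggests the complex parametrization $\tau = iT$ discussed above.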
The area now becomes In principle, we could assign ±i in the second expression, as a choice of the branch of the square root: the choice of the minus sign leads to an appropriate coefficient as we see below.
The integral itself is just as in AdS 4 , giving where c 3 = 2π 2 is a constant as in AdS, stemming the finite cutoff-independent part of the integral. Note that here we have used the relation τ U V = iǫ for the ultraviolet cutoff in the dual Euclidean field theory, suggested by previous investigations 5 in dS/CF T . Also we have rewritten the last expression in the last line in terms of the central charge C ∼ − (14) is real-valued and bears some structural resemblance to entanglement entropy in a dual CFT with central charge C ∼ − R 2 dS G 4 . The first term resembles an area law divergence [32,33], proportional to the area of the interface between the subregion and the environment, in units of the ultraviolet cutoff. It is also proportional to the central charge which represents the number of degrees of freedom in the dual CFT: in this case, C < 0 reflecting the fact that the CFT is non-unitary. The second term is a finite cutoff independent piece. Whether the expression (14) should really be thought of physically as holographic entanglement entropy in dS 4 /CF T 3 is less clear, the dual CFT being non-unitary with central charge In some sense, −S dS appears to resemble entanglement entropy in AdS/CF T , sharing various features including subadditivity. For instance, the quantity for two disjoint subsystems A, B, exhibits various properties of holographic mutual information in AdS including an analog of the disentangling transition in the classical gravity approximation [34], but with some crucial differences. For strip subregions that are sufficiently nearby but disjoint, I dS [A, B] is nonzero: e.g. using (14) for a single strip, we obtain for two parallel strips of equal width l and separation x, S dS [A ∪ B] arises from the area of the connected surface between A, B as S(2l + x) + S(x). This is similar to the structure of holographic mutual information for strips in AdS 4 , e.g. the UV divergent pieces cancel, with a cutoff-independent divergence C V 1 x as the subregions collide. The striking difference is that I dS [A, B] ≤ 0, rather than positive definite, following 5 See e.g. [3,5,4,11], which discuss this (in some cases implicitly). Heuristically, we expect that evolution in the bulk direction is encoded by renormalization group flow in the dual field theory: see e.g. [24,25,26,27,28,29] and more recently e.g. [30,31] for discussions on this in the AdS context. In the present dS case, the bulk description is time evolution ∼ 0.62. Beyond this critical value, I dS [A, B] vanishes identically and the two subregions are disconnected. This disentangling transition in the classical gravity approximation arises from the transition between the connected and disconnected surface for A ∪ B. What we are seeing is that i.e. −S dS satisfies strong subadditivity for disjoint parallel strips A, B. By comparison, using the real lightcone wedge surfaces in the previous subsection, we see that disjoint boundary subregions are always causally disconnected and thus uncorrelated for any nonzero separation. Correlation functions are nonzero: the disentangling transition above is in the classical gravity approximation, and we expect subleading terms in a large-N-like expansion of I dS [A, B] (see e.g. [35] in the AdS context).
We now discuss dS d+1 for even d (in particular dS 3 , dS 5 ) where the nature of these extremal surfaces is different. We would like to retain the relation τ U V = iǫ as following quite generally in dS/CF T from time evolution mapping to renormalization group flow. This suggests we parametrize the bulk time parameter τ along a complex path τ = iT as in dS 4 . However now with A 2 > 0 the surface (11) near τ → 0 givesẋ ∼ ±iAτ d−1 , i.e. x(τ ) ∼ ±iAτ d . Thus x cannot be made real-valued for any even d in this manner. A way out is to take the parameter A 2 → −A 2 : the surface equation now is the same as (5) but with the bulk time parametrized as τ = iT . The expressions (10), (11) then givė For even d, the (−1) d−1 gives rise to a "turning point" at T 2d−2 * A 2 = 1: the width now scales as l ∼ T * ∼ −iτ * . The integral is as in AdS, giving The leading divergence S div ǫ d−2 resembling an area law, appears independent of the shape of the subregion, expanding (10) and assuming thatẋ is small near the boundary τ U V . Unlike dS 4 , note that S dS in dS d+1 with even d is not real-valued, in particular for dS 3 , dS 5 . For instance, in dS 3 , we obtain from (16) Note that x(τ ) appears real, although the parametrization is τ = iT .
It is interesting to recall the Ryu-Takayanagi expression for entanglement entropy for an (infinitely long) strip-shaped subsystem with width along the x-direction, given as the area of the corresponding minimal surface in the bulk AdS d+1 geometry (with radius R), where the conserved quantity A in the extremization is related to the turning point as r d−1 * = 1 A . Noting that dS d+1 in Poincare slicing (3) is just the analytic continuation of the corresponding t = constant spatial slice in AdS d+1 , obtained by (2), i.e. r → −iτ, t → −iw, R → −iR dS , let us carry out this analytic continuation on the Ryu-Takayanagi expression. Indeed we see that S dS in (4) appears very much like the analytic continuation of S AdS [x(r), r], with the various factors of i conspiring to leave a single i behind, i.e.
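The relation indicated by the trailing "i.e." is not displayed above; a hedged reconstruction for $dS_4$, using the quoted continuation $r \to -i\tau$, $R \to -i R_{dS}$ on the standard strip functional $S_{AdS} = \frac{R^{2}}{4G_4}\int \frac{dr\, dy}{r^{2}}\sqrt{1+\dot x^2}$, is

\[ S_{AdS}\big[x(r),r\big] \;\longrightarrow\; -\,i\,\frac{R_{dS}^{2}}{4G_4}\int \frac{d\tau\, dy}{\tau^{2}}\,\sqrt{1-\dot x^2} \;=\; -\,i\, S_{dS}\big[x(\tau),\tau\big]\,, \]
i.e. the various factors of $i$ conspire to leave a single overall factor of $i$ (its sign depends on the branch choices made in the continuation).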
On the analytic continuation of the extremization itself, we obtain This expression corroborates the minus sign in (16) and (13), (14). The analytic continuation essentially recovers our earlier calculations in dS 4 and dS d+1 for even d. For instance, in dS 5 (i.e. d = 4), we obtaiṅ With real A, this is as such a real extremal surface as in the previous subsection: taking A large minimises the area and we obtain the null surfaces earlier with vanishing area representing the past lightcone wedge of the subregion. However parametrizing as τ = iT , there is a turning point at τ * = i A 1/3 , and a corresponding complex surface and corresponding area given by (16). The area in (22) . The extra i can be thought of as arising from the odd powers of R dS under the analytic continuation from AdS 5 . It is interesting to note that for even d (in particular, dS 3 and dS 5 ), the expression S dS obtained by analytic continuation of the Ryu-Takayanagi entanglement prescription appears similar to (5), (6), parametrizing τ as real-valued, giving real null extremal surfaces with vanishing area. There are complex surfaces here also with τ = iT , and the corresponding area S dS is imaginary.
To summarize, we have studied bulk de Sitter codim-2 extremal surfaces. Real extremal surfaces are the boundaries of the past lightcone wedges of the boundary subregions, with vanishing area. Complex extremal surfaces have some structural resemblance with entanglement entropy in a dual CFT. In dS 4 /CF T 3 , the area is real-valued and negative: in this sense, these complex surfaces have lower area, suggesting that they are the preferred minimal surfaces. Our calculations here have been done for a strip subregion but it would appear that generalizations to other subregion shapes will exhibit similar features. For instance, the spherical subregion extremal surface presumably exhibits a logarithmic term with with interesting coefficient.
It is worth noting that this analysis of bulk extremal surfaces is different from studies of entanglement entropy of bulk fields in de Sitter space e.g. [36,37,38,39].
Extremal surfaces in the dS black brane
We now study extremal surfaces in the asymptotically dS spacetime studied in [13], i.e.
with α a complex phase and τ 0 is some real parameter of dimension length inverse. An analog of regularity in the interior for an asymptotically AdS solution is obtained here by a Wick rotation τ = il and demanding that the resulting spacetime (thought of as a saddle point in a path integral) in the interior approaches flat Euclidean space in the (l, w)-plane with no conical singularity. This makes the w-coordinate angular with fixed periodicity (and l is a radial coordinate), giving α = −(−i) d , l ≥ τ 0 , w ≃ w + 4π (d−1)τ 0 . Thus the spacetime (23) is a complex metric which satisfies Einstein's equation with a positive cosmological constant . This resulting metric satisfying regularity is equivalent to one obtained by analytically continuing the Euclidean AdS black brane where θ ∼ θ + 4π (d−1)r 0 , to the asymptotically de Sitter spacetime (23) using (2) and we identify r 0 ≡ τ 0 , giving the phase −1 (−i) d . The regularity criterion is simply the analog of regularity of the EAdS black brane. The condition l ≥ τ 0 is equivalent to the radial coordinate having the range r ≥ r 0 . We see that "normalizable" metric pieces are turned on in (23). We then expect a nonzero expectation value for the energy-momentum tensor here, as in the AdS context [40,41,28,29]. In the present case [13], we have ij is the coefficient of the normalizable τ d−2 term in the Fefferman-Graham expansion of the metric (23). This definition of T ij is natural for a CFT with partition function Z CF T , equated with Ψ: thus, most notably, the i arising from Ψ, the wavefunction of the universe, implies that the energy-momentum tensor is real only if g (d) ij is pure imaginary. In effect, this dS/CF T energy-momentum tensor can be thought of as the analytic continuation of the EAdS one. The spacetime (23) for dS 4 /CF T 3 gives real T ij , with T ww = − R 2 dS G 4 τ 3 0 with T ww + (d − 1)T ii = 0. The w-coordinate is naturally interpreted as Euclidean time from the structure of the energymomentum tensor: so let us now consider a strip subregion on a w = const surface in (23). The area functional (in Planck units) of a bulk surface bounding this strip and dipping inwards is defined so that for τ 0 = 0, this reduces to our de Sitter discussion in sec. 2.2. For the dS 4 brane (i.e. d = 3), we obtain for the extremization, Now for small width l, this is essentially similar to the previous discussion on pure dS 4 and we have i∆x where A is real. In particular, the width ∆x being real-valued suggests that τ parametrizes a complex path τ = iT with T real. As l increases however, the other denominator approaches a zero also, with τ → i τ 0 . In this limit, we thus have τ * → i √ A ∼ i τ 0 and large l ∼ −iτ * , obtained from the double zero as Note that reality of the width ∆x implies now that the range of T is restricted as T ≤ 1 τ 0 i.e. asymptotically τ → i τ 0 . This is similar to the fact that in the AdS black brane, static minimal surfaces in the IR limit (large subsystem width) wrap the horizon but do not penetrate beyond. Now the area integral exhibits a cutoff-independent piece which can be estimated from the contribution in the deep interior where τ → τ * : the contribution to the integral near the double zero thus scales as i∆x giving which resembles an extensive thermal entropy in a 3-dim CFT with central charge C ∼ − We recall that the entanglement entropy area functional for the AdS d+1 black brane from the Ryu-Takayanagi prescription is S = .
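The Ryu-Takayanagi functional left dangling at "S = ." above is likewise not displayed; schematically, for a static strip $x(r)$ in the $AdS_{d+1}$ black brane written with the boundary at $r \to 0$ and blackening function $f(r)$, one expects the standard form (a hedged sketch, with conventions as in the rest of this section)

\[ S \;=\; \frac{R^{\,d-1}}{4 G_{d+1}} \int \frac{dr\, d^{\,d-2}y}{r^{\,d-1}}\, \sqrt{\dot x^{2} + \frac{1}{f(r)}}\;. \]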
Under the analytic continuation, we obtaiṅ For generic dimension d, we see that S is not real, as in the earlier discussion with τ 0 = 0.
The de Sitter bluewall
We now explore metrics of the form (23), but with the parameter α = −1 here 6 , i.e.
The $w$-coordinate here has the range $-\infty \le w \le \infty$. This spacetime [13] has a Penrose diagram shown in Figure 2 which resembles that of the AdS black brane rotated by $\pi/2$: there are two asymptotically dS universes (for $\tau < 1/\tau_0$), and timelike singularities cloaked by Cauchy horizons at $\tau = 1/\tau_0$, which "cross" at a bifurcation region. The Penrose diagram has many similarities with the interior of the Reissner-Nordstrom black hole (or wormhole). Late time infalling observers near the Cauchy horizon see incoming lightrays from early times as highly blueshifted, essentially stemming from lightrays "crowding" near the Cauchy horizon, suggesting an instability. It is unclear if this spacetime has any interpretation in dS/CFT: nevertheless, formally, one finds the energy-momentum $T_{ij}$ to be imaginary in $dS_4/CFT_3$, perhaps reflecting the blueshift instability here. Here we will simply look for bulk codim-2 extremal surfaces, lying either on a $w = const$ slice or an $x = const$ slice (from a bulk point of view alone, either could be taken as Euclidean time slices), restricting to real surfaces which may also be timelike.
The area functional for a surface in (31) bounding a subregion on a x = const slice of I + , and wrapping the other This does not correspond to any analytic continuation from the Ryu-Takayanagi formula for the AdS 4 black brane, so we analyse this directly focussing on the dS 4 bluewall. Along our earlier discussions in sec. 2, we find real extremal surfaces corresponding tȯ 6 The metric (23) with α = +(−i) d is similar to the dS black brane, except with T ij of the opposite sign, while α = +1 gives a real spacetime with a spacelike singularity at τ → ∞.
where the constant B arises from a conserved quantity in the extremization. The first equation describing the surface can be rewritten as where we are using τ * here for the "tortoise" coordinate in this bluewall geometry [13], analogous to the Schwarzschild tortoise coordinate r * . Parametrized thus, we see as in the dS 4 case that increasing B decreases the area, as long as we restrict the surface to lie within the future asymptotic universe I, i.e. f (τ ) > 0. As B 2 → ∞, these extremal surfaces become null with (w ′ * ) 2 = 1, corresponding to the past lightcone wedges of the boundary subregion, and have vanishing area. Thus extremal surfaces for a given subregion at I + can be constructed as in dS 4 ( Figure 1) by joining two half-extremal surfaces: this is the blue wedge in region I in Figure 2 (the half-surface when not cut off continues as a null surface through the Cauchy horizon into region III, represented by the dotted extension of the blue line). As the subregion grows in size, this blue wedge approaches and eventually wraps the future Cauchy horizons. One might imagine that there are timelike surfaces which are not restricted to just region I but instead start on I + in I and cross over to II ending on the past boundary I − . These can be found with the parameter B 2 > 0 being finite. In this case, we see that (w ′ * ) 2 → 0 as τ → 0 and (w ′ * ) 2 → 1 as τ → 1 τ 0 near the horizon in I. Now after the surface crosses the future Cauchy horizon, we have f (τ ) < 0 in IV . Requiring that (w ′ * ) 2 in (34) satisfies (w ′ * ) 2 ≥ 0 corresponding to real surfaces, it is possible to see (e.g. by plotting as a function of τ 0 τ ) that the parameter B 2 is bounded below by a critical value. There is a family of such surfaces: we will isolate one "critical" surface for a particular value of B, in what follows. Drawing analogies with the study of the phase transition found in [42] (although the physical context there is different), we note that (w ′ * ) 2 → ∞ when the denominator in (34) approaches a double zero (with y ≡ τ 0 τ ), i.e.
This corresponds to τ c = 4 1/3 τ 0 ∼ 1.6 τ 0 which is just a little inside the Cauchy horizon in region IV . Note that (w ′ * ) 2 | τc → ∞ here means this curve is normal to the w = const line here (these are straight spacelike lines passing through the bifurcation point and hitting the singularity in IV ), or equivalently tangent to the τ = const curve at τ c in IV . The corresponding surface from τ = 0 to τ = τ c can be drawn as a curve in the (τ, w)-plane: it can be joined smoothly at τ c with a corresponding curve from I − , resulting approximately in the red curve in Figure 2. This surface crosses the upper and lower Cauchy horizons at τ = 1 τ 0 , w = +∞ and τ = 1 τ 0 , w = −∞. The area of this surface has a leading divergence S ∼ Near the double zero, ∆w acquires a large contribution and we can estimate S ∼ R 2 dS G 4 V 1 ∆w. This surface is vaguely reminiscent of the extremal surface in [43] which goes from one timelike boundary to the other: since the dS bluewall metric itself is related to the AdS-Schwarzschild black brane by flipping minus signs, it is perhaps not surprising that there exists a similar surface here (but timelike), albeit with no obvious corresponding interpretation. In light of ER=EPR [44], it is amusing to speculate that the subregion here corresponds to copies on both I ± possibly "entangled", in some sense, thinking of the bluewall geometry as a "timelike wormhole" with the bifurcation region being the Einstein Rosen bridge. Note however that strictly speaking, all timelike geodesics go from I − to I + (unlike a shortcut in spacetime) either through the bifurcation region or through the Cauchy horizons, subject to the blueshift instability [13].
With a w = const slice, real extremal surfaces likewise havė For B 2 → ∞, these are again null extremal surfacesẋ 2 = 1 1−τ 3 0 τ 3 with vanishing area. These surfaces all lie on a w = const slice (thin black straight line from I + to I − in Figure 2).
Discussion
We have considered extremal surfaces in bulk de Sitter space (in the Poincare slicing) on constant boundary Euclidean time slices bounding subregions at future timelike infinity, motivated by the Ryu-Takayanagi prescription for entanglement entropy in AdS/CF T . Stemming from certain crucial sign differences, we have seen real extremal surfaces which are essentially the boundaries of the past lightcone wedges of the subregion: these are null surfaces with vanishing area. We have also seen complex extremal surfaces which do not always have real-valued area: this has parallels with analytically continuing from the Ryu-Takayanagi formula in AdS. In dS 4 , the area is real-valued, negative and has some structural resemblance with entanglement entropy in a dual CF T 3 . We have also studied extremal surfaces in the dS black brane (where there is a finite cutoff-independent extensive piece), and the related dS bluewall spacetime. It is worth mentioning that there may exist other extrema of the area functional: for instance, we have required that x(τ ) parametrizing the strip width be real-valued, which suggests the path τ = iT in complex τ -space. This appears consistent with possible dS/CF T interpretations and also corroborates with our discussion of the dS black brane. However this may be restrictive and more general complex extremal surfaces may be relevant in complex τ -space (see e.g. [23]). It may be interesting to understand if the analysis of [18] can be applied in this case to obtain insights into extremal surfaces.
While this analysis of bulk extremal surfaces could be regarded as simply a study of certain kinds of probes of asymptotically de Sitter spaces, it cannot pinpoint whether the corresponding area is expected to have a physical interpretation as entanglement entropy in the dual CFT, and if it should be real-valued, negative/positive and so on, although the result in dS 4 /CF T 3 does appear so. It is tempting to study this in light of the higher spin dS/CF T duality of [4]. However the presence of massless higher spin fields might suggest that extremal surfaces which are geometric gravitational objects are not accurate (see e.g. [45,46] which study entanglement entropy from Wilson lines in higher spin AdS holography). Nevertheless it is interesting to ask if these extremal surfaces have significance in some approximation where the higher spin symmetry is not exact. In this case, it would be interesting to explore the physical interpretation here more directly from a Euclidean CF T 3 point of view. One way to think about entanglement entropy in field theory (lattice models) is in terms of the eigenvalues of a correlation matrix and a corresponding von Neumann entropy (see e.g. [47] and more recently [48]). In that context, a simple model of a massless scalar field with wrong sign kinetic terms might suggest that the correlation matrix squared C 2 is related to that for an ordinary massless scalar field by a minus sign, so that C-eigenvalues λ k become iλ k . Then the associated von Neumann entropy is in general not real-valued: it would be interesting to understand this better.
Sentience criteria to persuade the reasonable sceptic. Animal Sentience 32(21)
When presented with evidence that Crump et al.'s criteria are satisfied for the animals in some taxon, a sceptic could reasonably continue to suspend judgement about whether those animals are sentient. This is because the criteria refer to abilities which are associated with sentience in humans, but it is not clear that sentience is necessary for these abilities. The criteria could be strengthened by requiring evidence of a contrast in performance between cases in which information is carried by felt and unfelt states.
1. Introduction. Crump et al.'s target article proposes a new set of criteria for pain experience, designed to be suitable for assessing evidence of sentience in invertebrates, and in this case applied to decapod crustaceans. In assessing these criteria we should bear in mind that different evidential standards may be appropriate for different purposes. For example, one of the authors has previously argued that, for the purpose of formulating animal protection legislation, the evidential bar for sentience should be set at an intentionally low level (Birch 2017). This precautionary approach is intended to reduce the risk that uncertainty will lead to inadequate protections for animals who are in fact sentient. One way to evaluate Crump et al.'s criteria would be to ask whether they set a suitable evidential bar for protection.
A different purpose for which Crump et al.'s criteria might be used is to determine whether a person should believe that the animals in some taxon are sentient. Believing something is not the same as taking it to be true for a limited practical purpose (such as formulating legislation). For a person unfamiliar with the literature on decapod behaviour and cognition, a reasonable attitude would be to suspend judgement, because they would be unable to justify believing either that decapods are sentient or that they are not. So we can also evaluate Crump et al.'s criteria by asking whether such a person should change their view, and believe that decapods are sentient, if they learned that all or most of the criteria were satisfied.
According to Crump et al., high confidence that seven or eight of their eight criteria are satisfied would amount to 'very strong evidence' of sentience (p. 9). This suggests that, in their view, one should believe that the animals in a taxon are sentient if they meet this standard. I am not so sure. In this commentary I will argue that it would be reasonable to continue to suspend judgement even if all eight of their criteria were met.
2. The criteria. Crump et al.'s criterion 1 is that the animal has nociceptors. Criteria 2 and 3 are that the animal has brain regions that integrate information from different sensory sources, and that neural pathways connect nociceptors with these regions. Criterion 4 concerns the modulation of the animal's behaviour in response to noxious stimuli by endogenous neurotransmitters and putative local anaesthetics, analgesics and other drugs. Criteria 5 and 6 are that the animal shows motivational trade-offs leading to flexible decision-making and flexible self-protective behaviour. Criterion 7 is that it shows associative learning, and criterion 8 is that it values putative analgesics or anaesthetics.
Crump et al. acknowledge that criterion 1 could easily be satisfied by a non-sentient animal. In other cases, however, their criteria are designed to avoid this feature. For example, they emphasise that criteria 5 and 6 are not satisfied by behaviour based on interaction between reflexes, since it is assumed that reflexive behaviour does not require sentience. Similarly, they emphasise that habituation and sensitisation are not sufficient to meet criterion 7, because these too are considered to be possible without sentience. This emphasis is clearly important, but we can reasonably ask for more. What is the positive argument for the view that an animal that satisfies the criteria is very likely to be sentient? Why should we think that flexible decision-making, for instance, could not be achieved by a non-sentient animal? I take it that hypotheses about how sentience facilitates cognitive functions in humans would be central to this argument (see p. 3 of the target article, and the discussion of associative learning on p. 7, for relevant comments). The criteria emphasise integration of information from different sources, flexible behaviour, and associative learning, which roughly correspond to three abilities that Birch (2022) lists as plausibly facilitated by sentience in humans.¹ The general form of the argument seems to be that if crustaceans have abilities that require sentience in humans, then they are likely to be sentient.

¹ The eight criteria for pain sentience (Crump et al., 2022, §2.2) describe their views in the language they use, but I take these terms to be closely related. 'Consciousness' here refers to what philosophers call 'phenomenal consciousness'. For an animal to be sentient is for it to be capable of conscious (felt) states. Such an animal might also be called conscious, and a non-sentient animal may be described as nonconscious. Conscious processes involve conscious states, while unconscious processes do not.
In the most notable response to the form of argument just sketched, Browning and Veit (2020) emphasise the difference between unconscious processes in conscious (sentient) animals and nonconscious processes in non-sentient animals.They argue that evidence that a function cannot be performed through an unconscious process in a conscious animal does not show that a similar function cannot be performed in a nonconscious animal which has evolved or developed differently.
In addition to this, Skora et al. (2021) provide evidence that instrumental conditioning requires consciousness in humans but they argue that this does not imply that evidence of instrumental conditioning in simpler animals supports the view that they are sentient.Instead, they suggest that instrumental conditioning may be 'optimised for more complex and flexible decision-making' in humans in a way which requires consciousness, while taking a simpler form, which does not require consciousness, in some other animals.Shea and Frith (2016) argue that consciousness facilitates tasks of certain kinds, rather than being necessary for any, thus supporting Browning and Veit's view that we should be open to the possibility that nonconscious animals have ways to perform functions which are facilitated by consciousness in humans.
It could therefore be reasonable to continue to suspend judgement, despite good evidence that Crump et al.'s criteria are satisfied.Someone who took this attitude could be described as a 'reasonable sceptic'.
Criteria which would make such an attitude unreasonable might incorporate a further idea proposed in the recent literature.Birch (2022) proposes that particularly strong evidence of consciousness would come from studies showing a contrast in performance, in a cluster of consciousness-linked abilities, between cases of putatively conscious and unconscious perception of stimuli.He suggests that masking procedures analogous to those that cause unconscious perception in humans could be used; these might selectively switch off consciousness-linked abilities in animals which exhibited those abilities when the same stimuli were not masked.This would be evidence for a contrast between two kinds of states in the animal's mind which would play a similar functional role to the conscious/unconscious contrast in humans.The same method is advocated by Crump in Birch et al. (2022) and by Crump and Birch (2021).I have also previously argued that strong evidence is provided by contrasts in behaviour that are best explained by a distinction between conscious and unconscious states, with reference to studies of taste aversion conditioning and incentive learning (Butlin 2020).
At this point one might ask whether a failure to find evidence of a contrast should be considered evidence that animals in the taxon in question are not sentient.Of course, failures to find evidence will often be explained by the limitations of particular attempts, so should not immediately be taken to show that no contrast exists, or that the animals are not sentient.However, an approach to the study of non-human consciousness should ideally include a principled method to establish negative conclusions (Shevlin 2021).The possibility that consistent failures to find evidence of a contrast should be taken as a negative marker seems worthy of consideration.
Conclusion.
I have argued that the sentience criteria offered by Crump et al. are too weak to persuade a reasonable sceptic of crustacean sentience.These criteria require evidence of the presence of sentience-linked abilities which are described in fairly broad terms.The sceptic could argue that it is plausible that a non-sentient animal could possess such abilities, appealing to Browning and Veit's unconscious/nonconscious distinction.More compelling criteria would incorporate a requirement for evidence of a contrast in performance.
Novel Biallelic NSUN3 Variants Cause Early-Onset Mitochondrial Encephalomyopathy and Seizures
Epitranscriptomic systems enable post-transcriptional modifications of cellular RNA that are essential for regulating gene expression. Of the ~ 170 known RNA chemical modifications, methylation is among the most common. Loss of function mutations in NSUN3, encoding the 5-methylcytosine (m5C) methyltransferase NSun3, have been linked to multisystem mitochondrial disease associated with combined oxidative phosphorylation deficiency. Here, we report a patient with early-onset mitochondrial encephalomyopathy and seizures in whom the novel biallelic NSUN3 missense variants c.421G>C (p.A141P) and c.454T>A (p.C152S) were detected. Segregation studies and in silico functional analysis confirmed the likely pathogenic effects of both variants. These findings expand the molecular and phenotypic spectrum of NSUN3-related mitochondrial disease.
Introduction
Epitranscriptomic systems are necessary for the post-transcriptional modification of cellular RNA, RNA splicing and protein translation (Hsu et al. 2017; Rozov et al. 2016). Of the ~170 RNA chemical modifications reported, methylation is among the most common (Machnicka et al. 2013), with more than 90 reactions involving tRNA molecules (Boccaletto et al. 2018; Hussain et al. 2016). Numerous RNA modification enzymes and catalytic RNA-protein complexes are necessary for the post-transcriptional modification events (Rozov et al. 2016), although many remain poorly characterised.
Human diseases linked to defects in these pathways emphasise the important role of epitranscriptomics in gene expression (Hsu et al. 2017). One recent example involves the 5-methylcytosine (m5C) methyltransferase NSun3, encoded by NOP2/Sun RNA methyltransferase 3 (NSUN3). Loss of function mutations in NSUN3 cause multisystem mitochondrial disease associated with a combined oxidative phosphorylation (OXPHOS) deficiency (Haag et al. 2016), highlighting the importance of NSun3 in mitochondrial translation.
Here, we report novel biallelic NSUN3 missense variants in a South Asian patient with early-onset mitochondrial encephalomyopathy and seizures.
Patient Cohort
A cohort comprising 167 South Asian patients with suspected primary mitochondrial disease (PMD), based on clinical, pathological and biochemical findings, was included. Forty-eight patients had early-onset mitochondrial encephalomyopathy. Informed written consent for diagnostic and research studies was obtained from all subjects. The study was approved by the Institutional Ethical Committee (IEC) of CSIR-Centre for Cellular and Molecular Biology, Hyderabad, India, and Nizam's Institute of Medical Sciences (NIMS), Hyderabad, India. (Robert D.S. Pitceathly and Kumarasamy Thangaraj are joint senior authors for this paper.)
Mitochondrial DNA Sequencing
DNA was extracted from blood leucocytes using a standard phenol-chloroform method (Thangaraj et al. 2002), with minor modifications. Complete mtDNA was amplified using 24 sets of primers to generate overlapping amplicons, purified and directionally sequenced using BigDye terminator cycle sequencing kit and ABI3730 XL Genetic Analyzer (Rieder et al. 1998).
Results
The novel compound heterozygous NSUN3 missense variants c.421G>C (p.A141P) and c.454T>A (p.C152S) were detected in one patient with early-onset mitochondrial encephalomyopathy and seizures (Fig. 1a, II:3). The proband, an 8-month-old boy born to consanguineous parents following a normal pregnancy, presented at 4 months of age with lactic acidosis, global developmental delay, hypotonia, muscle weakness and seizures. There was a family history of lactic acidosis in the proband's elder sister (Fig. 1a, II:1). Brain magnetic resonance imaging (MRI) revealed cerebral atrophy and white matter hyperintensities involving both cerebral hemispheres, particularly the frontal and temporal lobes (Fig. 1b). Segregation studies confirmed that the proband's unaffected father (Fig. 1a, I:1) and elder brother (Fig. 1a, II:2) were carriers of the c.454T>A (p.C152S) variant, while the proband's unaffected mother (Fig. 1a, I:2) was a carrier for the c.421G>C (p.A141P) variant.
Both variants reside in exon 3 of the NSUN3 gene (Fig. 2a and b), within a highly conserved region of the protein (Fig. 2c), and are absent in the 1000 Genomes Project (http://www.1000genomes.org/about), Exome Aggregation Consortium (ExAC, http://exac.broadinstitute.org/), Genome Aggregation Database (gnomAD, http://gnomad-old.broadinstitute.org/) and 485 ethnically matched healthy control subjects. In silico functional analysis confirmed that the variants have a high probability of being pathogenic (Table 1). Unfortunately, muscle tissue and cultured fibroblasts were unavailable for biochemical or functional studies.
Discussion
We report a South Asian patient with early-onset mitochondrial encephalomyopathy and seizures harbouring the novel, compound heterozygous NSUN3 missense variants c.421G>C (p.A141P) and c.454T>A (p.C152S). NSun3 is a novel human m5C RNA methyltransferase that specifically methylates mitochondrial tRNA Met. NSUN3-knockout cells demonstrate reduced mitochondrial protein synthesis and oxygen consumption, resulting in decreased mitochondrial activity (Trixl et al. 2018). Disease-associated mt-tRNA Met point mutations that impair NSun3-mediated methylation have pathological consequences (Nakano et al. 2016), and loss of function mutations in NSUN3 are reported to cause severe multisystem mitochondrial disease associated with combined OXPHOS deficiency (Van Haute et al. 2016). In the current study, we expand the molecular and clinical spectrum of NSUN3-related mitochondrial disease to include two novel NSUN3 missense variants and seizures.
The following lines of evidence support the pathogenic effects of the c.421G>C (p.A141P) and c.454T>A (p.C152S) NSUN3 variants: (1) conservation data suggests p.A141 and p.C152 are highly conserved amino acid residues of the NSun3 protein; (2) both variants are absent from the 1000 Genomes Project, ExAC, gnomAD and 485 ethnically matched healthy control subjects; (3) segregation studies confirm the variants are trans-acting in the proband, while both parents and the unaffected sibling are heterozygous carriers; (4) in silico functional analyses predict the variants to be pathogenic with high probability scores; and (5) no known pathogenic variants were detected in either mtDNA or disease-associated mitochondrial nuclear maintenance genes.

Fig. 2 Schematic diagram of the NSUN3 gene with previously reported variants (red letters) and the novel variants identified in this study (black letters) annotated (a). Sequence analysis of NSUN3 in the proband confirming the c.421G>C (p.A141P) and c.454T>A (p.C152S) NSUN3 variants, arrows (b). Evolutionary conservation data for NSun3 amino acid sequence across species at positions 141 and 152, arrows (c).
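As an illustration of the kind of check underlying evidence point (1), the short sketch below (not part of the original study) scans a toy multiple alignment and reports whether a given residue position is invariant across species; the sequences, species and column positions are invented placeholders, not the actual NSun3 alignment.

```python
# Minimal sketch of a cross-species conservation check at selected residue positions.
# The aligned fragments below are hypothetical placeholders, not real NSun3 sequences.
alignment = {
    "human":     "LKAAVCGTPQ",
    "mouse":     "LKAAVCGTPQ",
    "zebrafish": "LKAAVCGTSQ",
    "fruit fly": "LKAAVCGAPQ",
}

def conserved_at(aln, column):
    """Return (is_conserved, residues_seen) for a 0-based alignment column."""
    residues = {seq[column] for seq in aln.values()}
    return len(residues) == 1, residues

# Suppose columns 3 and 5 of this fragment stood for p.A141 and p.C152 (an assumption).
for label, column in [("p.A141", 3), ("p.C152", 5)]:
    is_conserved, residues = conserved_at(alignment, column)
    print(f"{label}: residues={sorted(residues)}, conserved={is_conserved}")
```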
In conclusion, we report novel biallelic NSUN3 missense variants causing early-onset mitochondrial encephalomyopathy and seizures, thereby expanding the molecular and phenotypic spectrum of NSUN3-related mitochondrial disease.
Substance use and its association with sociodemographic, family and environment-related factors among Technical and Vocational Education and Training (TVET) college students in Ataye, Ethiopia, 2019
Background: Substance use is prevalent in Ethiopia. Substance use can cause several health problems and is potentially harmful to educational performance, social functioning, and psychological and physical wellbeing, especially among youth and adolescents. This study aimed to determine the prevalence of lifetime and last-month substance use and its associated factors among Technical and Vocational Education and Training (TVET) college students in Ataye town. Methods: An institution-based cross-sectional study was conducted. Participants were selected by using a simple random sampling technique. Chi-square tests with their p-values were used to identify correlates for each substance. Bivariate and multivariate binary logistic regression analyses were performed to identify associated factors after controlling for confounding. Odds ratios with their 95% confidence intervals and P < 0.05 were used to declare the significance of associated factors. Results: A total of 483 individuals participated, a response rate of 94%. The prevalence of lifetime substance use was 32.5% (95% CI: 28.2, 36.5). The prevalence of last-month substance use was 21.9% (95% CI: 18.2, 25.5). Among lifetime users, the majority, 98 (25.5%), chewed Khat, followed by alcohol drinking, 75 (19.5%); cigarette, cannabis, and cocaine users were 15.3%, 2.5%, and 7.2% respectively. Among last-month substance users, the same proportion (21.9%) chewed Khat, followed by alcohol drinking (16.6%) and cigarette smoking (15.3%); 1.2% and 3.3% of students used cannabis and cocaine respectively. In multivariate binary logistic regression, being male (AOR=2.2, 95% CI: 2.3, 3.84), being from a divorced family (AOR=4.1, 95% CI: 1.1, 5.8), living more than 20 years in the town (AOR=3.45, 95% CI: 1.18, 10.1), having a family member who used substances (AOR=2.5, 95% CI: 1.1, 5.8) and having an intimate friend who used substances (AOR=5.3, 95% CI: ...) were significantly associated with substance use. Conclusions: Substance use among adolescents should get further emphasis to lower the prevalence.
Background
Substance use can affect the state of the body and the mind by depressing or stimulating the central nervous system or producing other harmful biochemical effects (1). An estimated 250 million people, around 5% of the global adult population, used drugs at least once in their lifetime as of 2015. More worrisome, about 29.5 million drug users, or 0.6% of the global adult population, suffer from drug use disorders (2).
The World Drug Report (2018) revealed that more than a quarter of a billion people use drugs globally. The actual number of people who use drugs increased by 20 million from 2015 to 2016. Roughly 450,000 people died as a result of drug use in 2015, according to the World Health Organization (WHO); of those deaths, 167,750 were directly associated with drug use disorders (mainly overdoses) (3).

According to the 2016 Ethiopian Demographic and Health Survey (EDHS) report, 35% of women and about half of men (46%) reported drinking alcohol at some point in their lives. The percentage of women who chewed Khat for 6 days or more in the last 30 days increased from 43% in 2011 to 65% in 2016; among men, the proportion increased from 56% to 64%. The use of tobacco increases with age among men and reaches a peak at age 40-44 years. There is wide regional variation in cigarette smoking, ranging from less than 1% of men in Amhara to 13% in Dire Dawa and 18% in Somali (4).
Drug use is associated with adverse health consequences, and problems of substance use seem to be a rapidly growing concern globally. It is also a major threat among youth in colleges and universities. Research has shown that drug use by school-going youth not only decreases their academic performance but also makes them vulnerable to crime. Furthermore, drug use exposes them to health risks among other numerous problems (5).

Substance use is a common phenomenon among college and university students in Ethiopia. Generally, substance use is one of the most serious health threats that potentially harm college and university students in many countries and the general population at large (6). The use of drugs among adolescents can be harmful, leading to failure in academic performance and increased risk of acquiring HIV/AIDS and other STIs. Substance use can lead to lethargy, hopelessness and insomnia (7). It has also been noted that family background, student pocket money, peer pressure, accessibility of substances and the customs of society at large contribute to the increased rate of substance use among college students. Most students, staff at institutions of higher education and youths in the community are considered at high risk of substance use (8).

The problem of substance use has historically been linked to Ataye town due to the accessibility of substances, and the substance use habit is a common phenomenon in the study area. Although such problems are among the top health risks for college students, there are no adequate studies exploring the prevalence and associated factors of substance use among TVET college students.
The findings of this study will be useful to the education bureau and school administrations to develop strategies to mitigate students' substance use behavior. The study findings also may help curriculum developers in formulating and incorporating psycho-education programs in TVET College which address the risk of substance use. School managers also may benefit from findings so that they may come up with policies and strategies for controlling this potentially dangerous habit.
Specific objectives
To determine the prevalence of last-month substance use among Ataye TVET College students
To determine the prevalence of lifetime substance use among Ataye TVET College students
To identify factors associated with substance use among Ataye TVET College students

Methods

Study area, design and period

The study was conducted at Ataye TVET College, North Showa zone, in the Amhara Regional State of Ethiopia. The study area is located 272 km from Addis Ababa and 130 km from Debre Berhan town. Orthodox, Muslim and Protestant followers are the major inhabitants of the study area.
At Ataye TVET College, 1,433 students were enrolled in the academic year; of these, 683 were male and 750 were female. An institution-based cross-sectional study was conducted to assess the prevalence of last-month and lifetime substance use and associated factors among TVET College students from 1 March to the end of May 2019.
Population
Source population: All regular Ataye TVET College students.
Study population: All regular Ataye TVET College students who were available during the study period.
Sample size calculation and sampling procedure

The sample size was calculated using a single population proportion formula. A 15.36% rate of substance use was taken from a related study (9), with a margin of error of 5%, a confidence level of 95%, and a non-response rate of 10%. The final sample size became 514 students. Ataye TVET College had 1,433 students enrolled in the study academic year, in 9 departments spanning the first to the third year: Agriculture, Hotel kitchen operation, Electricity, Auto engine service, Garment, Surveying, Construction, Metalwork and Road construction. From the 9 departments, five were selected by the lottery method. The total sample size was distributed proportionally to the selected departments based on the total number of students in each year. The study participants were selected by using a simple random sampling technique, with random numbers generated using OpenEpi software.
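For readers who want to reproduce the sample size arithmetic, the sketch below applies the single population proportion formula n = z²·p(1−p)/d² with the stated inputs. Note that this basic formula with these inputs yields roughly 220 after the 10% non-response adjustment, so further adjustments not detailed in the text (for example, a design effect) were presumably applied to reach the reported 514; the code and the design-effect value shown are illustrative assumptions, not the authors' exact calculation.

```python
import math

def single_population_proportion_n(p, d, z=1.96, non_response=0.0, design_effect=1.0):
    """Single population proportion sample size:
    n = z^2 * p * (1 - p) / d^2, inflated for a design effect and non-response."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    n *= design_effect
    n *= 1 + non_response
    return math.ceil(n)

# Inputs stated in the paper: p = 15.36%, d = 5%, 95% confidence (z = 1.96), 10% non-response.
print(single_population_proportion_n(0.1536, 0.05, non_response=0.10))                   # 220
# With a hypothetical design effect of 2 (an assumption; the paper does not state one):
print(single_population_proportion_n(0.1536, 0.05, non_response=0.10, design_effect=2))  # 440
```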
Eligibility criteria
Inclusion criteria: all students at Ataye TVET College.
Exclusion criteria: Students who could not be reached in 3 visits during the data collection period were excluded.
Study variables
Dependent variables: lifetime substance use and last-month substance use.
Independent variables: socio-demographic characteristics (age, sex, residence, economic status, education status, religion, ethnicity, living status, mother's education level, father's education level, friend's use of substances, lack of family supervision, family conflict, family history of substance use, family income, availability of substances, source of money, grade level, peer pressure, and marital status of family).
Operational definition
Lifetime substance use: use of one or more substances for non-medical purposes after joining Ataye TVET College.
Last-month substance use: use of one or more substances for non-medical purposes in the 30 days prior to the data collection period.
Licit substances: drugs that are not prohibited to sell, buy or use among persons above 18 years old (i.e. alcohol, Khat, and cigarettes) during the study period.
Illicit drugs: drugs which are prohibited to sell, buy or use among all age groups (other than alcohol, Khat, and cigarettes) during the study period.
Data collection tools and procedures

Data were collected using a structured and pretested self-administered questionnaire. The questionnaire included variables such as the habit of substance use, socio-demographic and economic variables, the practice of substance use, the reason for substance use, and family history of substance use. The questionnaire was prepared in English and then translated to the local language (Amharic). Out of nine departments, five departments (55%) were selected by the lottery method, and participants were selected by using a simple random sampling technique.
Data quality control
To maintain data quality, data collectors and supervisors were trained for one day on the study and the data collection procedures. Data were collected by five trained diploma nurses with close supervision. A pre-test was conducted on 5% (n=24) of students from a similar study population in non-participating departments two weeks before the actual data collection period.

The collected data were reviewed and checked for completeness before data entry. The data were coded, entered into Epi Info 3.5.1 and exported to SPSS version 21 for analysis.
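As an illustration of the analysis strategy described above (chi-square correlates followed by bivariate and multivariate binary logistic regression reported as odds ratios with 95% CIs), the sketch below shows how an equivalent analysis could be run in Python on a hypothetical data frame; the file name and variable names (sex, khat_last_month, lifetime_use, family_divorced, friend_user) are placeholders rather than the study's actual coding, and the authors used Epi Info and SPSS rather than this code.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical analysis file; column names are illustrative placeholders.
df = pd.read_csv("substance_use_survey.csv")  # one row per student

# Chi-square test of association between sex and last-month Khat chewing.
table = pd.crosstab(df["sex"], df["khat_last_month"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Multivariate binary logistic regression for lifetime substance use (coded 0/1),
# reported as adjusted odds ratios with 95% confidence intervals.
model = smf.logit("lifetime_use ~ sex + family_divorced + friend_user", data=df).fit()
aor = pd.DataFrame({
    "AOR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
})
print(aor)
```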
Ethical consideration
Ethical clearance was obtained from the Debre Berhan University ethical review committee. A cooperation letter was written to the study institution and a permission letter was obtained. Oral informed consent was taken from each study participant aged 18 years and above. Assent was taken from participants aged less than 18 years. Written informed consent was taken from the family/legal guardian of participants aged less than 18 years.
Results
Sociodemographic characteristics of students

A total of 483 students participated, a response rate of 94%. The median age of the students was 20 years with an interquartile range of 3 (Q1 = 18, Q3 = 21 years). Among the sociodemographic variables, nearly equal proportions were observed in terms of sex (49.7% male and 50.3% female). The majority, 346 (71.6%), of students were Orthodox in religion and the remaining 137
Family-related factors
277 (57.4%) of the students' mothers and 251 (52%) of their fathers had not attended modern education. Sixty percent of students' parents lived in an urban area, and 212 (44%) were farmers by occupation. Nearly eighty percent of students' parents were married, and 208 (43.1%) of students' parents had lived in Ataye town for 1-5 years (Table 1).

Students reported that a reason for their substance use was the lack of recreational areas in the town. The smallest group, 63 (13%) of students, started substance use due to peer influence. Furthermore, students reported other driving environmental factors for their lifetime substance use (Fig. 1).
Prevalence of last-month substance use and its correlates

Alcohol, Khat and cigarette use in the last month, and its variation across the study subjects' characteristics, were compared using chi-square tests with their p-values. The results showed that age group contributed to the variation in last-month licit substance use.
Urban versus rural residence was statistically insignificant for all three listed drugs (alcohol, Khat, and cigarettes), meaning that students in these subcategories used substances equally in the last month. Living alone versus living with others contributed to the variation in last-month prevalence of alcohol drinking (X² = 7.8, p-value = 0.02) and Khat chewing (X² = 16, p-value = 0.00), but not cigarette smoking (X² = 3.3, p-value = 0.2). Other student characteristics were statistically significant correlates of last-month substance use, indicating that students with different predictor variables did not use drugs equally in the last month (Table 2). Some student and family characteristics were not statistically significantly associated with last-month cocaine use, showing that students in these subcategories used cocaine equally in the last month (Table 3).

Table 3: Last-month illicit drug use and its correlation with student and family characteristics among Ataye TVET students. Key: * p-value < 0.05, ** p-value = 0.00, X² = chi-square.

Variation in lifetime cannabis use was observed across differences in living arrangements and family monthly income. For lifetime cocaine use, variation was observed across sex, academic level, and living arrangements (Table 5).

Table 6: Bivariate and multivariate analysis to identify factors associated with lifetime drug use among Ataye TVET students.

Discussion

Only two studies, one from Hawassa University (10) and another from Gondar University (11), reported a similar prevalence (around 24%) of lifetime Khat chewing.
The prevalence of lifetime Khat chewing in the current study was lower than in studies from Haramaya (12) and Jimma (9) universities. A possible reason is that Khat is more readily available in Haramaya and Jimma towns than in the current study area; in particular, Haramaya dwellers link Khat to the household economy and chew Khat habitually (13), which makes it easy for students to be exposed to and practice Khat chewing. The prevalence of lifetime Khat chewing was higher than in studies among Addis Ababa University, Debre Berhan University, and Ataye town students (9,14,15). A possible explanation for the higher prevalence of lifetime Khat chewing in Ataye lies in social and environmental differences: first of all, Khat is cultivated around Ataye town, which could make it easy for students to access. All these factors can contribute to the practice of Khat chewing among Ataye TVET students compared to the above Ethiopian universities.
The prevalence of last-month Khat chewing was lower than in a study from the same study area (9) and from Hawassa University (8). Except for a study from southern Iran (16), both last-month and lifetime alcohol drinking were lower than in studies from Jimma, Gondar, Debre Berhan and Hawassa Universities, and Woreta town (14, 17-19). This may reflect underreporting due to social desirability and an increased abstinence rate in the past 30 days. In addition, since data collection was done inside the teaching classroom, students with addiction behaviours may have remained outside the classroom because of their academic and living lifestyle; this can potentially introduce selection bias and result in a lower estimate of substance use prevalence (10).

Lifetime and last-month cigarette smoking were lower than in studies from southern Iran (16) and Jimma University (15). A possible explanation is the health behaviour of the students and the age difference of the target populations.

Male students were two times more likely to use a substance in their lifetime (after joining college) compared with female students. This association is in agreement with studies done at Haramaya University (12) and Jimma University (17); male students use substances more than females, which may reflect cultural and hormonal differences.
Students who had substance-using friends and family members were more likely to use substances in their lifetime compared with those who had no substance-using friend or family member. This association is supported by studies done at Addis Ababa University (14), Debre Berhan University (15), Hawassa University (10), in the Greater Accra metropolis (20), and in Woreta town (19). This might be because such friends and family members familiarise students with substances and their use, thereby reducing students' subjective norms and perceived risk.

Being from a divorced family also contributed significantly to substance use compared with being from a married family. A study from Kuwait supports this association (21). There are many factors that increase a young person's likelihood of substance use; among them, parental divorce is the main factor, and parental conflict and lack of parental supervision are known risk factors for young people's health problems.

Easy availability of substances was a statistically significant factor for lifetime substance use compared with students from areas with less substance accessibility. The likely reason is that increased substance availability is associated with increased use. In the study area, licit drugs are easily available, whereas the availability of illicit drugs (cannabis and cocaine) is not known. In the study area (Ataye) and its catchment areas, Khat is cultivated by the community, and this may put the study subjects at greater risk of exposure.

Students who had lived in Ataye town for 20 years or more were also more likely to be exposed to substances compared with those who had lived there for 1-5 years. This might be because living more years in one town increases exposure to substances; more years allow assimilation of the community's habit of substance use, even over two decades.

Generalizability: The external validity of the study was addressed through the sample size calculation, the sampling procedures and techniques, the training of data collectors, and data quality control.
Limitations of the study
The current study did not assess substance abuse or substance use disorder. There was no way to verify underreporting of substance use, and students are more likely to deny their use because of social desirability bias.
Conclusions
The prevalence of last-month and lifetime substance use among Ataye TVET College students was comparable with other studies done in Ethiopia. Students aged less than 18 years also reported substance use. Cannabis and cocaine are illegal drugs in Ethiopia, but the reported proportion of their use was considerable and needs government attention.
Recommendations
It is advisable for the college administrators to work together with town administrators to mitigate the problem, including closing substance use homes around the school. Overall, substance use among adolescents should get further emphasis to lower the prevalence.

Availability of data and material: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate: Ethical clearance was obtained from the Debre Berhan University ethical review committee. A cooperation letter was written to the study institution and permission was obtained. Oral informed consent was taken from each study participant aged 18 years and above. Assent was taken from participants aged less than 18 years. Written consent was taken from the family/legal guardian of participants aged less than 18 years.
Consent for publication: the manuscript did not contain individuals' person detailed data in any form.
Competing Interest: The authors declare that they have no conflicts of interest.
Funding: The research was not funded by any funder. The costs of the study design, data collection, analysis and interpretation of data, and writing of the manuscript were covered by the study authors.
Figure 1: Self-reported environmental reasons for substance use among Ataye TVET students, Ethiopia, 2019.
Queensland Telepaediatric Service: A Review of the First 15 Years of Service
In November 2000, the Queensland Telepaediatric Service (QTS) was established in Brisbane, Australia, to support the delivery of telehealth services to patients and clinicians in regional and remote locations. The QTS was built on a centralized coordination model, where telehealth services could be effectively managed by a dedicated telehealth coordinator. In doing so, telehealth referral and consultation processes were efficient and clinicians felt better supported as they adjusted to new processes for engaging with patients. We have conducted a retrospective review of activity associated with the QTS and summarized key activities which have arisen from this extensive program of work. Telehealth service records and associated publications were used to describe the evolution of the QTS over a 15-year period. From November 2000 to March 2016, 23,054 telehealth consultations were delivered for 37 pediatric clinical specialties. The most common service areas included child and youth mental health, neurology, burns care, surgery, and ear nose and throat services. A range of different telehealth service models were developed to align with different clinical service needs and location of services. Whilst most work involved video consultation between hospitals, some services involved the delivery of telehealth services into the home, schools or community health centres. Despite its longevity, the QTS was not immune to the usual challenges associated with telehealth implementation, service redesign and sustainability. Experience reported from the QTS will be useful for other health services seeking to develop comprehensive telehealth services in a rapidly changing healthcare environment.
INTRODUCTION
Conventional models of health care in Australia require patients to travel (often great distances) to receive specialist care. Occasionally specialist teams travel to remote communities to deliver health care services; but these tend to occur on an intermittent basis. For logistical reasons, some patients do not receive the care they require because of the difficulties of having to leave their community for an appointment and/or treatment. Telehealth can be used to improve access to health services for people living in distant locations; this is important in Australia where the majority of specialist health services are based in metropolitan areas, and the distances between these hospitals and small rural hospitals may be considerable.
Over the last two decades, the use of telehealth to deliver pediatric telehealth services (telepaediatrics) has been reported by many countries (1,2). Telepaediatric service models have included multidisciplinary services operated from a centralized coordination centre; discipline-specific telehealth services for children and young people; and services in different settings (such as hospitals, community health settings, schools and in the home). The idea of providing telehealth services particularly for children and their families makes sense because of the centralization of pediatric specialist services, imposition of travel away from home, and the requirement for a child to be accompanied by a parent or caregiver for travel to and from their specialist appointment.
Despite the clear benefits of telehealth, a long-term effort was required to address the challenges of telehealth implementation and uptake in the Queensland public health service. In November 2000, a pediatric telehealth service model was established at the Royal Children's Hospital (RCH) in Queensland, Australia (3). The Queensland Telepaediatric Service (QTS) offered a convenient referral process (single point of contact) for telehealth referrals and coordination of telehealth consultations (4). The majority of telehealth consultations involved a videoconference appointment between the specialist hospital in Brisbane and a referring hospital. Other communication methods included correspondence by email or telephone. The telehealth service used the videoconferencing network operated by the state health department-comprising both hardware and software systems. In some cases, customized telehealth systems were deployed to improve child-friendliness, or where standard systems did not meet the clinical requirements, see Figure 1 (5).
The University of Queensland's Centre for Online Health (COH) was responsible for establishing and operating the QTS in partnership with the Queensland health department. Operational responsibilities were funded by a service level agreement; and an integrated research program was funded by community and corporate organizations. The aim of this review is to summarize patterns of service activity, outline specific service models, and describe the key enablers and challenges associated with the service.
METHODS
This study presents a retrospective review of QTS activity reported over a 15-year period from November 2000 to March 2016. Service activity was obtained from an operational database, which was owned and maintained by the COH. This database contained information about each consultation including specialty, duration, location and modality. This review also summarises published studies undertaken during the course of this program. All published studies reported in this review received ethical approval from the appropriate committees. Further permission and exemption from ethical review was obtained from the Children's Health Queensland Hospital and Health Service Human Research Ethics Committee to publish overall service activity according to service records managed by the COH (dated 13 June 2019).
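As an illustration of the kind of database summary such a review relies on, the sketch below aggregates a consultation log by year, specialty and modality; the file name and column names (date, specialty, modality) are assumptions made for illustration, not the actual schema of the COH operational database.

```python
import pandas as pd

# Illustrative consultation log with one row per telehealth consultation.
records = pd.read_csv("qts_consultations.csv", parse_dates=["date"])

activity_by_year = records.groupby(records["date"].dt.year).size()
top_specialties = records["specialty"].value_counts().head(5)
modality_share = (records["modality"].value_counts(normalize=True) * 100).round(1)

print(activity_by_year)   # annual activity trend
print(top_specialties)    # busiest clinical specialties
print(modality_share)     # percentage split across video / email / telephone
```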
Service Activity
From November 2000 to April 2015, a total of 23,054 telehealth consultations were coordinated through the QTS. The majority of these (95%) involved consultations by videoconference, whereas the remainder involved email (3%) or telephone consultations (<1%). A total of 37 clinical specialties were actively involved in the QTS, delivering services to 110 sites throughout Queensland and Northern New South Wales, see Figure 2.
The volume of telehealth activity gradually increased during the first 10 years (2000-2010), with the introduction of new specialties and expansion of services within certain clinical disciplines, see Figure 4. In 2006 (A), our mobile videoconferencing systems were used to provide pediatric support to regional hospitals (child-friendly robot ward rounds); and in 2008 (B), the mobile ear, nose and throat (ENT) surveillance service for Indigenous children was established in Cherbourg, resulting in additional ENT consultations at the RCH. From 2010 onwards, activity levels remained static or fell slightly. This mainly coincided with a significant reduction in the QTS operational budget (C); staged closure of the RCH (D); and the transfer of the QTS (E) over to the new children's hospital in Brisbane.
Referrals for telehealth consultations were made from over 270 health services, mainly hospitals throughout Queensland and northern New South Wales. The top five referring sites were Mackay (20%), Atherton (14%), Hervey Bay (7%), Mt Isa (5%), and Innisfail (5%). Almost all referrals originated from a regional hospital-or from the specialist hospital (provider site) for patient follow-up.
The QTS was primarily a clinical service; 97% of its use involved providing advice about a patient, reviewing a case, initial assessment before transfer to the specialist hospital or handover of a patient before return to regional hospital. The remaining activity (<3%) concerned the delivery of education or administrative services.
Service Models
The QTS was flexible and responsive to clinical needs; a variety of different service models were developed amongst the specialist areas, reflecting the needs of the patient and the purpose of the consultation. Some were developed for general outpatient appointments with specialists, emergency advice for the assessment of infants with a cardiac condition, follow-up of patients receiving specialist burns care, case conferencing with regional health care teams involved in the care of children and families with mental health conditions, handover of patients to regional hospitals and home care, and community-based assessments of children with chronic ear disease. Other applications were primarily developed for education and training purposes.

FIGURE 1 | Wireless (robot) videoconference system used for bedside consultations in regional pediatric wards.
General Outpatients
The most common application within the telehealth service was the delivery of outpatient appointments for children and families who would normally travel to Brisbane. Most clinical specialties were actively engaged and provided telehealth clinics on either a weekly or a monthly basis depending on demand. Common examples included clinics for diabetes, neurology, orthopedics, nephrology, rheumatology, and pediatric surgery (6-11). Often these clinics would be run in parallel with the in-person clinics in Brisbane, and the specialist time was allocated as required to the telehealth service. For certain specialties, a telehealth clinic list was established prior to the session, and connections involved multiple patients at the same (referring) site or multiple patients at a range of different sites. Telehealth clinics all required careful coordination to ensure site preparation and collection of the necessary clinic information in advance of the consultation.
Ad-hoc and Urgent Consultations
The assessment of newborn children with suspected cardiac defects was one of the services offered by the QTS. A pediatric cardiologist was able to assess infants remotely, by instructing the remote sonographer and viewing the echocardiogram in real-time (12). This meant a timely diagnosis and management plan could be discussed with the remote pediatrician caring for the child, and an informed decision could be made whether to transfer the infant to the specialist hospital or not. In the majority of cases, transfer of infants was then avoided and the infants continued to be managed locally with remote specialist support/advice as required. This service also provided the sonographer conducting the scan with valuable training experience whilst working with the cardiologist (13).
Post-acute Burns Care
In Queensland, specialist burns care is provided by one hospital. Referral guidelines for children with a serious burn injury indicate that referral to the specialist is necessary. Once care is received, follow-up care may last for many months or years. Prior to the use of telehealth, some outpatient appointments in Brisbane lasted for only minutes, despite travel to the hospital taking many hours. The use of telehealth for outpatient burns care has revolutionized the support for children throughout the state (14). The burns team regularly provide videoconference appointments to sites throughout Queensland and northern New South Wales. Appointments often involve occupational therapists (OTs) and nurses in regional hospitals, and the specialist burns staff in Brisbane (a medical consultant, OT and nurse). In addition to the general follow-up appointments, telehealth has also been very useful for providing interim advice on a burn injury, to assist with immediate treatment at the remote hospital and planning for the transfer of the patient (15).
Mental Health Services
The delivery of telehealth by the e-child and youth mental health services (e-CYMHS) demonstrates a very effective model, combining conventional outreach services (where the specialist team travel to the regional towns) with telehealth support (16,17). Most telehealth clinics involve case conferencing, where a series of cases are presented to the specialist team (psychiatrist and other mental health clinicians) via videoconference. In some cases, the patient's family would also participate in the session. As with the overall QTS, the success of the e-CYMHS was attributed to the role of the dedicated e-CYMHS telehealth coordinator. The telehealth component of the service was a very cost-effective way of keeping in contact with regional sites (see section on cost savings) (18).
Discharge Planning and Home Care
For patients receiving specialist care in Brisbane, telehealth was used to support the process of back-transfer to regional hospitals or to home care. In these cases, specialist teams could hand over important information about the cases via videoconference and ensure that regional staff and families were prepared for and understood the clinical care requirements. Anecdotally, families reported that they felt at ease knowing that the regional staff were familiar with the treatment and follow-up care. This was fairly common practice for children referred from the oncology and palliative care unit (19,20). In the case of home care, the home nursing services and local general practitioners were often engaged in the telehealth service.
Community-Based Health Surveillance
Through consultation with the Cherbourg community health service (∼260 km from Brisbane) and specialists in Brisbane, we developed a surveillance program for Aboriginal and Torres Strait Islander children at risk of ear disease, to ensure early detection and referral for treatment. We developed a mobile telehealth-enabled ear screening service, see Figure 5, which was operated by an experienced local Aboriginal Health Worker (AHW) (21). The AHW used the mobile service to visit schools and routinely assess Indigenous children. The AHW assessments included pure-tone audiometry, tympanometry, and digital otoscopy. In cases where children failed a screening test or if the AHW had any concerns, the AHW assessments were shared asynchronously via a secure online database, and reviewed by an ENT specialist. Assessment and treatment planning would then be done by the specialist and/or referred to the local medical service. This screening service has resulted in improvements in overall screening rates and the emergence of a model of care, which is community led and culturally appropriate (22,23).
Education and Training
Whilst the majority of services delivered through the QTS were of a clinical nature, the use of videoconferencing was also important for education and training purposes. All clinical consultations had an intrinsic educational benefit because of the interaction that occurred between specialists and clinicians at the referring sites. Anecdotally, clinicians appreciated the service because of the learning opportunities it offered. Specific services were also developed to support the training requirements of regional staff responsible for children with special care needs (such as burns care and child development). When patients were being directed back to primary and secondary centres, it was important that clinicians were supported with clinical education (24,25). Students undertaking their clinical training in rural and remote hospitals were also supported by the QTS, with access to interactive lectures by videoconference-allowing participation irrespective of location (26).
Cost Savings
The majority of savings were associated with the reduced need for patient travel. Economic evaluations using cost-minimization analysis methods identified the level of activity required to reach a threshold at which the cost of providing one service equalled the cost of the other. The child and youth mental health service, which was responsible for almost one-third of all QTS activity, demonstrated that at the level of activity achieved in their service, it was less expensive to provide telehealth services than to do outreach (where the specialist team traveled to the regional town) or to arrange for the patient and family to travel (18,27). Similar studies showed potential savings to the health service for ENT services (28).
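To make the threshold idea concrete, the following sketch (illustrative only; the dollar figures are invented and are not taken from the cited economic evaluations) computes the break-even number of consultations at which a telehealth model with higher fixed costs but lower per-consultation costs becomes cheaper than travel-based care.

```python
import math

def breakeven_consultations(fixed_telehealth, var_telehealth, fixed_travel, var_travel):
    """Smallest consultation count n at which
    fixed_telehealth + n * var_telehealth <= fixed_travel + n * var_travel."""
    if var_travel <= var_telehealth:
        raise ValueError("A break-even point requires a higher per-consultation cost for travel.")
    n = (fixed_telehealth - fixed_travel) / (var_travel - var_telehealth)
    return max(0, math.ceil(n))

# Hypothetical annual costs (AUD): telehealth carries equipment and coordinator overheads,
# while travel-based care has little fixed cost but high per-patient travel expenses.
print(breakeven_consultations(fixed_telehealth=80_000, var_telehealth=60,
                              fixed_travel=5_000, var_travel=900))  # -> 90 consultations/year
```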
Key Enablers
A key factor in the success of this program was the centralization of support made available by the QTS, which made the referral, consultation and documentation process convenient for clinicians. Another key factor was the integration of telehealth services on a business-as-usual basis, which was reflected in clinic schedules, service delivery planning and staffing allocations for each specialty. Running these clinical activities alongside a robust research program gave clinical teams the opportunity to contribute to the evaluation process and also to the planning of innovative services within the department. Clinician availability and support for both the near (provider) and far (receiver) end was very important, as was the need to train clinicians in certain skills relevant to telehealth consultation processes. The telehealth process also required new referral processes, and once these were made clear, the coordination of appointments and clinics became more straightforward.
Access to high quality telehealth facilities in a central and easily accessible location was important. The COH provided dedicated telehealth studios, which were used for most clinics. Over time, with improvements in software-based videoconferencing systems, some clinical groups were able to conduct their own telehealth work within their own department. This still required an appropriate place, which was private, and had good lighting and suitable acoustics.
World Firsts
Academically, the COH published over 75 journal articles relating to "telepaediatrics" during the 15-year period, and pioneered a number of "world firsts" which are improving access to health and support services for regional families. These included the establishment of the QTS, the first fully serviced multidisciplinary pediatric telehealth service (2,3); the first child-friendly mobile telehealth service (robots) (5,24,29); the first use of telehealth for the delivery of clown doctor outreach services (30); and the first telehealth-supported Indigenous ear screening service with online links to pediatric specialists (21,22).
Challenges
Funding to cover the cost of telehealth is a commonly reported challenge, and one faced by the QTS since establishment. Initially, QTS telehealth services were not funded, so unless the telehealth service was purely a substitution of face-to-face clinic appointments, clinicians were providing services without direct funding. In 2011-2012, new funding opportunities emerged when the Commonwealth Government introduced funding for specialist video consultations under the Medicare Benefits Schedule (MBS) (31). Around the same time, the Queensland Government Statewide Telehealth Unit introduced incentive funding for telehealth to promote its uptake. This incentive funding was in addition to activity-based funding, which includes all activity (telehealth and in-person consultations). In small rural hospitals, activity-based funding is typically not viable due to relatively lower activity, and therefore block-funding arrangements are supported by the health department. In 2020, new temporary funding was introduced by the Australian Government in response to the coronavirus disease 2019 (COVID-19) pandemic. Collectively, these funding developments have resulted in substantial growth in telehealth activity across Queensland and throughout Australia (32,33).
Staff availability was also a challenge because telehealth sessions not only relied on the availability of the specialist, but also the availability of the referring clinicians and support staff at the regional hospital. We addressed this challenge by setting up clinics in advance, so that regular clinical days and times were available-either on a weekly, fortnightly, monthly or quarterly basis, depending on demand. The delivery of telehealth also changed from an ad-hoc arrangement to an appreciation that telehealth was integrated and routine. Like any telehealth operation, we did experience some staff resistance, but this was mainly related to the lack of clear processes, time constraints, and telehealth awareness. In this context, clinician acceptance and willingness to practice were important factors in the uptake of telehealth (34). Resistance transformed into interest over time as clinicians gained experience, and processes were put into place to ensure appropriateness of telehealth referrals and case preparation (case history and other relevant documentation) (35).
DISCUSSION
The QTS represents an extensive program of work conducted over a 15-year period. Examples of pediatric telehealth services have emerged as a result of different clinical requirements. The expansion of telehealth was sustained over an extensive period of time, and for a small number of mature services, we observed a willingness to conduct telehealth consultations outside of the telehealth centre and in the clinical departments. This worked particularly well when there was administrative assistance available in the department to help prepare cases, conduct test calls, send appointment details to families and help with the documentation (hospital records, investigative tests, referral notes etc.). The work done in Queensland also highlighted the importance of the role of a telehealth coordinator. This was a key requirement for the facilitation of services and an intended strategy to ensure that the referral and telehealth consultation processes were managed efficiently and without unreasonable burden on the clinician. Originally considered a superfluous resource by some health managers in Queensland, telehealth coordinator positions are now fully supported throughout the state-and recurrently funded by the health department, on a business as usual basis.
Over time, it was encouraging to see the number of specialties engaged in the QTS. It was clear that telehealth was and has continued to be used as a routine method of consultation for medical, nursing and allied health staff in the health service. The development of the service was also inspired by a variety of COH-led research projects, which helped to generate new ideas amongst clinicians when caring for children and families in remote locations. Funding for these projects was mainly derived from competitive research grants and philanthropic funding. Combining research and service delivery was a useful process because it meant that clinician engagement was strong and ideas were generated in direct response to clinical needs. The duration of the service development work also meant that information could be collected to demonstrate trends in activity and opportunities for service growth.
The work highlighted in Queensland is one of the most prominent examples of telepaediatrics reported worldwide, operating over a significant period and demonstrating a large volume of activity across many different specialties. Other successful examples exist in the USA and Canada, where telehealth services have been established for emergency and intensive care support, hospital outpatients, primary care and home support (36)(37)(38)(39)(40)(41). Work in California also demonstrated cost savings and significant environmental benefits due to reduced travel requirements for patients, hence another reason for doing telehealth (42).
1. The establishment of a successful telehealth service requires time, patience and close engagement with clinicians and health service managers.
2. Effective telehealth services require dedicated administrative support services (telehealth coordination) and strong clinical leadership.
3. Integrating telehealth into existing hospital systems (such as referral and triage management, scheduling and billing processes) is important for services to become routinely adopted.
4. Clinician involvement in the planning and delivery of new telehealth-supported models of care ensures that services fundamentally address clinical requirements and patient needs.
5. The broad nature of services established through the QTS demonstrates the value of telehealth for a diverse range of clinical specialities and also highlights the importance of different service models for different clinical areas (not a one-model-fits-all approach).
6. Most potential savings attributed to the QTS were related to reduced patient travel.
7. The unique partnership between the COH (university) and the health department (service) resulted in 15 years of pioneering work and the development of a rich evidence base for telepaediatrics.
CONCLUSION
The QTS work has laid the foundations for the provision of pediatric telehealth services in Queensland, and many of the service models have been replicated in other places throughout Australia. During the operational period reported in this review, the partnership between the service provider and the university was a unique opportunity to leverage research funding and to drive innovation within the service. We encourage future reviews of the QTS to monitor progress and to demonstrate the benefits for children and families living in remote locations. It is highly likely that new telehealth-enabled models of care will continue to evolve in response to the many challenges faced by the health department, new funding arrangements, advances in communications technology and the expectations of consumers due to increased experience and raised awareness of telehealth.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article is available upon request, subject to ethics approval requirements.
ETHICS STATEMENT
Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
AS was responsible for establishing the QTS and for leading the establishment of most of the services described in the review, obtaining clearance from the appropriate ethics committee, leading the analysis of activity, and drafting the manuscript with input from all authors. AS, NA, and LC conceived and designed the review of the QTS and were responsible for data collection. All authors were involved in reviewing the manuscript and critically appraising the content and were responsible for final approval of the manuscript before submission for publication.
FUNDING
Operational telehealth services were funded by service level agreements with the public health service (Queensland Government) and research activities were funded by external government, industry, and community agencies.
Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry
Motivation In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Results Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Availability Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
Listing 3 (TIC calculation using parallel indexed access): C++ implementation in OpenMS that calculates the TIC using the "random access" algorithm on an indexed mzML file and makes use of parallelization. Listing 7 (TIC calculation using Python in-memory access): implementation in OpenMS that calculates the TIC using the Python "in memory" algorithm. Listing 10 (TIC calculation using the Python interface): Python code that calculates the TIC using the "event-based" streaming algorithm.
In Figure 4 we compare the performance of the different APIs provided in OpenMS through pyOpenMS [1]. As expected, the processing speed of Python is slightly slower than that of C++; however, the new pyOpenMS execution times are also substantially improved over the OpenMS 1.11 kernel. Only the cached implementation in Python offered a substantial speed gain over C++, but the improvement was not as large as that observed for a pure C++ implementation.
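For orientation, the following Python sketch shows how a TIC calculation of this kind can be written against the pyOpenMS in-memory API (MSExperiment loaded via MzMLFile); it is a simplified illustration of the access pattern, not the benchmarked implementation, and assumes a current pyOpenMS release.

    import pyopenms as oms

    exp = oms.MSExperiment()
    oms.MzMLFile().load("input.mzML", exp)   # load the whole file into memory

    tic = 0.0
    for spectrum in exp:
        if spectrum.getMSLevel() == 1:       # sum intensities over MS1 spectra
            mz, intensities = spectrum.get_peaks()
            tic += float(intensities.sum())

    print(tic)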
B. Considerations regarding random access reads in large files
Some algorithms need random access reads into large raw data files that cannot easily be bundled or ordered by retention time. In these cases, random access to the data is necessary, which precludes the algorithm from using the in-memory implementation (due to system memory restrictions) and the event-driven implementation (since random access is necessary). Here, using the indexed data access API, which relies on the mzML idx standard, is the most straightforward way to implement such an algorithm. The mzML idx standard stores binary offsets to the individual data tags inside the mzML file, which allows a file seek to jump to the desired location and read the next XML tag (either a <spectrum> or <chromatogram> tag).
However, using the mzML idx standard has at least two main disadvantages. (i) The file needs to be in de-compressed form while reading since the indices relate to the decompressed locations and stream-based compression algorithms such as gzip do not allow random access. (ii) During each read the raw data has to be converted from a base64 string into a floating point number representation in memory which is generally the most time consuming step while reading. If many random access operations need to be performed, these two disadvantages necessitate initial de-compression of the file and then only allow relatively slow access to each spectrum. Therefore, we implemented the "cached" file format that allows fast caching of the raw data while retaining the meta data structure of mzML. The file format consists of two linked files, a cachedMzML which only contains the raw mass spectrometric data and an associated mzML file which does not contain any raw data (only the meta-data is retained in the XML data structure). By allowing for clear separation of raw data and meta-data, reading the meta-data into memory and performing search operations (for example collecting all spectra within a certain retention time window, collecting all spectra with their precursor masses in a certain range etc.) is extremely fast since the data structures are very small (generally a few MB) and no raw data needs to be loaded into memory for this operation. Once a suitable set of spectra (or a single spectrum) is found, its associated raw data can be loaded from the disk from the cachedMzML file for further processing. Loading the raw data of specific spectra from disk can be extremely fast as indicated by Figure 1 and 2 in the main text, which indicate that loading the cached raw structures can be more than 10 times faster than any other access mechanism. As we describe in the main text, we were able to process the raw data of all spectra of a 60 GB mzML file and compute the TIC on this data in less than 20 seconds using the cached access algorithms. All tests were performed on the same RHEL system also used in the main text. The "dev" versions indicate that we used OpenMS with the enhancements described in the main text.
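As an illustration of the random-access pattern discussed above, the sketch below uses the OnDiscMSExperiment class exposed by pyOpenMS, which reads individual spectra from an indexed mzML file on demand. This is our own simplified example, assuming the input file carries an mzML index; it is not the benchmark code used for the measurements.

    import pyopenms as oms

    od_exp = oms.OnDiscMSExperiment()
    if not od_exp.openFile("input.mzML"):    # requires an indexed mzML file
        raise RuntimeError("file is not an indexed mzML")

    tic = 0.0
    for i in range(od_exp.getNrSpectra()):   # spectra are fetched from disk one at a time
        spectrum = od_exp.getSpectrum(i)
        if spectrum.getMSLevel() == 1:
            mz, intensities = spectrum.get_peaks()
            tic += float(intensities.sum())

    print(tic)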
Note that the "OpenMS" software relates to the C++ implementation (shown as comparison) while pyOpenMS relates to the Python implementation. In order to assess the performance of our implementation, we compared it to the XML parsing implementation available in the ProteoWizard software, another major open-source data access library [2,3]. We used the ProteoWizard library revision 7261 to build a custom program that calculates the TIC and compared it to the performance measured using the OpenMS implementations. The results of the measurement are shown in the main text, in Figure 1 and in Table I.
Our results indicate that the single-threaded execution of ProteoWizard and OpenMS is on par in terms of processing speed (except for the "cached" implementation, which is an order of magnitude faster). However, when using multiple threads, OpenMS is 30% faster ("In Memory"), 60% faster ("event-driven"), or even faster by a factor of 4 ("indexed") or 50 ("cached").
We also compared our implementation to pymzML [4], which provides a feature-complete reader for the mzML format only. Interestingly, when run on the same machine as the other comparisons, we found pymzML to outperform the Python and C++ OpenMS 1.11 implementations, except for the cached implementation. The C++ value in this graph is equivalent to the single-threaded value for the "In Memory" algorithm in Figure 1 of the main text.
Antioxidant and hepatoprotective activity of Flemingia tuberosa Dalzell in CCl4-induced hepatotoxicity in rats
Flemingia tuberosa Dalzell is an endemic species of the Western Ghats. It is an ethnomedicinal plant used to cure dysentery and vaginal discharge. In this study, the hepatoprotective potency of the plant has been evaluated, as it contains flavonoids such as kaempferol. The present study was conducted to investigate the hepatoprotective activity of the ethanolic extract of aerial parts of F. tuberosa against carbon tetrachloride (CCl4)-induced toxicity in rats. The extract was standardized using a validated HPTLC method for kaempferol content. The antioxidant activity of the ethanolic extract of aerial parts of F. tuberosa was assessed by the 1,1-diphenyl-2-picrylhydrazyl method. The hepatoprotective activity was assessed against CCl4-induced liver intoxication, wherein the CCl4-induced albino Wistar rats were treated with three different doses of the standardized extract for 3 days. The results were compared with the established positive control, silymarin. The protective effect on the liver was also monitored histopathologically, using light and electron microscopy, and biochemically, by observing the levels of biochemical enzymes. The extract showed potent hepatoprotective activity. This study confirms the protective effect of the ethanolic extract of aerial parts of F. tuberosa against CCl4-induced hepatotoxicity.
INTRODUCTION
The liver, which is a vital organ of metabolism, plays a major role in the detoxification and elimination of toxicants (Saleem et al., 2018; Sultana et al., 2018). It is incessantly and extensively exposed to xenobiotics, hepatotoxins, and chemotherapeutic agents, which can lead to impairment of its functions (Kumar and Veere, 2011; Preussmann, 1978). Hepatic cells are mainly damaged by hepatotoxic chemicals inducing lipid peroxidation and other oxidative damage (Chattopadhyay, 2003; Kumar and Veere, 2011). The synthetic drugs used in curing liver damage are therapeutically non-promising and may themselves lead to hepatotoxicity (Joshi and Shailajan, 2016). Carbon tetrachloride (CCl4), one of the most widely used hepatotoxins, is generally employed to evaluate the efficacy of hepatoprotective drugs (Cheng et al., 2013; Recknagel et al., 1989; Zarezade et al., 2018; Zhao et al., 2018).
Flemingia macrophylla has been reported to possess a hepatoprotective effect due to the presence of flavonoids (Gahlot et al., 2011; Hsieh et al., 2011). A number of studies have shown that flavonoids are a class of compounds exhibiting hepatoprotective activity (Bratkov et al., 2016; Kondeva-Burdina et al., 2018; Krasteva et al., 2016; Mbemya et al., 2017; Pelissero et al., 1996; Pistelli, 2002). One of the most useful properties of this class of compounds is their ability to scavenge ROS, which is considered more efficient than that of Vitamins C and E (Gao et al., 2001; Mbemya et al., 2017; Wang and Zheng, 1992). Flavonoids act by shielding membranes or by reducing their permeability to hepatotoxic substances (Gyr and Meier, 1991). Kaempferol, a flavonoid, has been reported to show strong hepatoprotective activity in CCl4-treated mice (Wang et al., 2015). An earlier publication from our laboratory has already shown the presence of kaempferol in the ethanolic extract of Flemingia tuberosa. The hepatoprotective activity of the plant, however, has not been investigated scientifically.
Hence, this study aims to evaluate the hepatoprotective activity of the ethanolic extract of aerial parts of F. tuberosa against CCl 4 -induced hepatotoxicity in rats.
Collection of plant material
The aerial parts of F. tuberosa were collected from Rajapur, Maharashtra, and authenticated by Dr. M. M. Lekhak (Shivaji University, Kolhapur, Maharashtra)-AUTH 85/2016. The material was shade dried for a week and further oven-dried at 37°C ± 2°C, powdered in a mixer grinder, sieved (85-mm mesh -BSS sieve), and stored in airtight plastic containers.
Apparatus and chemicals
CCl 4 (GR grade, batch no.: IG8G580365, Merck Specialties Pvt. Ltd.) and Silybon tablets (Silymarin as silybon 70 mg, batch no. SIAD0025, Micro Labs Limited) were procured from the market. All other chemicals used were of analytical grade.
Preparation of plant extracts
The ethanolic extract of the aerial parts of F. tuberosa was prepared by adding 1000 mL of ethanol to 100 g of plant powder and agitating on a shaker for 4 hours (yield = 12.52%). The extract was standardized for the content of the hepatoprotective marker kaempferol using a validated HPTLC method (Wang et al., 2015). The kaempferol content was found to be 1.65%. The extract was further evaporated to dryness in a water bath preset at 78°C to remove all traces of ethanol. Animals were dosed by accurately weighing this ethanol-free dried extract according to the weight of the animal and suspending it in distilled water (total volume > 1 ml/100 g). A dose of 500 mg/kg body weight was used in the study (Hsieh et al., 2011).
Chromatographic characterization
The powdered plant material (1.0 g) was extracted with ethanol (10.0 ml), and vortex mixed and sonicated for 1 and 20 minutes, respectively, followed by filtration through Whatman filter paper No. 1. The separation and quantitation of kaempferol were established using HPTLC.
Kaempferol as a biomarker
The presence of kaempferol as a biomarker in the ethanolic extract of F. tuberosa has been confirmed using a published method on HPTLC (Shailajan and Joshi, 2011;Shailajan and Mascarenhas, 2018) (Fig. 1).
Animals
Albino Wistar rats (female, 180-220 g) were procured from Bharat Serums and Vaccines Pvt. Ltd., Thane. The animals were maintained under standard laboratory conditions at an ambient temperature of 25°C ± 2°C and relative humidity of 50%-55% with 12-hour light and dark cycle in an animal house with standard facilities as per the CPCSEA guidelines (CPCSEA/315). They were fed with a commercially available standard pellet diet (AMRUT feed), and the filtered drinking water was provided ad libitum. After 1 week of acclimatization, the animals were subjected to experimental procedures. The approval from the Institutional Animal Ethics Committee of Ramnarain Ruia Autonomous College, Matunga, for the usage of animals in the experiment had been obtained as per the CPCSEA guidelines (Approval number: RM-160110-02).
Safety evaluation
The safety of the plant has been established in albino Swiss mice following the OECD guidelines (No. 420, fixed dose procedure). The extract was evaporated to dryness to remove all traces of the ethanol solvent before use in the animals.
Evaluation of antioxidant activity
The antioxidant activity of the extracts, based on the scavenging activity of the stable 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical, was determined (Joshi and Shailajan, 2016; Narendhirakannan and Rajeswari, 2010), and the percentage inhibition (I) was calculated as follows: I (%) = [(A of control − A of sample) / A of control] × 100, where "A" is absorbance. EC50 is the concentration at which the DPPH radical is scavenged by 50%. Ascorbic acid was used as the reference antioxidant.
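A minimal Python sketch of this calculation is given below, using illustrative absorbance readings rather than the study's raw data; the half-maximal scavenging concentration is estimated here by simple linear interpolation of the inhibition-versus-concentration curve, which assumes a monotonic response.

    import numpy as np

    def percent_inhibition(a_control, a_sample):
        """DPPH scavenging: I (%) = (A_control - A_sample) / A_control * 100."""
        return (a_control - a_sample) / a_control * 100.0

    def estimate_ic50(concentrations_ug_ml, absorbances, a_control):
        """Estimate the concentration giving 50% inhibition by linear interpolation."""
        conc = np.asarray(concentrations_ug_ml, dtype=float)
        inhib = percent_inhibition(a_control, np.asarray(absorbances, dtype=float))
        order = np.argsort(conc)
        return float(np.interp(50.0, inhib[order], conc[order]))

    # Illustrative values only, not the measured data
    print(estimate_ic50([5, 10, 20, 40, 60], [0.62, 0.50, 0.36, 0.12, 0.01], a_control=0.70))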
Evaluation of hepatoprotective activity
For the study, albino Wistar rats were divided randomly into five groups with six animals in each (Table 1). The animals were kept fasting (water ad libitum) for 4 hours prior to dosing. Group I animals received an intraperitoneal injection of 0.7 ml/kg body weight liquid paraffin per animal on the 0th day of the study and were treated as normal controls. Animals from Groups II, III, IV, and V received an intraperitoneal injection of 0.7 ml/kg body weight of CCl4 (Frank et al., 2012; Joshi and Shailajan, 2016) in 0.5 ml liquid paraffin per animal on the 0th day of the study. The animals from Groups I, II, and III were dosed orally with 2 ml of distilled water (D/W) once daily. A dose of 70 mg/kg silymarin (Shailajan, 2016, 1995) in the form of Silybon tablets suspended in 2 ml of D/W was administered orally to each animal of Group IV once daily, starting 24 hours post CCl4 induction. The animals from Group V were given an oral dose of the ethanolic extract of aerial parts of F. tuberosa suspended in distilled water (500 mg/kg body weight, starting 24 hours post induction on the first day and continuing for the next 4 days). The animals from Groups I, II, IV, and V were sacrificed on the fourth day (72 hours after first dosing), whereas Group III animals were sacrificed on the seventh day of the study to evaluate natural recovery. Daily records of body weight and of food and water consumption were maintained for each group.
Before sacrificing the animals, blood was collected from the retro-orbital plexus into heparinized vials, and the plasma was separated and analyzed for biochemical parameters, such as glutamate oxaloacetate transaminase (GOT), glutamate pyruvate transaminase (GPT), alkaline phosphatase (ALP), direct bilirubin (DB) and total bilirubin (TB), cholesterol (CHO), triglycerides (TG), low-density lipoprotein (LDL), and high-density lipoprotein (HDL) content. The percentage protection for biochemical parameters was calculated as follows: percentage protection = [(A of control − A of treated) / A of control] × 100 (Joshi and Shailajan, 2016; Rao et al., 2012).
Fresh liver tissue samples were also processed for the analysis of liver glycogen.
Statistical analysis
The results were expressed as mean ± SE. Statistical analysis was carried out using Microsoft Excel and GraphPad Prism 5.0 software, applying one-way analysis of variance followed by Dunnett's t-test. p values < 0.001 were considered significant.
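For reference, a broadly equivalent analysis can be run with open-source tools; the sketch below uses SciPy (assuming version 1.11 or later, which provides scipy.stats.dunnett) on placeholder arrays rather than the study's measurements.

    import numpy as np
    from scipy import stats

    # Hypothetical placeholder values for one marker enzyme (six animals per group)
    normal_control = np.array([92, 88, 95, 90, 91, 89])
    induction_control = np.array([210, 230, 205, 225, 218, 221])
    plant_treated = np.array([120, 132, 118, 125, 129, 122])
    modern_drug = np.array([105, 110, 100, 108, 103, 107])

    # One-way ANOVA across all groups
    f_stat, p_anova = stats.f_oneway(normal_control, induction_control, plant_treated, modern_drug)

    # Dunnett's test comparing each group against the induction control
    dunnett_res = stats.dunnett(normal_control, plant_treated, modern_drug, control=induction_control)
    print(f_stat, p_anova, dunnett_res.pvalue)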
RESULTS AND DISCUSSION
A large number of plants and plant-based formulations have been claimed to cure liver disorders. These plants are considered to have hepatoprotective activity due to the presence of phytoconstituents with potential antioxidant activity. The standardized ethanolic extract of F. tuberosa was found to be rich in kaempferol, which has been reported to possess hepatoprotective as well as antioxidant activity (Chen et al., 2018; Wang et al., 2018; Yin et al., 2018). The animals were administered the dried extract using distilled water as the vehicle. The toxicity study showed that the extract is safe up to a dose of 2,000 mg/kg body weight in albino Swiss mice, with no mortality or abnormality.
The antioxidant activity of the extract was assessed in the DPPH model. To quantify the antioxidant activity, the IC50 value, i.e., the concentration of sample required to decrease the absorbance of the specific free radical at its λmax by 50%, was calculated (Acharya et al., 2012; Joshi and Shailajan, 2016). The extract showed an IC50 value of 20.95 μg/ml for DPPH. The results of the assay revealed that the free radicals were scavenged by the plant extract in a concentration-dependent manner; the results are presented in Figure 2 and Table 2.
The extract of F. tuberosa exhibited free radical scavenging activity of 98.61% at a concentration of 60.0 μg/ml. The antioxidant activity displayed by the plant extract may be due to the presence of kaempferol. The extract of F. tuberosa showed a potential antioxidant property as observed from DPPH radical scavenging activity assay (Table 1). The antioxidant potential of the extract prevents the formation of trichloromethylperoxy radical, thereby reducing tissue damage, triggered by CCl 4 treatment.
To obtain a clear picture of the functional state of the liver, alkaline phosphatase, TB, and DB were monitored (Joshi and Shailajan, 2016; Rao et al., 2012). Hepatic toxicity caused by the administration of CCl4 (0.7 ml/kg body weight) produced acute liver injury, indicated by a sharp increase in the concentrations of biochemical markers such as serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), direct and total bilirubin (DB/TB), ALP, CHO, TG, LDL, and liver glycogen, along with a decrease in the concentration of HDL, in the induction control (IC) group in comparison with the normal control (NC) group. The levels of these biochemical parameters returned significantly toward normal after treatment with the standardized ethanolic extract of the aerial parts of F. tuberosa (PT) and also in the animals treated with the modern drug (MD). The results are shown in Table 3, where treatment with the extract at 500 mg/kg body weight showed a significant recovery (p < 0.001 with Dunnett's test). These results demonstrate the ability of the extract to reverse the damage caused by CCl4 and significantly enhance the recovery process, as indicated by the levels of biochemical parameters in the plant-treated (PT) and modern drug (MD) groups compared with those in the natural recovery group (NR).
The increase in the levels of alkaline phosphatase reflected the loss of functional integrity of the cell membrane and cellular leakage, whereas the increase in both DB and TB indicated the incidence of jaundice (Freitag et al., 2015; Joshi and Shailajan, 2016). These observations are supported by the electron microscopy results. The normalization of the biochemical marker enzymes (Table 3) after the plant treatment confirms its hepatoprotective potential. This may be attributed to its ability to expedite the regeneration of liver cells and maintain membrane integrity, whereby the leakage of marker enzymes into the bloodstream is reduced. The study also statistically affirms that the protection offered by the extract is comparable to that of the modern drug used as a positive control, i.e., silymarin. The recovery toward normalcy of biochemical markers and of tissue histoarchitecture after treatment with the plant extract was, however, less significant (p < 0.01) than that achieved with silymarin treatment (p < 0.001).
The hepatoprotective effect of the standardized extract of the aerial parts of F. tuberosa was further evaluated by histopathological observations of the excised liver tissue using light and electron microscopy. The administration of carbon tetrachloride caused disruption of the rough endoplasmic reticulum, damage to mitochondria, centrilobular necrosis, fatty changes, and vacuolization, indicating impairment of the normal liver cytoarchitecture. The natural recovery group did not show significant improvement when compared to the induction control group. The histoarchitecture of the liver in treated animals of the plant extract group and the modern drug group showed a significant recovery toward normalcy in comparison to the induction control group (Figs. 3 and 4). The plant-treated group showed recovery of the cellular matrix at par with the positive control group. Treatment with the plant extract, however, is not as effective as that with the modern drug, as is evident in the microscopic observations (Fig. 3).
The results are confirmed in electron microscopic observations too. Lipid accumulation, ballooned mitochondria, reduction in rER activity, loss of microtubules, and membrane integrity are observed in hepatocytes after CCl 4 treatment (Fig. 4). These cytoarchitectural changes are reversed to normalcy after treatment with plant extract and modern drug (Fig. 4). The recovery after treatment with plant extract is less as compared to that observed in the hepatocytes after treatment with the modern drug. An increase in treatment period or increase in the dose of plant extract could be a possible alternative strategy.
The modern drug showed greater protection than the plant extract in terms of the physical, biochemical, and histological parameters of the hepatic tissue. A similar trend was observed in the results of the biochemical assays.
CONCLUSION
The antioxidant activity of the plant extract, in terms of free radical scavenging activity, has been established in vitro. The results of this study indicate that the standardized ethanolic extract of the aerial parts of F. tuberosa showed significant hepatoprotective activity at the dose of 500 mg/kg body weight. The plant extract brought the levels of biochemical markers back to normalcy and showed significant (p < 0.01) recovery in the histopathological observations. The recovery was, however, less marked than that with the modern drug, silymarin (p < 0.001). The hepatoprotective activity of the plant may have to be enhanced, either by increasing the dosage or by extending the duration of the treatment, in order to use it as an effective liver tonic. This study also provides scientific data for the ethnomedicinal use of the plant and can create awareness about cultivating this plant to help its proliferation and avoid its eradication. The plant is important as it is an endemic species of the Western Ghats of Maharashtra, and this work is a preliminary step in conserving a medicinally important species, F. tuberosa Dalzell.
Wear-resistance of nanostructured coatings based on diamond-like carbon and compounds of titanium with carbon
Multilayer coatings [(TiCx/Ti/a-C)+ta-C]n with different compositions of the composite (TiCx/Ti/a-C) layers are studied. The dependences of the abrasive wear resistance and the H3/E2 ratio (H – hardness, E – elastic modulus) of the multilayer coatings on the carbon content in the composite layer are determined. The phase composition of the (TiCx/Ti/a-C) layer, the ratio of the volume fractions of the phases and the volume fraction of the interface component, the interlayer adhesion, and the adhesion of the multilayer coating to the substrate have a synergistic effect on the wear resistance of the coatings studied.
Introduction
In recent decades, the quality of wear-resistant coatings has been improved significantly through multicomponent nanocomposite structures. The amorphous matrix, grain size, and dopants affect their physical, mechanical, and tribological properties. For example, the introduction of Al, Ti, or Cr improves the strength properties of diamond-like carbon coatings [1]. Creating gradient structures without sharp boundaries, and multilayer structures with a large number of interfaces to prevent crack propagation, is a way of optimizing the mechanical properties of coatings [2][3]. Multilayer structures can be created by uniformly embedding layers with different components between amorphous diamond-like carbon layers [2]. The mechanical and tribological properties of multilayer coatings should depend on the composition and structure of the embedded layers. The development of protective coatings, including multilayer coatings, depends largely on the ability to predict their properties using experimentally measured characteristics. For a long time, hardness has been considered the most important property of wear-resistant materials. Nevertheless, parameters combining hardness (H) and elastic modulus (E), such as H/E and H3/E2, are no less important characteristics of a coating's wear resistance [4][5]. The critical load for the onset of plastic yielding scales with the H3/E2 ratio. For a given load, elastic contact is favored as the H3/E2 ratio of the coating increases.
In the present work, multilayer coatings [(TiCx/Ti/a-C) + ta-C]n were fabricated and investigated, in which the composite layers (TiCx/Ti/a-C), consisting of a titanium (Ti) or carbon (a-C) matrix with titanium carbide TiCx grains, are embedded between amorphous diamond-like carbon (ta-C) layers.
Materials and methods
Multilayer coatings were deposited on polished tool steel and stainless steel substrates by the PVD technique using a UVNIPA-1-001 machine. The substrates were cleaned with distilled water and ethanol in an ultrasonic bath and by ion etching in the vacuum chamber. The ta-C layers were deposited by arc pulse sputtering of a graphite target, while the composite layers (TiCx/Ti/a-C) were deposited by simultaneous sputtering of titanium and graphite targets [6]. The titanium target was sputtered using an arc technique at a direct current of the arc source, whereas the graphite target was sputtered using a pulsed arc technique at various arc pulse frequencies (f = 1, 3, 5, 10, 15, 20 and 25 Hz). The thickness of the multilayer coatings and the composition of the composite layers were determined using a QUANTA 200 scanning electron microscope with an EDAX analyser. The hardness and elastic modulus of the multilayer coatings were evaluated using a nanoindentation method (NanoTest 600, Micro Materials Ltd, UK) with a Berkovich diamond indenter. The loading was carried out in the automatic control mode of indenter penetration depth at loads from 1 to 256 mN. Five nanoindentations were made at each load. The scratch tests were performed according to the following scheme. A diamond ball of 50 μm in diameter slid over the coating surface at a rate of 1 μm/s. The ball passed a distance of 20 µm without load (stage I, figure 1), then a distance of 100 µm with a constant loading rate of 5 mN/s up to a load P = 500 mN (stage II, figure 1), and a distance of 80 µm at P = 500 mN (stage III, figure 1). Figure 2 shows the SEM images of scratches for various multilayer coatings. The abrasive wear resistance of the multilayer coatings was tested in a jet of corundum particles of 120-150 μm in size at a speed of 20 m/s and an attack angle of 90°. The relative wear resistance was determined from the following relation: (I/I0)^(-1) = m/mc + 1, where I and I0 are the wear rates of the multilayer coating and the substrate material, respectively, m is the difference between the wear of the test substrate and the wear of the substrate with a completely worn multilayer coating at the same dose of abrasive particles, and mc is the weight of the worn multilayer coating.
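As a small numerical illustration of the two figures of merit defined above, the following Python sketch (our own, with made-up values) computes H3/E2 from nanoindentation results and the relative wear resistance from the abrasive test relation.

    def h3_e2(hardness_gpa, modulus_gpa):
        """Resistance-to-plastic-deformation parameter H^3/E^2, in GPa."""
        return hardness_gpa ** 3 / modulus_gpa ** 2

    def relative_wear_resistance(m, m_c):
        """(I/I0)^(-1) = m/mc + 1, with m and mc in the same mass units.

        m   - difference between the wear of the bare substrate and that of the coated
              substrate with a completely worn coating, at the same abrasive dose
        m_c - weight of the worn multilayer coating
        """
        return m / m_c + 1.0

    # Illustrative values only
    print(h3_e2(30.0, 300.0))                        # 0.3 GPa
    print(relative_wear_resistance(2.4e-3, 0.8e-3))  # 4.0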
Results and discussion
The deposition conditions and characteristics of the multilayer coatings are presented in Table 1. Figure 3 shows the dependences of H3/E2 and (I/I0)^(-1) on the carbon content CC. Increasing the carbon content in the composite layer to ~25 wt. % leads to an increase of the H3/E2 ratio. A further increase of CC is accompanied by a decrease and then by an increase of the H3/E2 ratio. In the ranges of carbon content 6 < CC < 15 wt. % and at CC > 30 wt. %, a direct relationship between the relative wear resistance and the H3/E2 ratio is observed (figure 3). After the scratch test, multilayer coatings 1 and 6 are partially peeled off at the edges of the scratch. Numerous cracks are observed along the scratch of coating 6. The scratches of coatings 3 and 7 show no visible peeling; only small crosscut cracks formed inside the scratch of coating 7. In the range of CC from 18 to 25 wt. %, a discrepancy between (I/I0)^(-1) and H3/E2 is observed. In this range, the embedded composite layers have the maximum volume fraction of TiCx grains and the minimum volume fraction of amorphous matrix [6][7]. Moreover, in this range the phase composition of the composite layer changes: the composite layer consists of titanium and titanium carbide grains (TiCx/Ti) at CC < 20 wt. %, and of an amorphous carbon matrix with grains of titanium carbide (TiCx/a-C) at CC > 20 wt. %. The structural features of the composite layer can affect both the wear of the embedded layer and the interlayer adhesion. After the scratch test, coating 4 (CC = 18 wt. %) has a non-uniform surface with partial delamination of separate layers (figure 2). The theoretically calculated volume fraction of the interface component in the (TiCx/Ti/a-C) composite coatings has a minimum at CC = 20 wt. % and two maxima near the carbon contents at which the multilayer coatings demonstrate the greatest relative wear resistance.
Conclusion
The phase composition of the (TiCx/Ti/a-C) layer, the volume fraction of titanium carbide grains, the volume fraction of the interface component, the interlayer adhesion, and the adhesion of the multilayer coating to the substrate have a synergistic effect on the wear resistance of nanostructured coatings formed by layer-by-layer deposition of the ta-C and (TiCx/Ti/a-C) layers. The results have shown that two coatings demonstrated the highest wear resistance. One is formed by layers of ta-C and (TiCx/Ti) with a carbon content of ~15 wt. %; the other is formed by layers of ta-C and (TiCx/a-C) with a carbon content of ~50 wt. %.
Insights into the Molecular Pathogenesis of Activated B-Cell-like Diffuse Large B-Cell Lymphoma and Its Therapeutic Implications
Within the last couple of years, the understanding of the molecular mechanisms that drive the pathogenesis of diffuse large B-cell lymphoma (DLBCL) has significantly improved. Large-scale gene expression profiling studies have led to the discovery of several molecularly defined subtypes that are characterized by specific oncogene addictions and significant differences in their outcome. Next generation sequencing efforts combined with RNA interference screens frequently identify crucial oncogenes that lead to constitutive activation of various signaling pathways that drive lymphomagenesis. This review summarizes our current understanding of the molecular pathogenesis of the activated B-cell-like (ABC) DLBCL subtype that is characterized by poor prognosis. A special emphasis is put on findings that might impact therapeutic strategies of affected patients.
Introduction
Diffuse large B-cell lymphoma (DLBCL) is the most common malignant lymphoma subtype in adults, accounting for roughly 40% of all cases [1]. DLBCL is characterized by remarkable heterogeneity, affecting several aspects of the disease. Morphologically diverse variants can be distinguished. The distinction of these variants potentially mirrors differences in biology and might also be clinically relevant, as for example the immunoblastic variant is associated with adverse outcome [2]. Besides these pathological differences, patients are also characterized by variable clinical presentation. Finally, there is heterogeneity with respect to the molecular events that drive DLBCL lymphomagenesis. This heterogeneity can at least partially be explained by the existence of molecularly defined subtypes identified in large gene expression profiling studies in recent years [3][4][5][6][7][8]. By applying the cell of origin classification, in which the DLBCL tumor profiles are compared to profiles of normal B-cells, several molecular subtypes can be distinguished. The germinal center B-cell-like (GCB) DLBCLs are derived from germinal center B-cells and, accordingly, they express a variety of genes that are expressed in normal germinal center B-cells. In contrast, activated B-cell-like (ABC) DLBCLs seem to originate from activated B-cells that are in the transition of being differentiated into plasma cells [3,6,9]. Primary mediastinal B-cell lymphomas (PMBLs) seem to originate from a B-cell subpopulation in the thymus and are characterized by the virtue of a specific gene expression profile [4,5,10]. Finally, approximately 15% of DLBCLs cannot be assigned to a molecular subtype and are referred to as unclassifiable DLBCL [11]. Especially, the distinction of ABC and GCB DLBCLs is not only relevant from a scientific standpoint, but additionally also has significant clinical implications, as these subtypes are characterized by differences in overall survival when treated with the current standard treatment of rituximab and CHOP chemotherapy (R-CHOP) [11]. The vast majority of patients diagnosed with GCB DLBCL respond favorably to R-CHOP, whereas R-CHOP is less effective in ABC DLBCL patients [11]. These data indicate that a better understanding of the biology of the subtypes is warranted to exploit the molecular vulnerabilities of these lymphomas. Due to the less favorable outcome of ABC DLBCL patients, this review focuses on recent insights into the molecular pathogenesis of ABC DLBCL.
Insights into the Biology of ABC DLBCL
Various studies of hundreds of tumor specimens derived from ABC DLBCL patients suggested that ABC DLBCL lymphoma cells are in the process of being differentiated into plasma cells [3,6,9]. Inactivation of BLIMP1, the master regulator of plasma cell differentiation, seems to be one of the driving events of this lymphoma subtype [12,13]. BLIMP1 mediates plasma cell differentiation by terminating the expression of a variety of B-cell differentiation genes such as Spi-B, PAX5, and Oct-2 [14]. Different molecular mechanisms have been identified that inactivate BLIMP1 [15]. Roughly 25% of ABC DLBCL patient samples are characterized by inactivating mutations of PRDM1 that encodes BLIMP1 [12,13]. PRDM1/BLIMP1 is furthermore deregulated by chromosomal translocations affecting BCL6 that are found in approximately 25% of patient samples. BCL6 functions as a repressor of PRDM1 [16]. Finally more than 25% of ABC DLBCL samples show gains/amplifications and overexpression of the transcription factor Spi-B that can also repress BLIMP1 expression [17,18]. The importance of BLIMP1 for the pathogenesis of ABC DLBCL was further underscored by different mouse models that showed that conditional knockout of BLIMP1 in mouse B-cells promotes the development of lymphoproliferative disorders that have features of human ABC DLBCL [15,19]. Collectively, these data implicate that a block in differentiation is an important event in the molecular pathogenesis of ABC DLBCL.
Another characteristic hallmark of ABC DLBCL biology is the constitutive activation of the oncogenic nuclear factor-kappa B (NF-κB) signaling pathway (Figure 1) [20]. The NF-κB transcription factor family consists of five different members termed RelA (p65), RelB, c-Rel and p50/p52 with its precursors p105/p100 [21][22][23]. In resting cells, the different NF-κB subunits are normally associated with inhibitory proteins of the IκB family (IκBα, IκBβ, IκBε) or are kept inactive by the precursors p100 and p105 [21]. Upon stimulation, the NF-κB members are released from their inhibitors and subsequently translocate to the nucleus where they activate their target genes [24,25]. Recent work from various scientific groups unraveled that the NF-κB pathway is frequently activated in ABC DLBCL by either gain-or loss-of-function mutations affecting upstream members of the NF-κB signaling cascade [26][27][28][29][30]. Mutations in the ITAM (immunoreceptor tyrosine-based activation motif) motif of the signaling molecule, CD79B, that are detectable in approximately 20% of ABC DLBCL cases, or less frequently, deletions affecting the ITAM motifs of CD79A are associated with chronic active B-cell receptor (BCR) signaling [27]. Chronic active BCR signaling should be differentiated from tonic BCR signaling. Antigen stimulation leads to chronic active BCR signaling that is characterized by immobile clusters of the BCR on the cell surface and the activation of downstream signaling pathways such as NF-κB. In contrast, tonic BCR signaling seems to be antigen independent and required for the survival of mature B-cells and leads to activation of the phosphatidylinositol 3-kinase (PI3K) pathway [27,[31][32][33]. Downstream mutations affecting the scaffolding protein CARD11 occur in roughly 10% of ABC DLBCLs and are associated with constitutive assembly of the CARD11-BCL10-MALT1 (CBM) signaling complex, leading to constitutive canonical NF-κB activation [26]. An alternative mechanism to activate the NF-κB signaling cascade is gain-of-function MYD88 mutations that are detectable in almost 40% of primary ABC DLBCL samples [30]. At last, different groups reported inactivation of the ubiquitin-modifying protein A20 that acts as a negative regulator of NF-κB signaling [28,29].
The addiction to constitutive NF-κB signaling is not only important for the understanding of the molecular pathogenesis of ABC DLBCL, but also might have significant clinical implications (Figure 1). Inhibition of NF-κB using a dominant active form of IκBα or a dominant negative form of IKKβ is toxic to ABC DLBCL cell line models [20]. Along the same lines, a small molecule inhibitor of IKKβ is toxic to ABC DLBCL cell lines, confirming that ABC DLBCLs are dependent on oncogenic NF-κB signaling [34]. However, inhibition of downstream NF-κB might be too toxic, and thus other proteins that act upstream of the NF-κB family members might be better suited as molecular targets.
A large-scale RNA interference screen indicated the crucial role of CARD11 and MALT1 for survival of ABC DLBCL, as downregulation of these molecules induced toxicity in a variety of ABC DLBCL cell line models [35]. However, it is very challenging to target the scaffolding protein CARD11 pharmacologically. Thus, it is currently unclear if the addiction to CARD11 signaling can be utilized therapeutically. In contrast, it has been shown that the paracaspase MALT1, which also acts as an Arg-specific protease, might potentially represent a promising target for the treatment of ABC DLBCL patients [35][36][37][38][39][40]. MALT1 promotes downstream NF-κB activation by both its scaffolding function and its proteolytic function, by which negative NF-κB regulators are cleaved and inactivated [41]. Specifically, MALT1 cleaves, among others, A20, CYLD, as well as the NF-κB subunit RelB, which can act as an inhibitor of NF-κB signaling [42][43][44]. Especially the introduction of novel small molecule inhibitors of MALT1 yielded promising in vitro and in vivo results in ABC DLBCL models [38,39]. Blocking the MALT1 protease function resulted in inhibition of NF-κB signaling and moreover induced toxicity in ABC DLBCL cell lines and xenograft mouse models [38,39]. Thus, testing the efficacy of these inhibitors in affected patients in early clinical trials might be warranted. The BCR and its downstream signaling cascade seem to be an attractive molecular target for ABC DLBCL patients. BCR signaling leads not only to activation of NF-κB signaling, but potentially also to activation of other oncogenic pathways such as the PI3K/mTOR pathway, the mitogen-activated protein kinase (MAPK) pathway, as well as the nuclear factor of activated T-cells (NFAT) pathway (Figure 1). Interestingly, previous work showed that the PI3K and NF-κB signaling pathways functionally interact, as pharmacologic inhibition of PI3K significantly reduced NF-κB activity in ABC DLBCL cell lines [45]. A variety of different specific inhibitors can be utilized to inhibit these different signaling cascades (Figure 1). Recent preclinical and clinical data suggested that inhibition of chronic BCR signaling might be a promising therapeutic strategy.
The protein kinase C (PKC) inhibitor sotrastaurin induced cell cycle arrest and/or cell death in ABC DLBCL cell lines and in a mouse xenograft model [46]. Similarly, the specific Bruton's tyrosine kinase (BTK) inhibitor ibrutinib was toxic to preclinical models of ABC DLBCL. In contrast, GCB DLBCLs were not affected by treatment with ibrutinib [27]. These data were confirmed clinically in a recent multicenter phase-II study investigating the efficacy of ibrutinib as a single agent. More than 40% of relapsed and refractory ABC DLBCL patients responded to this therapy, whereas only 5% of GCB DLBCL patients showed an objective response to ibrutinib [47]. However, despite these encouraging results, ibrutinib does not seem to be curative in general in these patients, as relapses occur frequently. Thus, combinations of different pathway inhibitors might potentially overcome these relapses. A recent high-throughput screen identified various compounds, such as PI3K or BCL2 inhibitors, that can be combined favorably with ibrutinib [48].
Mutations affecting MYD88 that encodes an adaptor protein have been shown to activate downstream NF-κB signaling [30]. Consequently, ABC DLBCL models depend in their survival on the expression of MYD88. Roughly 30% of all ABC DLBCLs harbor the MYD88 L265P mutation in the TIR domain. The MYD88 L265P mutation leads to spontaneous assembling of a protein-signaling complex consisting of MYD88, IRAK1, and IRAK4, leading to phosphorylation of IRAK1, NF-κB activation, STAT3 activation, and secretion of Il-6, IL-10, and interferon-β [30]. Intriguingly, a small-molecular inhibitor of IRAK1 and IRAK4 kinase activity was toxic specifically to ABC DLBCL models [30].
While the genetic abnormalities causing aberrant NF-κB signaling in ABC DLBCL are relatively well studied, the knowledge on the contribution of specific NF-κB family members to the control of the NF-κB gene expression network is limited. Recently, it has been shown that expression of the atypical nuclear IκB protein IκB-ζ (MAIL) is essential for survival of ABC DLBCL models as well as for nuclear NF-κB activity [49]. The expression of IκB-ζ itself is induced by NF-κB. Gene expression profiling experiments showed that IκB-ζ controls a large number of known NF-κB target genes most likely by interacting with the NF-κB subunits p50 and p52, respectively [49]. Intriguingly, IκB-ζ was not expressed in other NF-κB dependent hematologic malignancies such as Hodgkin's lymphoma or multiple myeloma, indicating that IκB-ζ plays a selective role in mediating and controlling NF-κB activity in ABC DLBCL [49].
A subset of primary ABC DLBCL patient samples is characterized by high STAT3 expression [50,51]. This STAT3-high ABC DLBCL subtype had higher NF-κB activity as well as higher proliferation and glycolysis [50]. Interestingly, pharmacologic or RNA interference mediated STAT3 inhibition induced toxicity in ABC DLBCL cell lines and xenograft mouse models [50][51][52], suggesting that inhibition of STAT3 might be a promising therapeutic approach in ABC DLBCL.
Another characteristic event in the biology of ABC DLBCLs, which is potentially associated with therapy resistance, is the deregulation of different anti-apoptotic members of the BCL2 family. BCL2 itself is frequently amplified and overexpressed in ABC DLBCL models and primary patient samples [17,53]. BCL2 overexpression can potentially be utilized therapeutically by using specific BCL2 inhibitors. ABT-199 showed promising activity in different preclinical models. Interestingly, high BCL2 expression predicted sensitivity to ABT-199, suggesting that this compound should be tested in clinical trials in the setting of ABC DLBCL [54]. Another member of the BCL2 family, MCL1, is also frequently deregulated in ABC DLBCL. Roughly 25% of primary ABC DLBCLs harbor MCL1 locus gains or amplifications. These gains/amplifications are associated with MCL1 overexpression [55]. Alternatively, MCL1 can be upregulated by constitutive STAT3 signaling in ABC DLBCL [55]. Functional data indicated that MCL1 expression is associated with resistance to conventional chemotherapeutic agents such as doxorubicin, etoposide or vincristine that are commonly used in DLBCL therapy. MCL1 knockdown using RNA interference was toxic to ABC DLBCL cell lines. Chemotherapy resistance in ABC DLBCL might potentially be overcome by pharmacologic inhibition of MCL1, as treatment with the BH3-mimetic obatoclax induced apoptosis in MCL1-positive ABC DLBCL models [55]. In summary, these data suggest that targeting anti-apoptotic BCL2 family members might represent a promising strategy in subsets of ABC DLBCL patients.
Collectively, ABC DLBCLs are characterized by a variety of different genetic aberrations. It is conceivable that these alterations contribute to the poor outcome of affected patients. Thus, targeting these abnormalities using specific small molecule compounds might improve prognosis of ABC DLBCL patients.
Potential Targets for the Treatment of GCB DLBCL Patients
GCB DLBCLs are derived from germinal center B-cells and they accordingly express germinal center B-cell genes such as BCL6 or LMO2 [3,9]. BCL6-positive DLBCLs are effectively killed by a small-molecule inhibitor of BCL6, suggesting that BCL6 inhibition might be successfully utilized in BCL6-positive DLBCLs [56]. GCB DLBCLs are additionally frequently characterized by addiction to the PI3K/AKT pathway that is caused by either loss of the tumor suppressor PTEN or by BCR signaling [57,58]. Thus, PI3K inhibitors might be efficacious in GCB DLBCL patients. Other novel therapeutic strategies might be inhibition of BCL2, as roughly 45% of GCB DLBCLs harbor BCL2 translocations, as well as inhibition of EZH2 [9,59]. Gain-of-function EZH2 mutations are detectable in roughly 22% of GCB DLBCL patient samples and specific EZH2 inhibitors were shown to be active in preclinical models of DLBCL [59][60][61].
Conclusions
The understanding of the driving molecular events of ABC DLBCL lymphomagenesis has significantly improved in the last couple of years. Various novel mutations have been identified that lead to constitutive activation of different oncogenic signaling pathways. Numerous clinical trials are currently being performed that investigate the efficacy of novel specific compounds. However, to understand responses observed in these studies, it is mandatory to incorporate scientific analyses including gene expression profiling. In the past, especially the assignment of patient samples into cell of origin subgroups was difficult due to the necessity of fresh frozen tissue. However, recently significant progress has been made to use formalin fixed paraffin embedded material to correctly diagnose the molecular subtype. Due to the introduction of novel techniques such as the NanoString technology, the classification into ABC and GCB DLBCL most likely will become standard clinical care [62,63].
Various studies have shown that cell of origin classification independent features of DLBCL are of scientific and clinical importance. Monti and colleagues identified three DLBCL subgroups termed "oxidative phosphorylation", "B-cell receptor/proliferation", and "host response" that are characterized by the virtue of specific gene expression profiles [8]. Furthermore, it has also recently been shown that copy number alterations that alter the p53 pathway are associated with inferior survival [64]. It will be challenging to integrate these different approaches. However, the combination of these datasets might eventually lead to the identification of clearly defined subgroups of patients that benefit from targeted agents leading to more specific and less toxic treatment regimens.
|
2015-09-18T23:22:04.000Z
|
2015-05-22T00:00:00.000
|
{
"year": 2015,
"sha1": "09484e43990ffcaa5ee8f18cd38369fc65a6202c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cancers7020812",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09484e43990ffcaa5ee8f18cd38369fc65a6202c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
205605730
|
pes2o/s2orc
|
v3-fos-license
|
A novel fast vector method for genetic sequence comparison
With the sharp increase in biological sequence data, the traditional sequence alignment methods become unsuitable and infeasible. This motivates a surge of fast alignment-free techniques for sequence analysis. Among these methods, many sorts of feature vector methods have been established and applied to the reconstruction of species phylogenies. The vectors basically consist of some typical numerical features for certain biological problems. The features may come from the primary sequences, or the secondary or three-dimensional structures of macromolecules. In this study, we propose a novel numerical vector based only on the primary sequences of organisms to build their phylogeny. Three chemical and physical properties of primary sequences (purine, pyrimidine and keto) are also incorporated into the vector. Using each property, we convert the nucleotide sequence into a new sequence consisting of only two kinds of letters. Therefore, three sequences are constructed according to the three properties. For each letter of each sequence we calculate the number of the letter, the average position of the letter and the variation of the position of the letter appearing in the sequence. Tested on several datasets related to mammals, viruses and bacteria, this new tool is fast in speed and accurate for inferring the phylogeny of organisms.
In the past few decades, a basic but crucial task in biology has been comparing genes and proteins to predict the structure and function of these biological sequences. Many methods have been proposed to compare genetic sequences. Traditionally, most of these approaches are the widely used alignment-based methods. In these methods, molecular sequences are optimally aligned based on selected scoring systems. The alignment-based methods often give high accuracy and may reveal the relationships among sequences. Some algorithms have been established and incorporated into software for sequence alignment 1,2 . However, one of the main drawbacks of these techniques is that they are very time-consuming and expensive in memory usage. As a result, alignment-free approaches have attracted more and more attention and have been applied to biological sequence comparison as well as phylogeny analysis recently [3][4][5][6][7] . The noticeable characteristic of these methods is to incorporate some ad hoc numerical features of sequences. These methods are fast in computational speed when comparing genes and proteins. Generally, alignment-free methods are categorized into four groups: methods based on k-mer or word frequency, methods based on substrings, methods based on information theory, and methods based on graphical representation. Firstly, methods based on k-mer or word frequency are very popular and studied extensively. The classic k-mer method was proposed to compare genetic sequences by counting the frequencies of substrings with k letters appearing in the respective sequences 8 . Afterwards, a lot of k-mer based methods have been developed and applied in sequence analysis and phylogeny for viruses and bacteria. Examples include the feature frequency profile (FFP) 9 , which counts the number of substrings with fixed length k occurring in a genome and forms a vector of these counts for each genome; the return time distribution (RTD) 10 ; the frequency chaos game representation (FCGR) 11 ; and an improved complete composition vector method (ICCV) 12 , which was proposed by optimizing the composition vector (CV) 13 and the complete composition vector (CCV) 14 . ICCV assumes a uniform and independent model to estimate sequence information. Compared with the CCV and CV methods, the ICCV method is more robust and efficient in performing sequence comparison. Secondly, methods based on substrings employ the similarity of substrings in a pair of sequences. These algorithms are mostly used for string processing in computer science, such as the average common substring (ACS) 15 and the k-mismatch average common substring approach (kmacs) 16 . Thirdly, methods based on information theory include global and local characterization of sequences and estimate the entropy of genomic regions. Among them, base-base correlation (BBC) 17 and information correlation and partial information correlation (IC-PIC) 18 are representative. Finally, methods based on graphical representation also play important roles in sequence comparison in the alignment-free area [19][20][21] .
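As a point of reference for the word-frequency family described above, the following minimal Python sketch shows how an FFP-style feature vector can be computed from a single sequence; it is an illustration of the general idea, not the implementation used by any of the cited tools, and the function name and example sequence are ours.

```python
from collections import Counter

def kmer_profile(seq, k):
    """Count all overlapping k-mers in a DNA sequence and normalize the
    counts to relative frequencies (an FFP-style feature vector)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

print(kmer_profile("ACGTACGTTGCA", k=2))
```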
Although k-mer models are widely used in many biological studies, the position information and the properties of nucleotides are ignored in these methods. The second kind of methods is computationally expensive among alignment-free methods. For the third kind of algorithms, most of them are alignment based. The graph-based methods are often used to display some characteristics of sequences directly by curves. Therefore, it is better to use this information for sequence comparison. In this paper, we develop an 18-dimensional feature vector to characterize a DNA sequence. The vector includes the occurrence frequency of each of the four bases, the average position of nucleotides and the biochemical properties of nucleotides. To validate the advantages of our approach, we test it on several datasets and further compare it with some state-of-the-art sequence analysis techniques. Our method performs as well as alignment methods in accuracy and requires far less time. For one viral dataset, it even provides better results than alignment methods. It also works better than k-mer based methods in accuracy.
Results
To test the effectiveness of our new method, we apply it to different datasets, which include mammalian mitochondrial, viral, and bacterial genomes. Some datasets are small sized and others are medium sized. The length of the sequences ranges from tens of thousands to several million base pairs. For each dataset, the multiple encoding vectors of its genetic sequences are calculated using our method. To verify the efficiency of our algorithm, the FFP method is used for comparison. We use MEGA to draw the phylogeny of the datasets for the two methods. Unlike FFP, which applies the Jensen-Shannon divergence to compute the pairwise distance between any two vectors, our new method uses the Euclidean distance. When using the FFP method, we choose k to be floor(log 4 (n)), where n is the minimum length of the sequences studied. From the results we can see that the efficiency of our new method is better than FFP on these datasets.
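The two ingredients of this comparison, the FFP word length k = floor(log_4(n)) and the Euclidean distance between encoding vectors, are easy to reproduce; the hedged sketch below uses illustrative function names only, and the resulting distance matrix would then be clustered with UPGMA (for example in MEGA).

```python
import math
import numpy as np

def ffp_word_length(sequence_lengths):
    """Word length used for the FFP comparison: floor(log_4(n)),
    where n is the length of the shortest sequence in the dataset."""
    return math.floor(math.log(min(sequence_lengths), 4))

def euclidean_distance_matrix(vectors):
    """Pairwise Euclidean distances between feature vectors; the matrix
    is then clustered with UPGMA to obtain the phylogenetic tree."""
    X = np.asarray(vectors, dtype=float)
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# For mitochondrial genomes of roughly 16,500 bp this gives k = 7,
# matching the word length quoted for the mammalian dataset below.
print(ffp_word_length([16500, 16800]))
```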
Phylogeny of mammals. The first benchmark dataset contains 41 mammalian complete mitochondrial genomes (mtDNA) with about 16500 base pairs. In these mammals, the structure of mtDNA is circular and double-stranded. One strand of each mtDNA is rich in guanine and is referred to as the heavy strand, and the other strand is rich in cytosine and is referred to as the light strand. In our study, the heavy strands of mtDNA are chosen. These sequences are not highly conserved and have a fast mutation rate 22 . Using our proposed method along with the UPGMA algorithm, the phylogenetic tree of the 41 mitochondrial genomes is constructed. As displayed in Fig. 1, the 41 species are correctly grouped into 8 clusters: Primates (red), Cetacea (green), Artiodactyla (pink), Perissodactyla (light green), Rodentia (black), Lagomorpha (dark red), Carnivore (blue), and Erinaceomorpha (grey). According to the FFP method, the length of substrings is chosen to be 7 and thus the phylogeny of the 41 species is obtained. The phylogenetic tree is provided in the supplementary file. As displayed in Fig. S1, the 8 clusters are not classified well. The four species in Perissodactyla are improperly clustered into two clades. Indus River Dolphin from Cetacea is separated from the other species from Cetacea. The Carnivore, Primates and Artiodactyla clades are all mistakenly divided into more than one group. Obviously, for this dataset our new method gets a better result.

Phylogeny of Influenza A virus. The next dataset contains 38 Influenza A viruses 23 . Influenza A viruses are single-stranded, negative-sense, segmented RNA viruses. The genome of the flu virus contains around 13500 nucleotides and is divided into 8 segments. The subtypes of Influenza A viruses are classified according to an H number for the type of hemagglutinin (H) and an N number for the type of neuraminidase (N) on viral surfaces 24 . The viruses have 18 different H serotypes (H1-H18) and 11 different N serotypes (N1-N11). For instance, the recently appearing H7N3 is an Influenza A strain with a type 7 hemagglutinin (H) and a type 3 neuraminidase (N). The several subtypes of Influenza A viruses are able to infect many species including wild birds, swines, dogs, horses, and even humans. It is known that the H1N1, H1N2, and H3N2 subtypes can circulate among humans. In this study, we utilize the Segment 6 gene encoding N (neuraminidase) of Influenza A virus to perform phylogenetic analysis. Using our method, the 38 flu viruses are correctly clustered into five groups as shown in Fig. 2. Our result is consistent with a previous study 25 . According to the FFP method, the choice for the length of the k-mer of these viruses is 5 and their phylogenetic tree is constructed. As shown in Fig. S2 (in the supplementary file), however, some of the H1N1 viruses and some H5N1 viruses are placed together. Therefore, on this dataset our method is also superior to FFP.
Phylogeny of Human rhinovirus.
To test our proposed method, we perform the phylogeny of 116 Human rhinoviruses (HRVs). HRVs are the major cause of the common colds and lead to more than one-half of cold-like diseases. HRVs are single-stranded positive sense RNA viruses of about 7200 nucleotides in length. The genome of Human rhinovirus contains a single gene encoding one polyprotein, which is then cleaved to yield 11 proteins. HRVs belong to Enterovirus genus and Picornaviridae family. They form three phylogenetically distinct clusters: HRV-A, HRV-B, HRV-C 4,26 . In the previous work, 113 complete genomes composed of the three clusters are used to explore their evolutionary relationships. In addition, 3 HEV-C genomes were also added to the dataset as outgroup. Based on multiple sequence alignment (MSA), the evaluation was very time-consuming although the classification performed well. Unlike the MSA, our new approach is very fast to finish conversion from the genome sequences to 18 dimensional vectors. As shown in Fig. 3, the three groups of 116 genomes are identified and the remaining 3 outgroup viruses are clustered together alone. According to the FFP method, the length of words chosen for this dataset is 6 and the phylogenetic tree of 116 HRVs is obtained. As shown in 27,29 .
Since the viral genomes in this dataset have about 19 kilobases, the k-mer length is thus 7. The phylogenetic relationships among these viruses using FFP method are depicted in Fig. S4 (in the supplementary file). However, the SUDV branch is not clustered with RESTV branch in this figure, which is not consistent with the confirmed taxonomy of Ebolavirus genus.
Phylogeny of coronavirus.
Coronaviruses are enveloped, single-stranded, positive-sense RNA viruses within the family Coronaviridae. Their genomes range in length from 25,000 to 32,000 nucleotides. Some subtypes of coronaviruses can also infect humans and cause many respiratory and gastrointestinal diseases with highly variable severity. Recently, the emergent Middle East respiratory syndrome coronavirus (MERS-CoV) had a high case-fatality rate and caused an outbreak of severe respiratory disease in the Arabian Peninsula. As a result of pandemics from coronaviruses, especially SARS, the classification and evolutionary relationships among these viruses have been extensively investigated. We apply our 18-dimensional vector method to a widely studied dataset containing 30 coronaviruses and 4 extra non-coronaviruses as outgroup. The newly discovered human coronavirus NL63 was clustered together with human coronavirus 229E (group 1) 30 . At the same time, it was positioned away from HCoV-OC43 (group 2) and SARS (group 4). The newly isolated HCoV-HKU1 was thought to be a distinct member of group 2 31 . However, the HCoV-HKU1 was designated as a member of a new group 5 32 . Using our novel method, the phylogeny of the coronaviruses is reconstructed based on the UPGMA algorithm. As shown in Fig. 5, HCoV-NL63 and HCoV-229E are grouped into group 1 as validated in the previous research 33 . Group 2, group 4, and the new group 5 are separated in the tree, which is consistent with the studies in published papers 31,32 . According to the FFP method, the length of substrings chosen for these organisms is 6 and the phylogeny is constructed. As displayed in Fig. S5 (in the supplementary file), the HCoV-229E virus from group 1 is incorrectly placed into group 2. Besides, the four viruses used as outgroup are not clustered together. It means that for this dataset our new method still has an advantage over FFP.
Phylogeny of bacteria. Bacteria, as one kind of prokaryote, are abundant on the earth. The phylogeny of these species has been a crucial topic in biology. As a result, the classification of bacteria based on whole genome sequences has attracted more and more attention. Due to the long genomic sequences of bacteria, with more than 1 million base pairs (Mb), the traditional multiple sequence alignment methods are computationally infeasible for phylogeny. To test our model, we analyze a dataset consisting of 59 bacterial species. This set contains 15 families: Aeromonadaceae, Alcaligenaceae, Bacilleceae, Borreliaceae, Burkholderiaceae, Caulobacteraceae, Clostridiaceae, Desulfovibrionaceae, Enterobacteriaceae, Erwiniaceae, Lactobacillaceae, Mycoplasmataceae, Rhodobacteraceae, Staphylococcaceae, Yersiniaceae. The genome lengths in this dataset mainly range from 3 to 10 Mb. As displayed in Fig. 6, the 15 families are well separated using our method. Based on the 9-mer FFP method, these families are also separated, as illustrated in Fig. S6 (in the supplementary file). The two results are the same for the phylogeny of the 59 bacteria at the family level. However, our tree has some advantages at the phylum level. In Fig. 6, the three families Lactobacillaceae, Clostridiaceae and Staphylococcaceae from phylum Bacilli are clustered together. In the tree constructed by the FFP model (Fig. S6), these three families are not clustered together.
Discussion
Sequence comparison is crucial to understand the evolutionary relationships among organisms. Methods with alignment usually perform well if the genetic sequences are homologous. However, results obtained with alignment may be unreliable due to genetic recombination and high mutation rates. Besides, these approaches are time-consuming and thus are not suitable to align long sequences and large volume of data. According to the results mentioned above, our proposed alignment-free method shows its advantages on accuracy over alignment-based methods. Compared to a state-of-the-art method based on k-mer, the new technique still has the merit to construct correct evolutionary relationships. Our method is applicable to phylogeny construction of many kinds of species including mammals, viruses, and bacteria.
Additionally, we also compare our tool to the ClustalW algorithm 2 , which is one of the most popular alignment methods. All programs are run on a laptop with 8 GB RAM and an Intel(R) Core(TM) i7-4500U CPU at 1.80 GHz, and the running time unit is minutes. As listed in Table 1, our approach is very fast in computational speed. Our method has the potential to construct phylogenies for whole genomes as large as mammalian ones. Note that mammalian genomes are often divided into several chromosomes. To compute the position of nucleotides, we only choose chromosome X of mammals to do the phylogenetic analysis. Our dataset includes the species: chimpanzee (Pan troglodytes), human (Homo sapiens), monkey (Macaca mulatta), gorilla (Gorilla gorilla), dog (Canis familiaris), horse (Equus caballus), mouse (Mus musculus), opossum (Monodelphis domesticus), and platypus (Ornithorhynchus anatinus). The length of these X chromosomes ranges from about 6 to 147 Mb. The accession numbers are listed in the supporting file. Based on the new multiple encoding vector, the UPGMA phylogenetic tree of the nine mammals is obtained. As shown in Fig. S8 in the supporting file, the primates are clustered together. Platypus and opossum are placed at the base of the tree.
In spite of the good performance in speed and accuracy of our approach, there is still some room for improvement. The method is sensitive to the length of the sequences analyzed. The sequences used for phylogenetic analysis should be approximately complete. For whole genome sequences, the use of partial genomes may produce incorrect evolutionary relationships. If we only use a gene or segment to infer the phylogeny of species, the gene sequences or segments for all organisms investigated should be intact. Thus we need to carefully prepare the sequences of organisms before constructing their phylogeny. Although multiple sequence alignment does not have this requirement, gaps are automatically inserted into sequences to make their lengths the same, which consumes much time.
In conclusion, we formulate a novel 18-dimensional vector method to compare biological sequences. The novelty of this method is that some important chemical and physical properties of sequences are incorporated into it. Traditionally, alignment-based methods often produce phylogenies of species with high accuracy. However, these methods are time-consuming and cannot deal with large datasets and long sequences. In comparison, our approach is very fast and suitable for large biological sequences. Most importantly, the results tested on several different datasets show that this vector method can provide correct evolutionary relationships for different kinds of species.
Methods
Let the set of the 4 bases be $\{A, C, G, T\}$, and let $Q = (s_1, s_2, \ldots, s_n)$ be a DNA sequence of length $n$, that is, each $s_i$ belongs to this set. The four bases are categorized into two groups according to three kinds of chemical and physical properties. The A and G bases are purines denoted by the letter R. The C and T bases are pyrimidines represented by Y. In another way of grouping, the A and C nucleotides are amino denoted by M and the G and T are keto denoted by K. According to the H bond, G and C have strong H bonds and they are represented by the letter S. The A and T bases contain weak H bonds and they are represented by W. We define three numerical values for each of the letters R, Y, M, K, S and W. To characterize the distribution of R and Y in the sequence Q, we first replace the letters A and G by R and replace C and T by Y in the sequence Q. Then the sequence only contains two kinds of letters: R and Y. For R we define $w_R(\cdot): \{R, Y\} \to \{0, 1\}$ such that $w_R(s_i) = 1$ if $s_i = R$, and 0 otherwise; then $n_R = \sum_{i=1}^{n} w_R(s_i)$ is the number of R, $\mu_R = \frac{1}{n_R}\sum_{i=1}^{n} i\, w_R(s_i)$ is the average position of R, and $D_R^2$ is the variation of the positions of R appearing in the sequence. Similarly, for Y we define $w_Y(\cdot): \{R, Y\} \to \{0, 1\}$ such that $w_Y(s_i) = 1$ if $s_i = Y$, and 0 otherwise. Then we get three characteristics for Y: $n_Y$, $\mu_Y$ and $D_Y^2$. For this kind of classification of the nucleotides, six values are used to present the distribution of the four bases with respect to this chemical property. In the same way, we define the other triplets for M, K, S, and W. Thus an 18-dimensional vector of the DNA sequence Q is defined by $(n_R, \mu_R, D_R^2, n_Y, \mu_Y, D_Y^2, n_M, \mu_M, D_M^2, n_K, \mu_K, D_K^2, n_S, \mu_S, D_S^2, n_W, \mu_W, D_W^2)$. We call this the multiple encoding vector because it utilizes three groups of letters to encode the DNA sequence. The construction of this vector is shown in Fig. 7. Pairwise distances among the vectors derived from the new method are computed using the Euclidean distance. Then the distance matrix of all biological sequences is constructed. Based on the UPGMA algorithm, the phylogenetic tree of the organisms is built using the MEGA 6.0 software 37,38 . Data Availability. The datasets analysed during the current study are available in the "supplementary file" and the R source code in this paper is freely available to the public upon request.
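A minimal sketch of the multiple encoding vector is given below. The three two-letter groupings follow the Methods text; since the exact normalization of the variation term D^2 is not fully recoverable from the extracted text, the plain positional variance is used here as an assumption, and the ordering of the 18 entries is only illustrative.

```python
import numpy as np

# Three two-letter encodings of the four bases, as described in Methods:
# purine/pyrimidine (R/Y), amino/keto (M/K), strong/weak H bond (S/W).
GROUPS = {
    "RY": {"A": "R", "G": "R", "C": "Y", "T": "Y"},
    "MK": {"A": "M", "C": "M", "G": "K", "T": "K"},
    "SW": {"G": "S", "C": "S", "A": "W", "T": "W"},
}

def multiple_encoding_vector(seq):
    """For each of the six letters (R, Y, M, K, S, W) record its count,
    its mean position and the variation of its positions in the recoded
    sequence, giving an 18-dimensional feature vector."""
    seq = seq.upper()
    features = []
    for mapping in GROUPS.values():
        recoded = [mapping[b] for b in seq if b in mapping]
        for letter in sorted(set(mapping.values())):
            positions = np.array(
                [i + 1 for i, ch in enumerate(recoded) if ch == letter],
                dtype=float,
            )
            n = len(positions)
            mu = positions.mean() if n else 0.0
            var = positions.var() if n else 0.0   # assumed form of D^2
            features.extend([n, mu, var])
    return np.array(features)

print(multiple_encoding_vector("ACGTACGTTGCA"))
```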
|
2018-04-03T00:15:57.834Z
|
2017-09-22T00:00:00.000
|
{
"year": 2017,
"sha1": "00c39eda574d417197b4b22e4fceddd15a169263",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-12493-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67773f5f18353fa2783c550d6c7fc52a2e10d0bc",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
15659471
|
pes2o/s2orc
|
v3-fos-license
|
Stochastic Replicator Dynamics Subject to Markovian Switching
Population dynamics are often subject to random independent changes in the environment. For the two strategy stochastic replicator dynamic, we assume that stochastic changes in the environment replace the payoffs and variance. This is modeled by a continuous time Markov chain in a finite atom space. We establish conditions for this dynamic to have an analogous characterization of the long-run behavior to that of the deterministic dynamic. To create intuition, we first consider the case when the Markov chain has two states. A very natural extension to the general finite state space of the Markov chain will be given.
Introduction
Stochastic environments where independent external forces change the dynamic of the system are common in biological and economic settings [23,14,5,19,6,13,29,15,10,22,1]. An example to illustrate a complete and state-independent change in the dynamic is sickle-cell anemia [1,2]. Being a carrier for sickle-cell lowers an individual's fitness; however, during malaria outbreaks, sickle-cell carriers have immunity, which increases their fitness. The random event of malaria outbreaks may be described as a continuous-time Markov chain that is independent of the population dynamic, yet changes the population dynamic.
Antibiotics affecting microbial populations provide another example of a population subjected to an independent stochastic environment [4]. The authors discuss the effects of antibiotics on bacteria, such as Escherichia coli and Salmonella enterica, and what persistent environment is needed to support the type that is antibiotic resistant.
Although the motivation for our model is mostly biological, there is also a relationship with economics. A commonly used tool in economics is Markovian switching [14,5,19,6]. This includes modeling business cycles and GDP growth, electricity spot price models, and interest rates, all of which are agents working in a stochastic environment. Kussell and Leibler [23] model the phenomenon of when bacteria change their phenotype to adjust to the stochastic environment. The authors assume there are n-phenotypes and linear growth where the fitness and switch to another phenotype is contingent on the current state of the environment. The stochastic environment is modeled by a k atom state continuous time Markov process, and is independent of the evolution of the population. In a current state, if a certain phenotype's fitness is comparatively small, this increases the probability of phenotype switching. The authors then derive the optimal long-term growth rate.
Markovian switching has also been applied to Lotka-Volterra and epidemiological population dynamics [13,29]. Gray et al [13] assumed a deterministic susceptible-infected-susceptible model and changed parameters according to a continuous Markov chain. The author discovered that the parameters coupled with the unique invariant measure of the Markov chain gave essentially new rates and found similar inequalities for either an endemic to occur or for the disease to become negligible. Takeuchi et al [29] analyzed the switching between two deterministic Lotka-Volterra models and showed that this system is neither permanent nor dissipative.
Fudenberg and L. A. Imhof [10] applied a simpler method where the event that switches the fitness of a population modeled by a Moran process was independent and identically distributed. The authors assumed a two state switched system, considered the mean of the fitness, and compared the switched fitnesses to derive their results.
Considering both a discrete Moran and a deterministic continuous time replicator dynamic, Harper et al [15] applied a method similar to of Fudenberg and L. A. Imhof to determine whether the mean game of the switched system was either a strategy 1 dominant (prisoner's dilemma), strategy 2 dominant (prisoner's dilemma), coordination game, or mixed strategy dominant (hawk-dove). For the continuous time replicator dynamic, the authors determined this classification by comparing the ratios of the difference between the payoffs of the two underlying games and the ratio of whether the event will occur or not. These results are different than the ones derived in this paper.
We analyze a Markovian switched stochastic replicator dynamic with two strategies and determine conditions for this "new" game to be classified in one of the four games mentioned in the previous paragraph. The times between jumps to another state for the continuous time Markov chain is assumed to have an exponential distribution. Since the switched systems are stochastic, the classifications are similar to the ones given by Fudenberg and Harris [9], in that the inequality of a payoff of pure strategy against itself and the other strategy are perturbed by half the difference of the variances (perturbation from the white noise), and the comparison of the transition from a fixed state to the other states (perturbation from the Markov chain). Since the switching indirectly perturbs the dynamic, appropriately determined constants (that are not unique) are associated to a particular state. The difference between the constant of the fixed state and another state, multiplied by this transition rate, compare this transition. The sum of these terms encompasses the entire transition comparison for this state. For example, if the dynamic switches between two states where the transition rates are equal, then the addition/subtraction of an appropriately sized constant to the inequalities derived by Fudenberg and Harris [9] determine the proper inequalities for the dynamic.
To help create intuition, we first consider a Markov chain in a two atom state space, then extend the analogous results to the general finite atom state space. To illustrate the conditions for the long-run behavior, we give an example of cooperation in a stochastic environment where defection is punished in one environment, and not punished in the other. The efficacy of punishment is then explored.
Stochastic Replicator Dynamic
Consider a two-player symmetric game, where $a_{ij}$ is the payoff to a player using pure strategy $S_i$ against an opponent employing strategy $S_j$, and take $A = (a_{ij})$ as the payoff matrix. Within a population we assume that every individual is programmed to play a pure strategy. For $i = 1, 2$, let $p_i(t)$ be the size of the subpopulation that plays strategy $S_i$ at time $t$, which we denote as the $i$th subpopulation. Furthermore, define $p(t) := (p_1(t), p_2(t))^T$, $P(t) := p_1(t) + p_2(t)$, and $s(t) := (s_1(t), s_2(t))^T$, where $s_i(t) := p_i(t)/P(t)$ is the frequency of the $i$th subpopulation. When a player in the $i$th subpopulation is randomly matched with another player from the entire population, $(A s(t))_i$ is the average payoff for this individual, which we take to be the fitness of the player. We assume growth is proportional to fitness: $\dot{p}_i(t) = p_i(t)\,(A s(t))_i$, and hence $\dot{s}_i(t) = s_i(t)\left[(A s(t))_i - s(t)^T A s(t)\right]$. This is the replicator dynamic. For uniformity of notation, we consider the payoff matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
Simplifying the dynamic above, we have $\dot{s}_1(t) = s_1(t)s_2(t)\left[(a-c)s_1(t) + (b-d)s_2(t)\right]$. Since $s_2(t) = 1 - s_1(t)$ (see [7]), we may focus on the dynamic of $s_1(t)$ alone. The long-run behavior falls into one of four cases: 1. if $a > c$ and $b > d$, then $S_1$ is the only dominant strategy and $(1, 0)$ is stable (strategy 1 dominant); 2. if $a < c$ and $b < d$, then $S_2$ is the only dominant strategy and $(0, 1)$ is stable (strategy 2 dominant); 3. if $a > c$ and $d > b$, then $S_1$ and $S_2$ are the only dominant strategies, where $(1, 0)$ and $(0, 1)$ are stable, and convergence to either point is contingent on the initial condition (coordination game); 4. if $a < c$ and $b > d$, then the interior point $\left(\tfrac{b-d}{b-d+c-a}, \tfrac{c-a}{b-d+c-a}\right)$ is the only evolutionarily stable strategy (mixed strategy dominant).
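The four cases above can be read off directly from the payoff entries; the short sketch below is our own illustration of that classification, with example payoffs borrowed from the punishment example discussed later in the paper.

```python
def classify_2x2(a, b, c, d):
    """Classify the deterministic two-strategy replicator dynamic with
    payoff matrix [[a, b], [c, d]] into the four cases listed above."""
    if a > c and b > d:
        return "strategy 1 dominant"       # (1, 0) attracts all interior points
    if a < c and b < d:
        return "strategy 2 dominant"       # (0, 1) attracts all interior points
    if a > c and d > b:
        return "coordination game"         # both vertices stable
    if a < c and b > d:
        return "mixed strategy dominant"   # interior evolutionarily stable strategy
    return "degenerate case (some payoffs are equal)"

print(classify_2x2(1, -1, 2, 0))    # defection dominant (no punishment)
print(classify_2x2(1, -1, 0.2, 0))  # coordination game (with punishment)
```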
In this paper we give conditions that correspond to one of the games listed above. We now describe the stochastic replicator dynamic and discuss the analogous characterizations. Throughout this paper, we have the complete probability space $(\Omega, \mathcal{F}, P)$, with the filtration $\{\mathcal{F}_t\}_{t \in \mathbb{R}_+}$, where $\mathcal{F}_0$ contains all of the null sets of $\mathcal{F}$, and the filtration is right-continuous. Fudenberg and Harris [9] consider a continuous time stochastic replicator dynamic by first assuming $dp_i(t) = p_i(t)\left[(A s(t))_i\,dt + \sigma_i\,dW_i(t)\right]$, for $\sigma_i \in \mathbb{R}_+$ and $W_i(t)$ pairwise independent standard Wiener processes. For $i = 1, 2$ and $j = 3 - i$, Itô's lemma yields $ds_i(t) = s_i(t)s_j(t)\left\{\left[(A s(t))_i - (A s(t))_j - \sigma_i^2 s_i(t) + \sigma_j^2 s_j(t)\right]dt + \sigma_i\,dW_i(t) - \sigma_j\,dW_j(t)\right\}$. This is known as the stochastic replicator dynamic. The idea behind this model is that randomness comes from the aggregate shock, or population level interactions, that affects the fitness of each type. The only stationary points for this dynamic are the vertices of the simplex.
Keeping the payoff matrix as $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, we have the dynamic $ds_1(t) = s_1(t)s_2(t)\left\{\left[(a-c)s_1(t) + (b-d)s_2(t) - \sigma_1^2 s_1(t) + \sigma_2^2 s_2(t)\right]dt + \sigma\,dW(t)\right\}$, where $W(t)$ is a standard Brownian motion and $\sigma = \sqrt{\sigma_1^2 + \sigma_2^2}$.
Since $s_2(t) = 1 - s_1(t)$, we may focus on the dynamic of $s_1(t)$. For simplicity we write $s(t)$ instead of $s_1(t)$. Fudenberg and Harris [9] assumed the same inequalities of the payoffs as above and derived the following proposition to determine the conditions for the stability of the process.
(Mixed Strategy Dominant)
In fact, the process is positive recurrent with a unique invariant measure.
In this model there is a possibility that the variances will trump an evolutionarily stable strategy, or change a dominated pure strategy into a dominant one, e.g., $b > d > b + \frac{\sigma_2^2 - \sigma_1^2}{2}$. Throughout this paper, the "stochastic" classifications in Proposition 2.1 will be the reasoning for a vertex being either stable or unstable. We are now ready to define the switched dynamic.
For two stochastic replicator dynamics, define the payoff matrices $A_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}$ and $A_2 = \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix}$, and the corresponding diffusion coefficients $\sigma_{11}$ and $\sigma_{21}$, and $\sigma_{12}$ and $\sigma_{22}$. Assuming that $r(t)$ is independent of the Brownian perturbation, we consider the replicator dynamic with Markovian switching given by $ds(t) = s(t)(1-s(t))\left\{\left[(a_{r(t)}-c_{r(t)})s(t) + (b_{r(t)}-d_{r(t)})(1-s(t)) - \sigma_{1r(t)}^2 s(t) + \sigma_{2r(t)}^2 (1-s(t))\right]dt + \sigma_{r(t)}\,dW(t)\right\}$ (2), where $\sigma_{r(t)} := \sqrt{\sigma_{1r(t)}^2 + \sigma_{2r(t)}^2}$. To simplify the notation, we set $S(t) = \big(s(t), r(t)\big)$.
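To make the switched dynamic concrete, the following Euler-Maruyama sketch simulates one sample path. It assumes the drift and diffusion written out in Equation (2), approximates the exponential holding times of r(t) by per-step jump probabilities q_ij*dt, and borrows the payoff matrices, noise levels, and switching rates from the punishment example of a later section; it is an illustration rather than the authors' code.

```python
import numpy as np

def simulate_switched_replicator(A, sig1, sig2, Q, s0=0.5, r0=0,
                                 T=50.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the Markov-switched stochastic replicator
    dynamic.  A[i] is the 2x2 payoff matrix in environment i, sig1[i] and
    sig2[i] are the diffusion coefficients of the two subpopulations, and
    Q is the generator of the continuous-time Markov chain r(t)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    s, r = s0, r0
    path = np.empty(n_steps)
    for k in range(n_steps):
        a, b = A[r][0]
        c, d = A[r][1]
        drift = s * (1 - s) * ((a - c) * s + (b - d) * (1 - s)
                               - sig1[r] ** 2 * s + sig2[r] ** 2 * (1 - s))
        vol = s * (1 - s) * np.sqrt(sig1[r] ** 2 + sig2[r] ** 2)
        s += drift * dt + vol * np.sqrt(dt) * rng.standard_normal()
        s = min(max(s, 0.0), 1.0)
        # switch the environment with probability q_{rj} * dt per step
        for j in range(len(Q)):
            if j != r and rng.random() < Q[r, j] * dt:
                r = j
                break
        path[k] = s
    return path

# Payoffs, noise, and rates from the punishment example (q_12 = 1, q_21 = 3).
A = [np.array([[1.0, -1.0], [2.0, 0.0]]), np.array([[1.0, -1.0], [0.2, 0.0]])]
Q = np.array([[-1.0, 1.0], [3.0, -3.0]])
sig = np.sqrt([0.15, 0.10])
path = simulate_switched_replicator(A, sig1=[sig[0], sig[0]],
                                    sig2=[sig[1], sig[1]], Q=Q)
print(path[-1])   # fraction of cooperators at the end of the run
```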
For $L$ the infinitesimal operator of the switched diffusion and $V(x, i)$ twice continuously differentiable in $x$ for each $i$, we have $LV(x, i) = f(x, i)\frac{\partial V(x, i)}{\partial x} + \frac{1}{2}g^{2}(x, i)\frac{\partial^{2} V(x, i)}{\partial x^{2}} + \sum_{j \neq i} q_{ij}\left(V(x, j) - V(x, i)\right)$, where $f(x, i)$ and $g(x, i)$ denote the drift and diffusion coefficients of Equation (2) in state $i$. For more information, see page 48 in [24] and page 103 in [28]. We now define the notion of stochastic stability that will be used throughout this paper. Although very similar to the definitions given in Khasminskii [20], they are very natural for the switched dynamic. We follow [21,24], and note that the definition is given for the point $x = 0$ and for a process evolving on the unit interval, but it has a natural extension to $x = 1$. Although we first consider the case when $r(t)$ is in a two atom state space, we give the definition for a general finite $n$ atom state space, which we call $I := \{1, 2, \ldots, n\}$.
We define P x,i as the probability measure corresponding to S(t) when s(0) = x and r(0) = i, almost surely. Throughout this paper, the initial conditions are assumed to almost surely hold.
Definition 3.1. The stationary point $x = 0$ is said to be: 1. stable in probability if, for any $\epsilon > 0$ and $i \in I$, $\lim_{x \to 0} P_{x,i}\big(\sup_{t \ge 0} s(t) > \epsilon\big) = 0$; 2. asymptotically stable in probability if it is stable in probability and, for any $i \in I$, $\lim_{x \to 0} P_{x,i}\big(\lim_{t \to \infty} s(t) = 0\big) = 1$. We now define the possible properties that the dynamic can hold. The definitions are given for a process evolving in the unit interval, and are the adjusted definitions for a process on the real line. Please see [24] for the general definitions.
where (x, i) is an arbitrary initial condition in D c × I. (The notation D c means the complement of the set D.) 3. If S(t) is not recurrent, it is called transient.
In this paper we apply the stochastic Lyapunov method to derive the stability or instability of each vertex, and characterize long-run behavior of the dynamic by combining the stability or instability property for each vertex and utilizing these properties. Similar to the deterministic Lyapunov method, the stochastic Lyapunov method transforms the process so that it is a positive valued supermartingale (defined below), which implies that the process is decreasing.
Analysis of the Model
Before we begin our analysis, we show an important characteristic of Equation (2). Given any strict subinterval of the unit interval, $S(t)$ leaves this interval in finite time. Since each process of the switched dynamic has this property, the result is very natural. However, one may wonder if the switching could keep a significant number of sample paths within this interval. When reading the proof, notice that it does not depend on the size of the state space of $r(t)$, only on the state space having a finite number of atoms. Proposition 4.1. For $0 < x_0 < x_1 < 1$, any initial condition $(x, i) \in (x_0, x_1) \times \{1, 2\}$, and $\tau$ defined to be the time $S(t)$ leaves this interval, we have $E_{x,i}\,\tau < \infty$.
Proof. Define the positive function V (x, i) = e γ − e γx , where γ > 0. Notice that V (x, i) does not depend on i. For any i, we see that We now choose γ large enough so that −B i + A i + B i x + γ σ 2 i 2 x(1 − x) > 0 for all x ∈ (x 0 , x 1 ), and i ∈ {1, 2}. Thus, there exists a constant K > 0, such that LV (x, i) ≤ −K for any i. Dynkin's formula yields which implies that E x,i τ ∧ t ≤ V (x, i)/K. Taking t → ∞, the monotone convergence theorem tells us that that E x,i τ < ∞.
Borrowing and adjusting the Lyapunov function defined in [21,32], we give conditions for 0 and 1 to be stochastically stable or unstable. Take 0 < α < 1, and constants c 1 and c 2 . These constants are not unique and will be taken accordingly in order establish stability or instability of a particular vertex. Define the four positive Lyapunov functions as These limits in conjunction with the conditions for the dynamic to be a supermartingale will tell us whether our process is stable or unstable for the respective vertex. Conditions for the dynamic to be a supermartingale are given below.
Applying the infinitesimal generator to each function, for i = 1, 2, and take j := 3 − i, we have If there exist a neighborhood (within the simplex) around 0 or 1 such that LV ± 0 (x, i) ≤ 0 or LV ± 1 (x, i) ≤ 0 for x in the respective neighborhoods, then V S(t) is a supermartingale in these neighborhoods. To accomplish this goal, we find conditions for the functions defined in the curly brackets above to be the appropriate sign at either x = 0 or x = 1. Plugging in x = 0 or x = 1 as appropriate yields the following assumptions: Assumption 4.1. Assume there exists 0 < α < 1, c 1 , and c 2 , where one of the following inequalities holds for each i = 1, 2, and j := 3 − i: Notice that these assumptions are an extension to the inequalities derived by Fudenberg and Harris [9].
We are now ready to state a theorem about the stability or instability of the vertices. The argument is very similar to the proofs in Theorems 5.3.1, 5.4.1, and 5.4.2 in [20]. Since the proofs of these theorems only utilize the assumptions of the Markov process being continuous and the ability to apply the strong Markov property, one may see that there is natural extension to our theorem. For brevity, we omit the proof. (1) Assumption 4.1(i) holds, then x = 0 is asymptotically stable in probability; (2) Assumption 4.1(ii) holds, then x = 0 is unstable in probability; (3) Assumption 4.1(iii) holds, then x = 1 is asymptotically stable in probability; (4) Assumption 4.1(iv) holds, then x = 1 is unstable in probability.
Taking the conditions in Theorem 4.1, we establish the long-run behavior of the switched stochastic replicator dynamic. For example, if Assumption 4.1(i) (the point 0 is asymptotically stable) and Assumption 4.1(iv) (the point 1 is unstable) hold, then our dynamic will converge to 0 almost surely, which is the strategy 2 dominant game. This is Proposition 4.2. However, for the transient/recurrent properties to hold, the dynamic needs to be regular. One may see that this property does hold by Remark 2.1.
Since the process evolves on the unit interval, if the dynamics that are switched all have the same unstable vertex, then this vertex will also be unstable for the switched dynamic. This can be shown by setting $c_1 = c_2$ for the appropriate $V^{-}$ function. The same reasoning may be applied to a stable vertex. For the proofs of the following propositions, we define the stopping times: $\tau_\epsilon$ as the first time the process leaves the interval $(0, \epsilon)$, and $\bar{\tau}_\epsilon$ as the first time the process leaves the interval $(1 - \epsilon, 1)$. Furthermore, we assume that the initial conditions lie in $(0, 1) \times \{1, 2\}$. Proposition 4.2. For the dynamic (2), if Assumptions 4.1(i) and 4.1(iv) hold, then for any initial condition, $S(t)$ converges to 0 almost surely.
Notice that the above proof utilizes the characteristics of the process near each stationary point to show almost sure convergence to the point 0. For the case when 0 is unstable and 1 is asymptotically stable, the argument to show that the dynamic converges to 1 almost surely is identical. This gives us the following corollary.
The next proposition shows that when both stationary points are asymptotically stable in probability, the process is transient and will converge to either endpoint almost surely. Due to the complexity of the process, given an initial condition, we are unable to derive probabilities of converging to 0 and converging to 1. When there is no switching between processes, the probabilities are explicit and given in Fudenberg and Harris [9]. Proof. Take an arbitrarily small ǫ > 0. By Assumptions 4.1(i) and 4.1(iii), there exists a δ > 0 such that P x,i lim t→∞ Proposition 4.1 tells us thatτ δ has finite mean. Therefore, the strong Markov property yields P Since ǫ was arbitrary, we are able to conclude the statement.
The proposition below gives conditions for the dynamic to be positive recurrent. For example, if the dynamic switches between two "stochastic" mixed strategy dominant games, taking c 1 = c 2 for V − 0 and V − 1 tells us that the condition holds.
Therefore, the process hits the set (ǫ, 1 − δ) in finite time. The strong Markov property tells us that the process is positive recurrent in (ǫ, 1 − δ). Proposition 4.1 yields that the process is positive recurrent for any strict subinterval of (0, 1).
Remark 4.1. When q 12 = q 21 , whichever process has the "stronger" stability or instability to the particular vertex will dictate the dynamic. To illustrate this, consider the case when −B 1 + −α − 1 2 σ 2 1 > 0 and Since q 12 = q 21 , to show that Assumption 4.1(ii) may hold, we will subtract a positive constant from the first term, and add the same constant to the second term. If (which means the process for r(t) = 1 has a "stronger" instability to x = 0 than the process r(t) = 2 has stability at x = 0), then we may find a k > 0 where −B 1 + α − 1 2 σ 2 1 −k > 0 and −B 2 + α − 1 2 σ 2 2 + k > 0, which tells us that S(t) is unstable at x = 0.
Example of Defectors
Taylor et al [30] analyzed a prisoner's dilemma game between cooperators C and defectors D, where defection was punished. Assuming a payoff $b > 0$ for cooperation and a cost $c > 0$, where $b > c$, the payoffs are $\begin{pmatrix} b-c & -c \\ b & 0 \end{pmatrix}$, leaving the defection strategy the only dominant strategy. When punishment is taken into account, the payoffs are $\begin{pmatrix} b-c & -c \\ b-\gamma & 0 \end{pmatrix}$, where for a large enough $\gamma$, this is a coordination game. We consider the scenario where punishment for defection follows a stochastic environment. Take the payoff matrices for the two environments as $\begin{pmatrix} 1 & -1 \\ 2 & 0 \end{pmatrix}$ and $\begin{pmatrix} 1 & -1 \\ 0.2 & 0 \end{pmatrix}$, and the variance values as $\sigma_{21}^2 = 0.1 = \sigma_{22}^2$, $\sigma_{11}^2 = 0.15 = \sigma_{12}^2$, and $\sigma_1 = 0.5 = \sigma_2$. We call the game without punishment state 1 and the game with punishment state 2. Since in both states the point $(0, 1)$ (the population defecting) is stable, this is also true in the switched environment. This is Assumption 4.1(i), where, again, $0 < \alpha < 1$, $1 - \alpha c_1 > 0$, and $1 - \alpha c_2 > 0$. Thus, if $q_{21} > q_{12}$ one may be able to find $\alpha$, $c_1$, and $c_2$ to make these inequalities hold. However, if $q_{12} > q_{21}$ then it might be possible for $(1, 0)$ to be stable, which is Assumption 4.1(iii). In particular, take $q_{12} = 1$ and $q_{21} = 3$. Clearly, we should be able to find constants such that the inequalities in Assumption 4.1 hold.

Analysis With a Markov Chain with General Finite State Space
In this section we consider a generalized version of Equation (2), where r(t) ∈ {1, 2, . . . , n}, for some finite integer n. The analysis for the case when the state space was {1, 2} used only the characteristic of the dynamic near the vertices, and not the assumption that r(t) jumps between two atoms. Therefore, we have analogous results with identical proofs. However, since we have a general finite n state space, we need to slightly change the assumptions for stability and instability of the vertices. q ij . Furthermore for j = i, P r t + δ) = j r(t) = i = q ij δ + o(δ), and P τ k+1 − τ k ≥ T r(τ k ) = i = e q ii T for all T ≥ 0. Just as in the previous sections, we assume that the Markov chain is independent of the Brownian motion. We consider the dynamic where r(t) ∈ {1, 2, . . . , n}, and the payoff matrix A i = a i b i c i d i and variances σ 1i , σ 2i , and σ i := σ 2 1i + σ 2 2i correspond to the state r(t) = i. For L n the infinitesimal operator, and V (x, i) ∈ [0, 1] × {1, 2, . . . , n}, where for each i, V (·, i) is twice continuously differentiable, we have Applying L n to the same Lyapunov functions as in the previous sections, V ± 0 (x, i) and V ± 1 (x, i), we derive the following assumptions. Assumption 6.1. Assume there exists 0 < α < 1, c 1 , c 2 , . . ., c n−1 , and c n where one of the following inequalities holds for each i ∈ {1, 2, . . . , n}: q ij c i − c j 1 + αc i > 0; (iii) 1 + αc i > 0 and A i + −α + 1 2 q ij c i − c j 1 + αc i > 0; (iv) 1 − αc i > 0 and A i + α + 1 2 We use these assumptions to tell us the stability or instability of the process near the stationary points. Again, the arguments follow closely the arguments given in Theorems 5.3.1, 5.4.1, and 5.4.2 in [20], and so we omit the proof. to all other states. Since the switching indirectly perturbs the dynamic, each state is represented by an appropriately chosen, but not unique, constant. The difference of the constant for the fixed state and the constant of another state, multiplied by the transition rate from the fixed state to the other state, is the way we compare these two states. The sum of these terms is the comparison of the fixed state to all other states. There is an extra term involving the variance, however, this term is negligible.
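The exponential holding times and transition probabilities of r(t) stated above translate directly into a sampling routine. The sketch below assumes the standard jump-chain construction of a continuous-time Markov chain from its generator Q (holding time in state i is exponential with rate -q_ii, next state j chosen with probability q_ij/(-q_ii)) and reuses the two-state rates q_12 = 1, q_21 = 3 from the punishment example; it is an illustration only.

```python
import numpy as np

def sample_ctmc_path(Q, r0=0, T=10.0, seed=0):
    """Sample a trajectory of the environment chain r(t) on {0, ..., n-1}
    from its generator Q.  Returns the jump times and visited states."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, dtype=float)
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -Q[r, r]
        if rate <= 0.0:                     # absorbing state
            break
        t += rng.exponential(1.0 / rate)    # exponential holding time
        if t >= T:
            break
        probs = Q[r].copy()
        probs[r] = 0.0
        probs /= rate                       # jump-chain probabilities q_ij / (-q_ii)
        r = int(rng.choice(len(Q), p=probs))
        times.append(t)
        states.append(r)
    return np.array(times), np.array(states)

Q = np.array([[-1.0, 1.0], [3.0, -3.0]])    # q_12 = 1, q_21 = 3
times, states = sample_ctmc_path(Q, T=20.0)
print(list(zip(np.round(times, 2), states)))
```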
Although the proofs focused on the case when the continuous Markov chain is in a two atom state space, the arguments have a very natural extension to the general finite atom state space. It may be possible to find better stochastic Lyapunov functions to obtain more precise conditions than the ones derived in this paper. However, since the inequalities are an extension of the criteria derived by Fudenberg and Harris, the conditions may be optimal.
Finally, I believe that the assumptions given are exhaustive. Consider this example in a two atom state: , and q 12 = q 21 , then we are unable to saying anything about the dynamic 0. However, the two equalities imply that − B 2 + −α − 1 2 σ 2 2 < − B 2 + α − 1 2 σ 2 2 , which is impossible. Similar reasoning for other cases is how this hypothesis was derived.
|
2014-06-10T02:33:33.000Z
|
2014-06-10T00:00:00.000
|
{
"year": 2014,
"sha1": "4ab108b1ae603c998cee8005729f15182a788854",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jmaa.2015.02.016",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4ab108b1ae603c998cee8005729f15182a788854",
"s2fieldsofstudy": [
"Mathematics",
"Economics"
],
"extfieldsofstudy": [
"Mathematics",
"Biology"
]
}
|
245893374
|
pes2o/s2orc
|
v3-fos-license
|
A Verifiable Arbitrated Quantum Signature Scheme Based on Controlled Quantum Teleportation
In this paper, we present a verifiable arbitrated quantum signature scheme based on controlled quantum teleportation. The five-qubit entangled state functions as a quantum channel. The proposed scheme uses mutually unbiased bases particles as decoy particles and performs unitary operations on these decoy particles, applying the functional values of symmetric bivariate polynomial. As such, eavesdropping detection and identity authentication can both be executed. The security analysis shows that our scheme can neither be disavowed by the signatory nor denied by the verifier, and it cannot be forged by any malicious attacker.
Introduction
Since Bennett and Brassard [1] proposed the quantum key distribution (QKD) protocol in 1984, quantum cryptography has attracted extensive attention. Its security is guaranteed by the principles of quantum mechanics such as the Heisenberg uncertainty principle and the quantum no-cloning theorem. Quantum cryptography can provide the advantage of unconditional security, making the research of quantum cryptography increasingly important. Many important quantum cryptography branches have been developed, such as quantum key distribution [2,3], quantum signature (QS) [4][5][6], quantum teleportation (QT) [7], quantum authentication [8], and deterministic secure quantum communication [9].
Quantum signatures can be applied to verify the identity of the sender and the integrity of the information. The arbitrated quantum signature (AQS), providing many merits, has attracted much attention. In 2002, Zeng et al. [10] proposed the first arbitrated quantum signature scheme using the Green-Horne-Zeilinger (GHZ) state and the quantum one-time pad (QOTP). Based on the design of the classical arbitrated digital signature, the scheme provides a re-verification service for signatory and receiver using the online signature provided by a trusted third party arbitrator. In 2008, Curty and Lutkenhaus [11] investigated the scheme [10], and they claimed that it was not clearly described and that the safety analysis was incorrect. In response to the controversy of Curty et al., Zeng et al. [12] proved the scheme [10] in more detail. In 2009, to reduce the complexity and improve the efficiency of the protocol [10], Li et al. [13] proposed an AQS scheme based on the Bell states rather than the GHZ states and proved its advantages in terms of transmission efficiency and low complexity. Unfortunately, in 2010, Zou and Qiu [14] argued that Li's AQS scheme can be disavowed by the receiver, and they proposed an AQS protocol that uses bulletin boards and other security schemes that do not use entangled state. Their scheme further simplified the protocol of Li et al., and an improved AQS scheme was designed using single particles that can resist the denunciation of the receiver, thus reducing the difficulty of the physical implementation of AQS. However, in 2011, Gao et al. [15] conducted the first comprehensive cryptanalysis of previous AQS schemes in terms of forgery and disavowal.
They found that the existing AQS schemes based on QOTP encryption [13,14] all have some security problems. In other words, the receiver Bob can realize the existence of the forgery of a signature under the known message attack, while the sender Alice can successfully disavow any signature of hers through a simple attack. Choi et al. [16] found that most AQS protocols can be cracked through a specific existential forgery attack due to the careless taking advantage of the optimal quantum one-time pad based on Pauli operators. To overcome this weakness, they proposed a simple method to ensure the security of the signature. As Choi et al.proved, Bob could not simultaneously forge both the information and the signature to be verified by an arbitrator in the event of a dispute. In the same year, Yang et al. [17] demonstrated how to construct an arbitrated quantum signature protocol for classical messages using untrusted arbitrators. In order to solve the security problems experienced with the AQS protocol, Zhang et al. [18] analyzed the existing security problems [15,16] in 2013 and suggested some corresponding improvement strategies to counter forgery attacks. In order to solve the problem proposed by Gao et al. [15], Liu et al. [19] designed a new QOTP algorithm in 2014, which mainly relies on inserting decoy states into fixed positions, and constructed an unconditionally secure AQS scheme with fast signing and verifying using only a single particle state. In 2015, Li [20] used chained CNOT operation for encryption, instead of quantum one-time pad, to ensure the security of the protocol. To improve the efficiency of quantum bit to 100%, Yang [21] proposed an AQS scheme with the cluster state in 2016. In 2017, in order to resist forgery attacks and disavowal attacks, Zhang et al. [22] proposed a new quantum encryption based on the key-controlled chained CNOT operations (KCCC encryption), and through KCCC encryption, constructed an improved arbitrated quantum signature protocol. In 2016, Yang et al. [23] also proposed a theoretically extensible quantum digital signature with a star-like cluster state. In 2018, Shi et al. [24] proposed an arbitrated quantum signature scheme with the Hamiltonian algorithm based on blind quantum computation. Due to the application of blind quantum computation, it is not necessary to recover the original message during verification, which can improve the simplicity and operability of AQS. In the same year, Feng et al. [25] constructed an AQS scheme based on continuous variable squeezed vacuum states rather than coherent states to further improve coding efficiency and performance. In 2019, Feng et al. [26] proposed an AQS scheme with quantum walk-based teleportation, which does not require the preparation of entangled particles in advance, making the AQS protocol more flexible and practical. In 2020, Chen et al. [27] proposed an offline arbitrated semi-quantum signature scheme based on four-particle cluster states, in which the classical parties can sign with the assistance of a quantum arbitrator. Different from the typical arbitrated quantum signature schemes, the arbitrator in this protocol acts as a relay station of signature transmission and no longer interferes with the direct authentication of the signature, so that the signature receiver has completed authentication rights. There is no additional direct communication between the signatory and the receiver, which reduces the complexity of transmission. 
However, the above AQS schemes do not consider authentication among the signatory, the arbitrator, and the verifier.
Quantum teleportation is a technology that uses an entangled state or cluster state to transmit information between two communicating parties. The first scheme of quantum teleportation was proposed by Bennett et al. [28] in 1993. It is a scheme of teleportation through a classical channel and an EPR entangled channel. In 1998, Karlsson and Bourennane [29] proposed controlled quantum teleportation. Its basic idea is that the receiver reconstructs the unknown quantum state with the help of the controller. Until now, quantum teleportation has been studied using the GHZ states [30], W states [31,32], cluster states [33], and other entangled states as quantum channels. In recent years, many quantum signature schemes have used entangled states as quantum channels, and methods were proposed to transmit unknown quantum states of a single particle [34] or two particles [35]. In 2005, Brown et al. [36] developed a computationally feasible entanglement measurement method based on the negative partial transposition criterion, and found highly entangled four-qubit states and five-qubit states by searching. In 2008, Muralidharan and Panigrahi [37] investigated the usefulness of the five-qubit state introduced by Brown et al. [36] for quantum information applications such as quantum teleportation. The results show that this state can be used for perfect teleportation of arbitrary single- and two-qubit systems.
In this paper, we construct an arbitrated quantum signature scheme that can verify the identity of participants using five-qubit entangled states as quantum channels and controlled quantum teleportation. The security analysis result shows that our AQS scheme ensures that the signatory Alice cannot disavow, the verifier Bob cannot repudiate, and any illegal attacker can not forge. The proposed scheme uses mutually unbiased bases particles as decoy particles. It applies a pair of function values of symmetric binary polynomials to perform a unitary operation on decoy particles so that eavesdropping detection and identity verification between participants can be performed. In addition, the scheme only needs von Neumann measurement, Bell measurement, and a unitary operation to recover the single-particle qubit state. It replicates message from the signatory Alice to the verifier Bob, which is an attractive advantage for realizing an actual quantum communication network.
The scheme has the following advantages: (1) The mutually unbiased bases particles are used as decoy particles to prevent external adversaries from eavesdropping during transmission; (2) The receiver only needs to ask about the position of the decoy particles without asking what the measurement bases are in the process of eavesdropping detection; (3) The scheme provides the function of identity authentication among participants. It uses a pair of function values of symmetric binary polynomials as parameters of the unitary operation, which is used to act on the decoy particles to verify the identity of participants.
The rest of this article is organized as follows. In Section 2, the concepts of the arbitrated quantum signature, mutually unbiased bases and controlled quantum teleportation are introduced. In Section 3, the detailed process of the proposed protocol is described. In Sections 4 and 5, the verifiability analysis and the security analysis are conducted, respectively. Finally, a brief conclusion is provided in Section 6.
Preliminaries
In this section, we first briefly review some notions concerning the arbitrated quantum signature scheme and the definition of mutually unbiased bases, which is presented in [38]. Then, we introduce controlled quantum teleportation, which is used in constructing the arbitrated quantum signature scheme. Finally, an example of controlled quantum teleportation is given.
Some Notions Concerning the Arbitrated Quantum Signature
A digital signature scheme is a cryptographic primitive that provides the receiver of a message with assurance about the integrity of the data and the identity of the sender/signatory; furthermore, it offers unforgeability and undeniability. Similarly, an arbitrated signature scheme is a digital signature scheme carried out with the help of an arbitrator, a disinterested third party trusted to complete the protocol. Here "trusted" means that everyone involved in the protocol accepts what the arbitrator says as true and what he does as correct, and that he will complete his part of the protocol [14]. A quantum signature is the quantum counterpart of the classical digital signature.
Mutually Unbiased Bases
Definition 1 ([38]). Suppose that A_1 = {|ϕ_i⟩}_{i=1..q} and A_2 = {|ψ_i⟩}_{i=1..q} are two sets of standard orthogonal bases defined over a q-dimensional complex space C^q. A_1 and A_2 are said to be mutually unbiased if |⟨ϕ_i|ψ_j⟩| = 1/√q for all i, j ∈ {1, ..., q}. If any two sets of standard orthogonal bases A_1, A_2, · · · , A_m in the space C^q are pairwise unbiased, this collection is called a set of mutually unbiased bases. Additionally, one can find at most q + 1 mutually unbiased bases when q is an odd prime number. In particular, the computational basis is expressed as {|k⟩ | k ∈ D}, where D = {0, 1, ..., q − 1}. In addition to the computational basis, the remaining q groups of unbiased bases can be expressed as |ϕ_l^(j)⟩ = (1/√q) Σ_{k=0}^{q−1} ω^{k(l+jk)} |k⟩, where ω = e^{2πi/q}, j ∈ D labels the mutually unbiased basis and l ∈ D labels the vector within the given basis. For j ≠ j' these mutually unbiased bases satisfy |⟨ϕ_l^(j)|ϕ_{l'}^(j')⟩| = 1/√q. Defining the operator X_q = Σ_{n=0}^{q−1} ω^n |n⟩⟨n|, we have the following operations: for convenience of expression, X_q^x is denoted U_x, a unitary operator satisfying U_x |ϕ_l^(j)⟩ = |ϕ_{l+x}^(j)⟩. In particular, U_l |ϕ_0^(0)⟩ = |ϕ_l^(0)⟩.
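To make the notation above concrete, the following is a minimal numerical sketch, not taken from [38]: it assumes the standard Wootters-Fields construction for the q non-computational bases of an odd prime dimension q (the phase conventions in [38] may differ) and checks both the 1/√q overlap condition and the shift relation U_x |ϕ_l^(j)⟩ = |ϕ_{l+x}^(j)⟩.

```python
# Numerical check of the MUB properties used in Definition 1 (assumed construction).
import numpy as np

q = 5                                   # an odd prime dimension
omega = np.exp(2j * np.pi / q)

def mub_vector(j, l):
    """|phi_l^(j)> = (1/sqrt(q)) * sum_k omega^(k*(l + j*k)) |k>  (assumed construction)."""
    k = np.arange(q)
    return (omega ** (k * (l + j * k))) / np.sqrt(q)

# Every MUB vector is unbiased with respect to the computational basis {|k>}.
for j in range(q):
    for l in range(q):
        assert np.allclose(np.abs(mub_vector(j, l)), 1 / np.sqrt(q))

# Vectors taken from two different bases j != j' overlap with magnitude 1/sqrt(q).
for j in range(q):
    for jp in range(j + 1, q):
        overlaps = [abs(np.vdot(mub_vector(j, l), mub_vector(jp, lp)))
                    for l in range(q) for lp in range(q)]
        assert np.allclose(overlaps, 1 / np.sqrt(q))

# U_x = (sum_n omega^n |n><n|)^x shifts the vector index within a fixed basis j.
U = np.diag(omega ** np.arange(q))
x, j, l = 3, 2, 1
shifted = np.linalg.matrix_power(U, x) @ mub_vector(j, l)
assert np.allclose(shifted, mub_vector(j, (l + x) % q))
print("MUB overlap and shift properties verified for q =", q)
```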
Controlled Quantum Teleportation
Our arbitrated quantum signature scheme is based on controlled quantum teleportation. The five-qubit entangled state can be used for perfect teleportation of arbitrary single- and two-qubit systems [37]; it is suitable for maximal teleportation tasks and satisfies a task-oriented definition of maximal entanglement [36]. Owing to these advantages, in this section we use the five-qubit entangled state as the quantum channel for controlled quantum teleportation. The five-qubit entangled state found by Brown et al. [36] has the form |ψ_5⟩_{12345} = (1/2)(|001⟩|φ−⟩ + |010⟩|ψ−⟩ + |100⟩|φ+⟩ + |111⟩|ψ+⟩), where |φ±⟩ = (1/√2)(|00⟩ ± |11⟩) and |ψ±⟩ = (1/√2)(|01⟩ ± |10⟩) are the four Bell states of two particles. This state exhibits genuine multipartite entanglement under both the negative partial transposition measure and the von Neumann entropy measure. Even after tracing out one or two qubits from this state, entanglement is maintained in the resulting subsystem, so the state is highly robust.
The model of controlled quantum teleportation is shown in Figure 1. The working process of the controlled quantum teleportation is described below:
Step 1: Alice performs a three-particle von Neumann measurement on the particles (M, 2, 3) in her possession. The three-particle von Neumann measurement basis is {|χ_i⟩} (i = 1, 2, · · · , 8), as shown in Table 1.
Step 2: Alice sends her measurement outcome to Bob through the classical channel.
Step 3: Trent performs a Bell-state measurement on the particles (1, 4) in his possession.
Step 4: Trent sends his measurement results to Bob through the classical channel.
Step 5: Following Trent's and Alice's measurements, Bob performs an appropriate unitary operation U(5) and successfully reconstructs the original unknown quantum state |γ⟩_M on particle (5).
The participants' measurement outcomes and the unitary operation U(5) are shown in Table 4, in which MO denotes the measurement outcomes. The Pauli matrices appearing there are I = |0⟩⟨0| + |1⟩⟨1|, σ_x = |0⟩⟨1| + |1⟩⟨0|, σ_y = i|1⟩⟨0| − i|0⟩⟨1|, and σ_z = |0⟩⟨0| − |1⟩⟨1|.
Based on Alice's and Trent's measurement outcomes, Bob performs the corresponding unitary operation U(5) on particle (5), and the resulting state is α|0⟩ + β|1⟩, i.e., the original information particle state. That is, Alice successfully transmits the unknown quantum state to Bob under Trent's control.
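As an illustration only (this is not the paper's five-qubit circuit), the numpy sketch below shows the logic of this final correction step in the simplest, two-party teleportation of |γ⟩ = α|0⟩ + β|1⟩ through a |φ+⟩ pair: the Bell-measurement outcome determines which Pauli operation recovers |γ⟩. In the controlled five-qubit scheme, Bob's U(5) is selected in exactly the same way, but from the joint outcomes of Alice's three-particle von Neumann measurement and Trent's Bell measurement (Table 4).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Random information qubit |gamma>_M and the channel |phi+> shared by A (sender) and B (receiver).
rng = np.random.default_rng(0)
gamma = rng.normal(size=2) + 1j * rng.normal(size=2)
gamma /= np.linalg.norm(gamma)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(gamma, phi_plus)                       # qubit order: M, A, B

# Bell basis on (M, A) and the corrective Pauli applied on B for each outcome.
bell = {
    "phi+": (np.array([1, 0, 0,  1], dtype=complex) / np.sqrt(2), I2),
    "phi-": (np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2), Z),
    "psi+": (np.array([0, 1, 1,  0], dtype=complex) / np.sqrt(2), X),
    "psi-": (np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2), Z @ X),
}

for name, (bvec, correction) in bell.items():
    # <bell|_(M,A) (x) I_B applied to the 3-qubit state leaves B's (unnormalized) qubit.
    bob = (np.kron(bvec.conj().reshape(1, 4), I2) @ state.reshape(8, 1)).reshape(2)
    bob = correction @ bob
    bob /= np.linalg.norm(bob)
    assert np.isclose(abs(np.vdot(gamma, bob)), 1.0)    # equals |gamma> up to a global phase
print("all four Bell outcomes recover |gamma> after the corresponding Pauli correction")
```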
The Proposed Verifiable Arbitrated Quantum Signature Scheme
In our scheme, Alice the signatory, Bob the verifier, and Trent the arbitrator are defined as the three participants. The arbitrator Trent should be trusted by both Alice and Bob. The detailed procedures of our scheme can be described as follows.
Initializing Phase
Step I1: Alice and Trent share secret key K A and Bob and Trent share secret key K B . The secret key distribution task can be performed using the QKD protocol, which has been proven to provide unconditional security [39,40].
Step I2: Trent selects a symmetric binary polynomial of degree k − 1: F(x, y) = a_00 + a_10 x + a_01 y + a_11 xy + a_20 x² + a_02 y² + a_12 xy² + a_21 x²y + a_22 x²y² + · · · + a_{k−1,k−1} x^{k−1} y^{k−1} mod q, where q is a prime number, F(x, y) ∈ GF(q)[x, y], a_ij ∈ F_q, i, j ∈ {0, 1, · · · , k − 1}, a_ij = a_ji, and F_q is a finite field. Suppose that the public identity information of the participants Alice, Bob, and Trent is x_A, x_B, x_T, respectively. Trent computes the two share polynomials f_A(y) = F(x_A, y) and f_B(y) = F(x_B, y), then sends f_A(y) to Alice encrypted with K_A and f_B(y) to Bob encrypted with K_B.
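A small Python sketch (with hypothetical coefficients and identity values) of the share-polynomial mechanism just described: because a_ij = a_ji, Alice's f_A(x_B) and Bob's f_B(x_A) evaluate to the same field element, and it is this common value that later serves as the exponent of the unitary operation on the decoy particles.

```python
import random

q = 251                       # a prime modulus (illustrative)
k = 3                         # polynomial of degree k - 1 in each variable
random.seed(1)

# Symmetric coefficient matrix a_ij = a_ji over F_q.
A = [[0] * k for _ in range(k)]
for i in range(k):
    for j in range(i, k):
        A[i][j] = A[j][i] = random.randrange(q)

def F(x, y):
    """F(x, y) = sum_ij a_ij * x^i * y^j mod q."""
    return sum(A[i][j] * pow(x, i, q) * pow(y, j, q)
               for i in range(k) for j in range(k)) % q

x_A, x_B, x_T = 17, 42, 99    # public identities (illustrative)
f_A = lambda y: F(x_A, y)     # Alice's share polynomial f_A(y) = F(x_A, y)
f_B = lambda y: F(x_B, y)     # Bob's share polynomial   f_B(y) = F(x_B, y)
f_T = lambda y: F(x_T, y)     # Trent's share polynomial

assert f_A(x_B) == f_B(x_A)   # common value F(x_A, x_B) used for the Alice <-> Bob decoys
assert f_A(x_T) == f_T(x_A)   # common value F(x_A, x_T) used for the Alice <-> Trent decoys
print("F(x_A, x_B) =", f_A(x_B), "  F(x_A, x_T) =", f_A(x_T))
```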
Step I3: Alice receives f_A(y) and decrypts it with the secret key K_A; she then computes F(x_A, x_B) and F(x_A, x_T) from Bob's and Trent's public identity information x_B and x_T. Similarly, Bob receives f_B(y), decrypts it with K_B, and computes F(x_B, x_A) from Alice's public identity information x_A, while Trent computes F(x_T, x_A) based on Alice's and Trent's public identity information x_A and x_T. Due to the symmetry of the binary polynomial, F(x_A, x_B) = F(x_B, x_A) and F(x_A, x_T) = F(x_T, x_A).
Step I4: According to the values of F(x_A, x_B) and F(x_A, x_T), Alice executes the unitary operations U^{F(x_A,x_B)} and U^{F(x_A,x_T)} on decoy states |ϕ_0^(0)⟩ to prepare the decoy particles |µ_{A,B}⟩ = U^{F(x_A,x_B)} |ϕ_0^(0)⟩ and |µ_{A,T}⟩ = U^{F(x_A,x_T)} |ϕ_0^(0)⟩. The parameter formation process of the initializing phase is shown in Figure 2.
Signing Phase
Step S1: Alice converts the message to be signed into an n-qubit information string |Γ⟩ = (|γ_1⟩, |γ_2⟩, · · · , |γ_n⟩), where each information qubit has the form |γ_i⟩ = α_i|0⟩ + β_i|1⟩.
Step S2: Alice transforms the information qubit string |Γ⟩ into a secret qubit string |R_A⟩ = M_{K_A}(|Γ⟩) in terms of the secret key K_A. This transformation method can be seen in [14].
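The paper defers the concrete form of M_{K_A} to [14]. Purely as an illustration, the sketch below assumes a simple per-qubit Pauli one-time pad, in which the i-th information qubit is encrypted as X^{k_{2i}} Z^{k_{2i+1}} using two key bits; the actual transformation in [14] may differ, but whoever holds K_A can invert the operation and recover |Γ⟩ exactly.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def encrypt(qubit, k1, k2):
    """Apply the matrix X^k1 Z^k2 to one qubit (hypothetical instantiation of M_K)."""
    U = np.linalg.matrix_power(X, k1) @ np.linalg.matrix_power(Z, k2)
    return U @ qubit

def decrypt(qubit, k1, k2):
    """Apply the inverse (conjugate transpose) of X^k1 Z^k2."""
    U = np.linalg.matrix_power(X, k1) @ np.linalg.matrix_power(Z, k2)
    return U.conj().T @ qubit

rng = np.random.default_rng(2)
n = 4
key = rng.integers(0, 2, size=2 * n)                   # K_A: two classical bits per qubit
gammas = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
gammas /= np.linalg.norm(gammas, axis=1, keepdims=True)

encrypted = [encrypt(g, int(key[2 * i]), int(key[2 * i + 1])) for i, g in enumerate(gammas)]
recovered = [decrypt(c, int(key[2 * i]), int(key[2 * i + 1])) for i, c in enumerate(encrypted)]
assert all(np.allclose(r, g) for r, g in zip(recovered, gammas))
print("Pauli one-time pad round trip recovers all information qubits")
```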
Step S3: Alice prepares n five-particle entangled states and combines each information qubit with one five-particle entangled state to form a long six-particle qubit string. Each combined state contains one information particle and five entangled particles and can be written as |Ψ_i⟩ = |γ_i⟩_M ⊗ |ψ_5⟩_{12345}.
Step S4: Alice uses Ω_A to denote the sequence of the n (M, 2, 3) particle triples, where M represents the information particle to be signed; Ω_T denotes the sequence of the n (1, 4) particle pairs, and Ω_B denotes the sequence of the n (5) particles. The decoy particles are randomly inserted into Ω_T and Ω_B to form Ω'_T and Ω'_B, respectively. Alice sends Ω'_T to Trent and Ω'_B to Bob.
Step S5: Alice measures the particles in Ω_A with the three-particle von Neumann basis to obtain the outcome δ(Ω_A), generates the signature |S⟩ = E_{K_A}(|R_A⟩, δ(Ω_A)), and sends |S⟩ to Bob.
Verification Phase
Step V1: After confirming that Bob has received Ω'_B, Alice tells Bob the positions of the decoy particles, and Bob executes the unitary operation U^{−F(x_B,x_A)} on each decoy particle |µ_{A,B}⟩, that is, |µ_{B,A}⟩ = U^{−F(x_B,x_A)} |µ_{A,B}⟩. Then, Bob measures the decoy particles in the basis {|ϕ_l^(0)⟩ | l ∈ D}. If |µ_{B,A}⟩ ≠ |ϕ_0^(0)⟩, it implies that the identity authentication between Alice and Bob has failed or that the decoy particles have been eavesdropped on. Finally, Bob calculates the error rate from the measurement outcomes of the decoy particles. If the error rate is below the previously agreed threshold, they proceed to the next step; otherwise, the execution of the protocol is aborted. After Bob passes the eavesdropping detection and identity authentication on Ω'_B, the decoy particles are removed and Ω_B is restored. Similarly, after confirming that Trent has received Ω'_T, Alice tells Trent the positions of the decoy particles, and Trent executes the unitary operation U^{−F(x_T,x_A)} on each decoy particle |µ_{A,T}⟩, that is, |µ_{T,A}⟩ = U^{−F(x_T,x_A)} |µ_{A,T}⟩. Then Trent measures the decoy particles in the basis {|ϕ_l^(0)⟩ | l ∈ D}. If |µ_{T,A}⟩ ≠ |ϕ_0^(0)⟩, it indicates that the identity authentication between Alice and Trent has failed or that the particles have been eavesdropped on. Finally, Trent calculates the error rate from the measurement outcomes of the decoy particles. If the error rate is below the previously agreed threshold, they proceed to the next step; otherwise, they abandon the protocol. After Trent performs the eavesdropping detection and identity authentication on Ω'_T, the decoy particles are removed and Ω_T is restored.
Step V2: After Bob receives |S⟩, which was sent by Alice, he encrypts |S⟩ and |Γ⟩ with the secret key K_B to obtain Y_B = E_{K_B}(|S⟩, |Γ⟩). Bob sends Y_B to Trent via a quantum channel.
Step V3: After receiving Y_B = E_{K_B}(|S⟩, |Γ⟩), Trent decrypts it using the secret key K_B to obtain |S⟩ and |Γ⟩, and decrypts |S⟩ using the secret key K_A to obtain |R_A⟩ and δ(Ω_A). In the meantime, Trent measures Ω_T in the Bell basis {|φ+⟩, |φ−⟩, |ψ+⟩, |ψ−⟩} to obtain the measurement outcome δ(Ω_T). Trent then uses the secret key K_A to transform the information qubit string |Γ⟩ into |R'_A⟩ and compares |R'_A⟩ with |R_A⟩. If |R'_A⟩ = |R_A⟩, Trent sets the initial check parameter θ = 1; otherwise, he sets θ = 0. Note that this step and the subsequent comparison of quantum states can be found in [14,42]. To ensure the integrity of the signature, Trent selects an appropriate hash function H(·) and calculates H(|S⟩).
Step V4: Trent encrypts |S⟩, H(|S⟩), δ(Ω_A), δ(Ω_T), and θ with the secret key K_B to obtain Y_TB = E_{K_B}(|S⟩, H(|S⟩), δ(Ω_A), δ(Ω_T), θ) and sends Y_TB to Bob.
Step V5: Bob decrypts Y_TB to obtain |S⟩, H(|S⟩), δ(Ω_A), δ(Ω_T), and θ. If θ = 0, Bob concludes that the signature was forged; he rejects the signature and exits the verification process. Otherwise, Bob continues with the next verification step.
Step V6: According to the values of δ(Ω_A) and δ(Ω_T), Bob chooses the corresponding unitary operator U(5) from Table 4. Bob performs the unitary operation U(5) on the particles in the sequence Ω_B and measures them to obtain the quantum state |Γ'⟩. Notice that |Γ'⟩ is the result of executing the controlled quantum teleportation. He then compares it with |Γ⟩. If |Γ'⟩ ≠ |Γ⟩, Bob considers the signature invalid and rejects it. If |Γ'⟩ = |Γ⟩, Bob calculates H'(|S⟩) with the same hash function and compares H'(|S⟩) with H(|S⟩). If H'(|S⟩) = H(|S⟩), Bob accepts |S⟩ as the signature of |Γ⟩ from Alice; otherwise, the signature is rejected.
The schematic diagram of the main steps of the arbitrated quantum signature scheme is shown in Figure 3.
Verifiability Analysis
We can prove that, in this scheme, identity authentication and eavesdropping detection can be conducted between Alice and Bob as well as between Alice and Trent according to the measurement outcomes of the decoy particles. An example for the proposed verifiable arbitrated quantum signature scheme can be seen in Appendix A.
In steps I3 and I4, according to Alice's share polynomial f_A(y) = F(x_A, y) and Bob's public identity information x_B, Alice calculates F(x_A, x_B) and creates the decoy particles |µ_{A,B}⟩ = U^{F(x_A,x_B)} |ϕ_0^(0)⟩. According to Bob's share polynomial f_B(y) = F(x_B, y) and Alice's public identity information x_A, Bob calculates F(x_B, x_A). In step V1, after Bob receives |µ_{A,B}⟩, he performs the unitary operation U^{−F(x_B,x_A)}, obtaining |µ_{B,A}⟩ = U^{−F(x_B,x_A)} U^{F(x_A,x_B)} |ϕ_0^(0)⟩. According to the properties of symmetric binary polynomials, F(x_A, x_B) = F(x_B, x_A), and therefore |µ_{B,A}⟩ = |ϕ_0^(0)⟩. Without external eavesdropping and without cheating on either side, Bob's measurement outcome on the decoy particles should be |ϕ_0^(0)⟩; otherwise, it can be concluded that identity cheating or external eavesdropping has occurred. Therefore, Alice and Bob can verify whether identity cheating has occurred according to the measurement outcomes of the decoy particles. Similarly, identity verification and eavesdropping detection can be conducted between Alice and Trent according to the measurement outcomes of the decoy particles.
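Putting the pieces together, the following numerical check (with hypothetical identity values) verifies the round trip described above: Bob's inverse operation U^{−F(x_B,x_A)} returns the decoy to |ϕ_0^(0)⟩ exactly when his function value matches Alice's, while a mismatched value leaves a state orthogonal to |ϕ_0^(0)⟩ and is therefore caught by the measurement.

```python
import numpy as np

q = 7
omega = np.exp(2j * np.pi / q)
phi0 = np.ones(q, dtype=complex) / np.sqrt(q)        # decoy reference state |phi_0^(0)>

def U_pow(x):
    """U^x = diag(omega^(x*n)); works for negative exponents as well."""
    return np.diag(omega ** (x * np.arange(q)))

# Hypothetical shared values; in the scheme both come from the symmetric polynomial.
F_AB = 5        # Alice's F(x_A, x_B)
F_BA = 5        # Bob's  F(x_B, x_A), equal by symmetry when both parties are honest

decoy = U_pow(F_AB) @ phi0                           # |mu_{A,B}> prepared by Alice
checked = U_pow(-F_BA) @ decoy                       # |mu_{B,A}> after Bob's inverse operation
assert np.isclose(abs(np.vdot(phi0, checked)), 1.0)  # measurement yields |phi_0^(0)>

# An impostor using a wrong identity value produces a state orthogonal to |phi_0^(0)>.
wrong = U_pow(-(F_BA + 2)) @ decoy
print("honest overlap:", abs(np.vdot(phi0, checked)),
      " impostor overlap:", abs(np.vdot(phi0, wrong)))
```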
Safety Analysis
A secure quantum signature scheme should possess the properties of unforgeability and undeniability. In other words, it should meet the following requirements: (1) the signature cannot be forged by an attacker (including the external adversary Eve and the malicious receiver Bob); (2) the signatory Alice cannot disavow the message and the signature she sent, and the receiver Bob cannot disavow having received the signature; (3) disputes can be arbitrated if the receiver Bob admits having received the signature but disavows its integrity.
Impossibility of Forgery
If the external attacker Eve tries to forge Alice's signature |S⟩ for her own benefit, she would need to know the key K_A. However, due to the unconditional security of quantum key distribution [39,40], this is not possible. In addition, the quantum one-time pad protocol [41] is used to improve the security. Therefore, forgery by Eve is impossible.
If the malicious receiver Bob tries to forge Alice's signature |S⟩ = E_{K_A}(|R_A⟩, δ(Ω_A)) for his own benefit, he must also know Alice's secret key K_A. However, for the same reason, he cannot obtain any information about the key K_A. Thus, Bob cannot obtain the correct |R_A⟩; consequently, the initial check parameter θ used in the verification phase will not be correct, and the arbitrator Trent will discover the forgery. Even in the worse case that the key K_A is exposed to Eve, she still cannot forge the signature, because she cannot create the appropriate |R_A⟩ and δ(Ω_A) to associate with the new message. Bob can use the correlations of the Bell states to detect this kind of forged file, and the verification |R'_A⟩ = |R_A⟩ cannot pass without the correct |R_A⟩. Only if Bob knew the secret key K_A would such a forgery be unavoidable.
We can prove that Eve, an external attacker, cannot entangle a decoy particle or an information particle with an auxiliary particle to steal secret information and forge a signature. See Appendix B for details.
Impossibility of Disavowal by the Signatory and the Verifier
A secure quantum signature scheme should have the undeniability property. In other words, once the quantum signature has been verified as valid, the signatory cannot disavow the fact that the quantum signature was generated by her, and the receiver of the signature cannot disavow the fact that he has received it.
Impossibility of Disavowal by the Signatory Alice
Suppose Alice tries to disavow the signature |S⟩ that she has signed. As shown in Figure 4, after receiving the signature |S⟩, Bob cannot decrypt it without the key K_A. He can only encrypt |S⟩ and |Γ⟩ to obtain Y_B and send Y_B to Trent. After receiving Y_B, the arbitrator Trent decrypts Y_B = E_{K_B}(|S⟩, |Γ⟩) and |S⟩ = E_{K_A}(|R_A⟩, δ(Ω_A)) with K_B and K_A. Since the signature |S⟩ = E_{K_A}(|R_A⟩, δ(Ω_A)) involves the key K_A shared only by Alice and Trent, Trent can accurately confirm that the signature |S⟩ was signed by Alice. Whether |S⟩ is the signature of the message |Γ⟩ is determined by the initial check parameter θ calculated by the arbitrator Trent. Because |R_A⟩ = M_{K_A}(|Γ⟩) and |R'_A⟩ = M_{K_A}(|Γ⟩), if |R'_A⟩ = |R_A⟩, namely θ = 1, then the signature |S⟩ was indeed signed by Alice for the message |Γ⟩.
Impossibility of Disavowal by the Verifier Bob
Similarly, as long as Trent receives the Y_B sent by Bob, since Y_B = E_{K_B}(|S⟩, |Γ⟩) involves the key K_B shared only by Bob and Trent, Trent can confirm that Bob received the signature and cannot alter it; that is, Bob cannot disavow the fact that he received the signature. If Alice changes the signature |S⟩ to |S'⟩, her behavior will be discovered when Bob calculates the hash value H'(|S'⟩) and compares it with H(|S⟩). If Bob admits receiving the signature but disavows its integrity, the dispute can be arbitrated according to the hash value H(|S⟩) of |S⟩.
In this scheme, the eavesdropping detection also functions as identity authentication, which strengthens the undeniability for both Alice and Bob. In conclusion, our verifiable arbitrated quantum signature scheme satisfies the undeniability requirement.
Conclusions
In this paper, we proposed a verifiable arbitrated quantum signature scheme based on the five-qubit entangled state. The proposed scheme uses mutually unbiased bases particles as decoy particles and performs unitary operations on these decoy particles using the function values of a symmetric binary polynomial, which enables not only eavesdropping detection but also identity authentication among the participants.
Due to the unconditional security of quantum key distribution and the quantum one-time pad, the external attacker Eve cannot know Alice's key K_A, so she cannot forge Alice's signature |S⟩ for her own benefit. For the same reason, Bob cannot forge Alice's signature |S⟩ either. To prevent Alice's disavowal, when Trent receives Alice's signature |S⟩ he calculates the hash value H(|S⟩) of the signature to ensure its integrity. After Trent receives Y_B and decrypts Y_B and |S⟩ = E_{K_A}(|R_A⟩, δ(Ω_A)), the initial check parameter θ confirms that |S⟩ was jointly generated from |Γ⟩ and K_A, which proves that Alice did not cheat. At this point, since Trent has no information on the sequence Ω_B, he cannot forge a new signature. After Bob receives Y_TB = E_{K_B}(|S⟩, H(|S⟩), δ(Ω_A), δ(Ω_T), θ) and decrypts it, he holds δ(Ω_A), δ(Ω_T) and Ω_B, so he can use controlled quantum teleportation to reconstruct the information qubit string and judge whether to accept the quantum signature |S⟩ signed by Alice.
Different from signature schemes in classical cryptography, the security of our scheme is guaranteed by the quantum one-time pad [41] and quantum key distribution [39,40]; therefore, it is unconditionally secure. The five-qubit entangled state plays a key role in quantum information processing tasks, and five is the threshold number of qubits required for quantum error correction [43]. The principle of five-photon entanglement and open-destination teleportation was demonstrated in [44], showing that von Neumann measurement, Bell measurement, and single-particle measurement are all feasible under current technical and experimental conditions, so the scheme has good practical value. Compared with existing arbitrated quantum signature schemes [10,13,14,17,27], our scheme has high stability and can prevent disavowal of the integrity of the signature |S⟩. However, due to the large number of qubits used, the scheme also suffers from relatively low quantum efficiency.
Table A1. The 6-particle qubit string composed of information states and 5-particle entangled states.
Appendix A.3. Verification Phase
Step V1: After confirming that Bob received Ω'_B, Alice tells Bob the positions of the decoy particles, and then Bob executes the unitary operation U^{−F(x_B,x_A)} = U^{−16} on the decoy particle |µ_{A,B}⟩, that is, |µ_{B,A}⟩ = U^{−16} |µ_{A,B}⟩. If |µ_{B,A}⟩ ≠ |ϕ_0^(0)⟩, it implies that the identity authentication between Alice and Bob has failed or that the particles have been eavesdropped on. Finally, Bob calculates the error rate from the measurement outcomes of the decoy particles. If the error rate is below the previously agreed threshold, they perform the next step; otherwise, the execution of the protocol is aborted. After Bob passes the eavesdropping detection and identity authentication on Ω'_B, the decoy particles are removed and Ω_B is recovered. Similarly, after confirming that Trent received Ω'_T, Alice tells Trent the positions of the decoy particles, and then Trent executes the unitary operation U^{−F(x_T,x_A)} = U^{−5} on the decoy particle |µ_{A,T}⟩. If |µ_{T,A}⟩ ≠ |ϕ_0^(0)⟩, it implies that the identity authentication between Alice and Trent has failed or that the particles have been eavesdropped on. Finally, Trent calculates the error rate from the measurement outcomes of the decoy particles. If the error rate is below the previously agreed threshold, they perform the next step; otherwise, they abandon the agreement. After Trent performs the eavesdropping detection and identity authentication on Ω'_T, the decoy particles are removed and Ω_T is restored.
Step V2: After Bob receives |S⟩, which was sent by Alice, he encrypts |S⟩ and |Γ⟩ with the secret key K_B to obtain Y_B = E_{K_B}(|S⟩, |Γ⟩). Bob sends Y_B to Trent via a quantum channel.
Step V5: Bob decrypts Y_TB to obtain |S⟩, H(|S⟩), δ(Ω_A), δ(Ω_T), and θ. If θ = 0, Bob concludes that the signature was forged; he rejects the signature and exits the verification process. Otherwise, Bob continues to carry out the next verification step.
Step V6: According to the values of δ(Ω_A) and δ(Ω_T), Bob chooses the corresponding unitary operator U(5) and completes the remaining verification as in Step V6 of Section 3.
Appendix B
We can prove that the external attacker Eve cannot entangle a decoy particle with an auxiliary particle to steal secret information and forge a signature.
Suppose that Eve prepares an auxiliary quantum state |E⟩ and executes a unitary operation U_E that entangles the auxiliary quantum state with the transmitted particles, intending to steal secret information by measuring the auxiliary particles. Consider the decoy particle states |ϕ_l^(0)⟩ = (1/√q) Σ_{k=0}^{q−1} ω^{kl} |k⟩ that are subject to this attack. According to Lemma 1, if the attack is to pass the eavesdropping detection, the auxiliary states obtained by executing the unitary operation U_E must be the same for all decoy states.
This means that, no matter what quantum states are adopted, Eve can only obtain the same information from the auxiliary particles. Therefore, Eve fails to obtain any signature messages by conducting this kind of attack.
|
2022-01-13T16:30:52.113Z
|
2022-01-01T00:00:00.000
|
{
"year": 2022,
"sha1": "42559e81f63fb1e0c88e88639055e12c05cb3d0a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/24/1/111/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8af099353f8be9dbb21d4937046fd4301498787",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
156863073
|
pes2o/s2orc
|
v3-fos-license
|
Organic Production Business Model and the Concept of Corporate Social Responsibility
The concept of corporate social responsibility (CSR), considering the competitiveness and sustainability issues of the contemporary globalizing world, is closely linked to organic production: a production system that applies holistic approaches to overall management and production activities and ensures a high degree of conformity to the sustainable use and preservation of resources. The paper examines organic production as a business model corresponding to the spheres of CSR regarding society, environment, human capital and work conditions, and knowledge and education. The focus is on sustainable business practices in organic production, assessed in the study from the point of view of management, environment, human resources, public relations and business environment, considered in five dimensions: economic, social, environmental, cultural and accountability.
Introduction
Contemporary global threats related to climate change, limited natural resources, challenges in the quality and distribution of human resources, crises in the financial and economic spheres, etc. require integrated approaches towards the managerial, social and ecological factors in the creation of long-term value in organizations and markets, based on ethical norms and responsible behavior. Corporate social responsibility is understood as the responsibility and accountability of organizations for the impacts of their decisions and activities on society and the environment, implemented under conditions of transparency and ethical behavior and contributing to sustainable development, human health and societal welfare. Being integrated into organizations at all levels and expressed through their internal and external interactions and relations, it considers the expectations of all parties and corresponds to the legislation concerning social and ecological issues and the relevant international norms in the implementation of socially responsible initiatives (Nikolova & Yordanova, 2014).
The question of corporate social responsibility (CSR) is of extreme importance because it provides terminological clarity and introduces the basic positions and limitations from the point of view of organizations and their roles in balancing the three pillars of sustainable development: economic growth, social development and environmental protection (Zahariev, 2013, 232). Corporate social responsibility, also called corporate conscience, citizenship, social performance or sustainable responsible business, is a form of corporate self-regulation integrated into a business model. A CSR policy functions as a built-in mechanism of self-regulation by which the business controls and ensures its active compliance with the law, ethical standards and international norms (Pop, Dina, & Martin, 2011).
In the last few decades CSR has been developing from a narrow and often marginalized notion into a complex and multifaceted concept (Cochran, 2007). Corporate social responsibility (CSR) and corporate social irresponsibility (CSI) are identified as key issues which could lead to the development of a number of international policies encouraging CSR and discouraging CSI (Windsor, 2013). CSR is a policy that organizations voluntarily accept and implement in their business strategies in a way that shows their long-term engagement in the sustainable development of their business, society and the environment (Pop, Dina, & Martin, 2011). CSR is understood not as a one-off activity but as a sustainable process. The concept is developing into a successful business strategy for sustainable development, supporting organizations in increasing their influence on the market and also in establishing a positive public image (Bakardjieva, 2009).
Social responsibility is an element of the corporate culture, considering flexibility and security (flexicurity) and the management of diversity in a number of spheres and business practices (Fig. 1). This means that a socially responsible business is one paying more attention to the environment, employment and consumers, which allows the agricultural and food sector to be discussed as a green-oriented one that could be socially responsible through the implementation of the right policies regarding environmental protection, the creation of new "green" employment and the supply of goods and services (Viola, Ruggeri, & Rotondo, 2013). The examination of CSR in the current paper is connected to the main idea of organic production ("Good for nature, good for you", as stated on the web-site of the European Commission, Agriculture and rural development, Organic farming: http://ec.europa.eu/agriculture/organic/) and concerns the stakeholders in the sector: the state, business (producers, processors, distributors, traders) and scientific and educational organizations, on one hand, and consumers, non-governmental organizations and the media, on the other, taking into account that the relations between all of them are very complex.
Organic production, as a way of achieving sustainable growth, is considered in the three main aspects of sustainable development: economic sustainability (increasing competitiveness, strong market orientation and increased incomes); social sustainability (greater responsiveness to consumers' demands, improved food quality and safety, regional development); and ecological sustainability (a unified framework, effective implementation and control, standards for the protection of environment and health) (Arabska, 2014).
Furthermore, organic production is considered in two aspects of protection: protection of the environment, through management practices that do not have adverse effects on it, and protection of the health of consumers, through the provision of organic products (Argyropoulos et al., 2013). Recently, the cultural and accountability dimensions of sustainability were also added in the Best Practice Guideline for Agriculture and Value Chains (2013). Organic production is considered a business model applying sustainable managerial and business practices, which embodies CSR to the greatest extent (Fig. 2).
The study presented in the paper examines sustainable business practices in organic production, assessing five spheres of interdependence between organic production and corporate social responsibility: management, environment, human resources, public relations and business environment, all considered in the five sustainability dimensions: economic, social, environmental, cultural and accountability.
Methodology
Five categories of assessment criteria were chosen as described in the introduction (Fig. 2): management, environment, human resources, public relations and business environment, each comprising three sub-criteria assessed on a scale from 1 to 5 (1 – very low, 2 – low, 3 – satisfactory, 4 – good, 5 – very good) according to the conformity of the organic production business model to the corporate social responsibility issues interlinked with sustainable development. The value of each criterion is then calculated as an average of its sub-criteria's assessments; in this way the maximum value of each criterion could be 15 and the minimum 1. The assessments were made in a round-table discussion by three experts who gave their opinions according to the presented criteria and scale and according to their knowledge and experience in the Republic of Bulgaria.
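For transparency, the aggregation step can be written out as a short script. The scores below are purely illustrative placeholders (the actual expert assessments are reported only in Figs. 3-8); the sub-criterion names follow the Results section, each sub-criterion is averaged over the three experts, and the spheres are then ranked by the mean of their sub-criteria, as in Fig. 8.

```python
# Hypothetical scores only: three experts rate each sub-criterion from 1 to 5.
from statistics import mean

# scores[sphere][sub-criterion] = [expert1, expert2, expert3]
scores = {
    "Management":           {"economic impact": [4, 4, 5], "social impact": [3, 3, 3], "ecological impact": [5, 5, 4]},
    "Environment":          {"organic production practices": [5, 5, 5], "training programs": [4, 5, 5], "monitoring and control": [4, 4, 4]},
    "Human resources":      {"recruitment policy": [4, 4, 4], "decrease in seasonal employment": [4, 3, 4], "internships": [4, 4, 4]},
    "Public relations":     {"PR program": [4, 4, 5], "events in organic farms": [4, 4, 4], "networking": [3, 3, 3]},
    "Business environment": {"innovation transfer": [4, 3, 4], "ethical rules and measures": [4, 4, 3], "providers/partners policy": [3, 4, 4]},
}

# Average each sub-criterion over experts, then average the sub-criteria per sphere.
sphere_means = {
    sphere: mean(mean(expert_scores) for expert_scores in subs.values())
    for sphere, subs in scores.items()
}

# Rank the spheres by their mean assessment (cf. Fig. 8).
for sphere, value in sorted(sphere_means.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{sphere:22s} {value:.2f}")
```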
Results
Assessments of the sub-criteria of "Management" show the highest values for the ecological impact of organic agriculture and the lowest for the social impact (Fig. 3). Economic impact is assessed as high as well. The results show that the experts are not convinced of the social impact of the sector, while the environmental dimension stands out as the contribution most often highlighted in documents and studies concerning organic production.
In the sphere of environment, the highest values are given to organic production practices and to training programs in the field of environmental protection and healthy living, including those organized in organic farms for visitors of all ages (Fig. 4). Trust in the monitoring and control system of organic production is also expressed.
The sub-criteria discussed for "Human resources" show good, almost identical results for recruitment policy, the decrease in seasonal employment, and internships (Fig. 5).
Among the public relations sub-criteria, the PR program is assessed highest, followed by the organization of events in organic farms (Fig. 6). The merely satisfactory assessment of participation in associations, networks, events, etc. reflects the situation in the sector in Bulgaria and the low level of networking activity, which influenced the experts' assessments. In the sphere of business environment the situation is much the same, presenting on the whole no more than good results for innovation transfer, the system of ethical rules and measures, and the policy for the selection of providers and partners (Fig. 7).
Turning back to the five examined spheres and calculating the average assessment values (Fig. 8), environment is determined as the most important sphere in the link between organic production and CSR, followed by management, human resources, public relations and business environment. The main results of the current investigation, although based on the specific situation of organic sector development in the Republic of Bulgaria, show that the business model of organic production fits the key point of the CSR concept very well, as a sector implementing integrated and holistic management of all processes in the whole production cycle, i.e. having a "built-in and self-regulated mechanism" working according to the requirements of a strong legislative base and standards, expressed through certification procedures that assure consumers' and society's trust in organic products.
Organic production, as a sector with great potential to make significant contributions to sustainable development (and thus set as a priority in a number of strategic and program documents at different levels, especially by the European Union and the international community), leads to various positive influences on the environment, society and the economy. In other words, the concerns of planet, people and profit are basic to the essence of the production methods and the management practices, and the sector's development has been marked by progress in recent years and great public attention, namely because of its high ethics and strong rules within balanced approaches to sustainable development, underpinned by corporate social responsibility.
Conclusion
As a result of the main discussion, a definition of the organic production business model is given as follows: the organic production business model is a holistic production and managerial method in sustainable agriculture that ensures overall management and answers a number of requirements regarding environmental protection, food quality and safety, and human health; it responsibly uses, protects and restores natural resources, supports the preservation of the components of the environment (soil, water, biodiversity, landscape), stabilizes ecosystems and creates opportunities for positive economic and social growth. Organic production is a way of investing in production processes that create employment in environmentally friendly activities, providing information and increased social consciousness in connection with the principles of sustainable development.
Organic production is a green oriented sector applying sustainable business practices and implying an overall management of processes and integrated approaches -the monitoring and control are not focused on the final products but on the system functioning. It is a business model implying the principles of corporate social responsibility in five main directions: management, environment, human resources, public relations and business environment, which are influenced by the specific development of countries and regions underlining the importance of relevant state policies encouraging CSR applied in different sectors and initiatives, as well as the broad community participation.
|
2019-05-18T13:06:56.191Z
|
2016-06-01T00:00:00.000
|
{
"year": 2016,
"sha1": "4e18c6fbf545f05c0eda125bdfbe48a1e02f8f9b",
"oa_license": "CCBYNCSA",
"oa_url": "http://ojs.spiruharet.ro/index.php/jedep/article/download/137/pdf",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "baeee84b1cca33a79ac5f2e5b910e91af944c7a4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
104293940
|
pes2o/s2orc
|
v3-fos-license
|
The impact of thyroid hormone dysfunction on ischemic heart disease
Thyroid hormones have a central role in cardiovascular homeostasis. In myocardium, these hormones stimulate both diastolic myocardial relaxation and systolic myocardial contraction, have a pro-angiogenic effect and an important role in extracellular matrix maintenance. Thyroid hormones modulate cardiac mitochondrial function. Dysfunction of thyroid axis impairs myocardial bioenergetic status. Both overt and subclinical hypothyroidism are associated with a higher incidence of coronary events and an increased risk of heart failure progression. Endothelial function is also impaired in hypothyroid state, with decreased nitric oxide-mediated vascular relaxation. In heart disease, particularly in ischemic heart disease, abnormalities in thyroid hormone levels are common and are an important factor to be considered. In fact, low thyroid hormone levels should be interpreted as a cardiovascular risk factor. Regarding ischemic heart disease, during the late post-myocardial infarction period, thyroid hormones modulate left ventricular structure, function and geometry. Dysfunction of thyroid axis might even be more prevalent in the referred condition since there is an upregulation of type 3 deiodinase in myocardium, producing a state of local cardiac hypothyroidism. In this focused review, we summarize the central pathophysiological and clinical links between altered thyroid function and ischemic heart disease. Finally, we highlight the potential benefits of thyroid hormone supplementation as a therapeutic target in ischemic heart disease.
Introduction
Thyroid hormones (THs) play fundamental roles in cardiovascular homeostasis (1). Given that cardiovascular diseases are among the most prevalent illnesses worldwide, causing substantial mortality, morbidity and hospitalization, an understanding of the role of THs in the cardiovascular system is imperative (1). The main goal of this manuscript is to review the central pathophysiological and clinical links between altered thyroid function and cardiovascular diseases, particularly in ischemic heart disease.
Most T3 is generated peripherally from T4 conversion by deiodinases (Fig. 1). These enzymes are also responsible for converting THs into inactive isomers such as reverse T3 (rT3) and 3,3-diiodothyronine (T2). There are three deiodinases with different functions: (1) type 1 deiodinase (D1) is localized in the plasma membrane and is expressed in the liver, thyroid and kidney; this enzyme is mainly responsible for the peripheral conversion of T4 into T3; (2) type 2 deiodinase (D2) seems to be more efficient than D1; the major role of this enzyme is to regulate the intracellular concentration of T3, converting T4 into T3, especially in the brain, pituitary gland and skeletal muscle; and (3) type 3 deiodinase (D3) irreversibly inactivates THs generating T2 or rT3; thus, by lowering the levels of these hormones, D3 is considered an important regulator of the thyroid axis (4). Furthermore, THs are mainly active when not bound to transport proteins. Therefore, variations on binding protein levels can change the peripheral activity of THs (5). In order to perform their roles, THs must bind to thyroid hormone receptors. These receptors are intracellular DNA-binding proteins that bind as hormonereceptor complexes to thyroid hormone response elements (TREs) in the regulatory regions of target genes (6). Consequently, THs modulate essential functions in the growth, development and metabolism of a variety of tissues. There are different subtypes of receptors -TRα1, TRα2, TRβ1 and TRβ2 -which have different functions. TRα1 is the subtype most expressed in the myocardium, regulating important genes related to cell growth, contractile function and electrical activity (6,7). In fact, inhibition of TRα1 was shown to markedly depress postischemic cardiac function in mice (8). Although TRβ1 is also expressed in the myocardium, it is expressed at a lower level (7). Though TRα2 does not bind T3, it is able to bind TRE, thereby exerting a substantial negative effect on gene expression (7).
The effects of thyroid hormones on the cardiovascular system
THs exert a significant impact on the cardiovascular system via both genomic and non-genomic mechanisms. The major effects of THs on the myocardium are mediated by T3 (Table 1), which stimulates nearly all of the transporters and ion channels involved in calcium myocardial fluxes, upregulating sarcoplasmic reticulum calcium-activated ATPase 2 (SERCA2) and Na + /K + -ATPase and downregulating phospholamban (9). These changes enhance calcium uptake and release by the sarcoplasmatic reticulum, stimulating both diastolic myocardial relaxation and systolic myocardial contraction (9). The contractile apparatus of the cardiac myocyte has two subtypes of myosin heavy chains (MHCs): α-MHC and β-MHC -fast and slow myosin, respectively. T3 can upregulate α-MHC and downregulate β-MHC (10). Therefore, the myocardial hypothyroid state induces a so-called fetal gene reprogramming which increases the expression of β-MHC and decreases the expression of α-MHC and SERCA2 (11). This phenotype presents substantial implications on myocardial function and subsequent progression to heart failure (12). The fetal pattern is characterized by a preference for glucose over fatty acids as a substrate. Although such changes may lower oxygen demands, the yield of ATP per substrate also decreases, resulting in metabolic inefficiencies that decrease metabolic reserves -perhaps leading to cardiac pump dysfunction (12). Nevertheless, this phenotype seems to be reversible with the appropriate therapy (13). T3 is able to modify the activity of sodium, potassium and calcium channels, altering a variety of intracellular pathways in cardiac and vascular smooth-muscle cells. Consequently, THs can increase resting heart rate, cardiac contractility and venous tone almost immediately, increasing cardiac preload and cardiac output (14). T3 increases myocardial sensitivity to the adrenergic system by increasing the number of adrenergic membrane receptors. Additionally, T3 decreases systemic vascular resistance through vascular smooth-muscle relaxation, which in turn decreases renal perfusion and leads to renin-angiotensin-aldosterone axis activation (15,16). The relaxation of vascular smooth muscle may lead to an increase in cardiac oxygenation. T3 increases metabolic and oxygen consumption, thereby enhancing the release of vasodilatory mediators (15). THs exhibited a dosedependent direct effect in rat coronary arteries within a few seconds, indicating a non-genomic mechanism (17). Other effects of T3 that depend at least partially on nongenomic mechanisms include an increase in resting heart rate and left ventricular contractility (17). Non-genomic effects are poorly understood but are an important accessory mechanism in THs actions. These effects are receptor independent and regulate ion transporter activity in the plasma membrane (10). The non-genomic actions of THs in the myocardium involve different signaling cascades, such as cyclic adenosine monophosphate and protein kinases (18). A structural protein of the plasma membrane has been identified as a cell-surface receptor for THs. This receptor is an integrin αVβ3 that interacts with a variety of extracellular matrix proteins and induces serine-threonine kinase pathway (19). THs induce a rapid vascular relaxation that is mediated by NO produced by vascular smooth-muscle and endothelial cells. This is associated with the induction of phosphatidylinositol 3-kinase/protein (PI3K/Akt) signaling pathway. 
Oxidized low-density lipoprotein (LDL) may blunt the nongenomic action of THs and impair the NO production in endothelial cells (20). Furthermore, THs have a proangiogenic effect, stimulating arteriolar growth in the normal heart as well as after myocardial infarction (21). This proangiogenic effect is mediated by increased transcription of proangiogenic genes and expression of hypoxic inducible factor-1α (HIF-1α) and mediated by activation of integrin αVβ3 (19,22). T3-induced angiogenesis has been observed in several experimental rat models of ischemia, hypertension and diabetic cardiomyopathy (23).
The therapeutic targets of cardioprotection should not be limited to cardiomyocytes, but should also include other cells such as fibroblasts and endothelial cells that play important roles in preserving myocardial function. THs can modulate metalloproteinases (MMP), increasing MMP 1 and 2 as well as collagen gene expression; consequently, they may have an important impact on the extracellular matrix of the heart (24). Tissue inhibitors of MMP are downregulated by THs (24). The antifibrotic effect of T3 is suggested by evidence that early T3 replacement after ischemia/reperfusion in rats is associated with a reduction in scar size (25). Moreover, hypothyroid status is marked by an increased susceptibility to collagen deposition and cardiac fibrosis (26).
Thyroid hormone as a cardiac mitochondrial regulator
THs modulate cardiac mitochondrial function by increasing mitochondrial mass, respiration, oxidative phosphorylation, enzyme activity and mitochondrial protein synthesis such as that of cytochrome as well as phospholipid and mtDNA content (27). Changes in the levels of circulating THs may impair myocardial bioenergetic status with consequences on cardiac function (28,29). Mitochondrial dysfunction plays a central role in cardiac dysfunction and in the occurrence and progression of heart failure (30). The regulation of mitochondrial function and biogenesis by THs is an emerging mechanism in the therapeutics of cardioprotection. THs promote the upregulation of proteins that are functionally relevant to the rescue of mitochondrial function. Consequently, these hormones may reduce cardiomyocyte loss in the peri-infarct zone. Reversal of the post-ischemic decline of TH levels has been shown to downregulate tumor suppressor protein (p53) possibly via the upregulation of miRNA 30a (31). Additionally, premature activation of the c-Jun N-terminal kinase (JNK) cascade occurs minutes after myocardial infarction. JNK protein expression is associated with apoptosis in the infarction border zone, cardiac dilatation and pathological remodeling (32). Given that p53 can regulate the JNK pathway through a positive feedback loop, THs might reduce JNK levels through a p53-dependent mechanism (31). T3 treatment (14 ng/g body weight, dose given daily) for 3 days after acute myocardial infarction in rats reduced myocyte apoptosis in the border area, possibly via Akt signaling (33).
T3 administration in rats significantly increases the expression of transcription factors implicated in mitochondrial biogenesis, including the nuclear regulatory factors NRF-1 and NRF-2, which mediate the expression of HIF-1α, mitochondrial transcription factor A (mt-TFA) and peroxisome proliferator-activated receptor coactivator-1α (PPARc-1α), particularly in the peri-infarct zone (34). In fact, the overexpression of mt-TFA and HIF-1α can limit left ventricular remodeling and preserve cardiac performance after myocardial infarction (35,36,37). THs also enhance transcriptional coactivators such as peroxisome proliferator-activated receptor gamma coactivator 1-α (PGC-1α), which modulates the regulatory function of many factors (38). Additionally, PGC-1α also induces D2, which generates local THs and further enhances mitochondrial function (38). Downregulation of cytochrome c oxidase I (CcO-I), a key enzyme of the mitochondrial respiratory chain, occurs in the border zone of infarcted tissue. THs increase mitochondrial proliferation, mitochondrial protein synthesis and cytochrome content, and T3 treatment in rats attenuates the downregulation of CcO-I (27,39). Ischemia/reperfusion injury (IRI) damages the mitochondrial outer membrane and, together with the activation of proapoptotic proteins, enhances mitochondrial outer membrane permeabilization, releasing cytochrome c, activating caspases and increasing apoptosis (40). THs increase the expression of the antiapoptotic protein Bcl-2, reducing the Bax:Bcl-2 ratio and improving cardiac function in infarcted rats (41). Furthermore, THs can prevent oxidative damage in the heart by limiting ROS levels (42). T3 also protects cardiomyocytes against oxidative stress-mediated cell death by opening the protective mitochondrial ATP-dependent K+ channel (mitoKATP) in rescued mitochondria (27). T3 replacement after myocardial infarction might offer a highly successful therapeutic strategy to improve myocardial mitochondrial function and cardiac cell metabolism, limiting infarct-scar size and preventing heart failure (34).
Cardiovascular effects of hyperthyroidism
Hyperthyroidism is characterized biochemically by low TSH levels and elevated free T4, free T3 or both. The prevalence of overt hyperthyroidism in the general population is 0.5% (43). Although the effects of THs might be beneficial for cardiovascular function, when exacerbated as they are in hyperthyroidism, they can be detrimental. THs increase resting heart rate, blood volume, myocardial contractility and ejection fraction ( Table 2). The most common cardiovascular symptom in patients with hyperthyroidism is palpitation, with 20% exhibiting atrial fibrillation (44). The Rotterdam study, a crosssectional analysis including 1149 women, showed that elderly women with high/normal thyroid function (low range of normal TSH levels) have an increased risk for atrial fibrillation (45). Systolic function is consistently increased -even at rest -as is the rate of ventricular relaxation and filling (46). At the same time, pulse pressure is wider because of the increased systolic arterial pressure and decreased diastolic arterial pressure due to vasodilatation ( Table 2) (47). In patients with hyperthyroidism, cardiac output is notably elevated. Overt hyperthyroidism is associated with increased cardiovascular mortality and 16% increased risk of major cardiovascular events (48). High-output heart failure might be induced by hyperthyroidism even in patients without heart disease (48). Hyperthyroidism is also associated with pulmonary hypertension -albeit less frequently -and atrioventricular valve regurgitation, mainly of the tricuspid valve (49). Heart failure is the main cause of increased cardiovascular mortality in both overt hyperthyroidism and subclinical hyperthyroidism (50).
Subclinical hyperthyroidism is defined by normal free T4 and free T3 levels with a reduced TSH level. The prevalence of exogenous subclinical hyperthyroidism in patients prescribed with levothyroxine is 15-20% (51). Subclinical hyperthyroidism has been associated with increased all-cause mortality, coronary heart disease (CHD) events and mortality and atrial fibrillation (52,53). CHD mortality and atrial fibrillation are correlated with TSH levels, being higher with TSH under 0.10 mIU/L when compared with levels between 0.10 and 0.44 mIU/L (54). In a meta-analysis of 55 cohort studies with 1,898,314 patients, subclinical and overt hypothyroidism were associated with higher risks of cardiac mortality and allcause mortality (55).
Cardiovascular effects of hypothyroidism
Hypothyroidism is defined by high TSH levels with low THs levels. The prevalence of overt hypothyroidism in nonpregnant adults is 0.2-2.0% (43). Cardiac dysfunction observed in hypothyroidism is not solely explained by decreased inotropism and increased peripheral vascular resistance ( Table 2). The hypothyroid state results in lower heart rate and decreased myocardial contraction and relaxation, with prolonged systolic and early diastolic time intervals, culminating in advanced stages of heart failure (12). Hypothyroidism is also associated with diastolic hypertension and sinus bradycardia due to sinus node dysfunction (56,57). Diastolic dysfunction is one of the main cardiac anomalies found in patients with hypothyroidism, both at rest and with exertion. Left ventricular asynchrony is also present in hypothyroidism, which affects diastolic and systolic functions, exercise capacity, and quality of life, rendering potential heart failure patients more symptomatic and worsening their prognosis (58). CHD that occasionally coexists with hypothyroidism may be preexistent and aggravated by thyroid dysfunction. Although less common, pericarditis, pericardial effusion and cardiac tamponade are also potential manifestations of hypothyroidism (56,57).
Subclinical hypothyroidism (SCH) is defined as serum THs within reference range in the presence of elevated serum TSH levels. SCH can be classified as grade 1 (TSH >4.0 or 4.5, but <10 mIU/L) or grade 2 (TSH >10 mIU/L) (10). The prevalence of SCH in the population varies from 4 to 20%, occurring more frequently in adults older than 65 years of age (59). SCH is also associated with a higher risk of heart failure, morbidity and mortality (50,60). These patients are also at increased risk of atherosclerosis and coronary events (61). This increased risk of coronary events may be caused by increased cholesterol and homocysteine levels and decreased LDL receptors also seen in patients with overt hypothyroidism (50,61). Patients with SCH have decreased nitric oxide (NO)-mediated vascular relaxation, which contributes to an increase in vascular resistance and consequently to increased left ventricular afterload ( Fig. 2) (62). Several studies have shown that patients with hypothyroidism (both overt and subclinical) exhibit impaired endothelial function that improves with TH replacement therapy (61,63). In the Rotterdam Study mentioned above, patients with SCH had higher prevalence of aortic calcification and myocardial infarction, especially those who were positive for thyroid autoantibodies (64). Patients with SCH also have a decreased left ventricular global longitudinal strain, especially in the lateral wall and interventricular septum (65). Along with these left ventricular disturbances, neovascularization in cardiac tissue after an ischemic event may be inhibited in SCH. Consequently, this may accelerate cardiac pathological remodeling contributing to the development of heart failure (66). In addition to hypercholesterolemia, increased homocysteine, systemic vascular resistance and oxidative stress described above, SCH might also increase peripheral insulin resistance and activate prothrombotic pathways and hypercoagulability (Fig. 2) (67). SCH may constitute a potentially reversible cardiovascular risk factor, a scenario which should be taken into account as it is present in almost 12% of patients with acute myocardial infarction (10,67). The risk of cardiac events depends on the severity of hormonal dysfunction, tending to be higher when TSH is ≥7.0 mIU/L and even more evident when the value is ≥10 mIU/L (68).
Another subtype of TH dysfunction called nonthyroidal illness syndrome (NTIS), also known as euthyroid sick syndrome or low-T3 syndrome, is characterized by a rapid decline in plasma free TH levels (free T3 and free T4), with a marked elevation of rT3 in critically ill patients (69). The TSH level is usually within the reference range, except in severe cases of this syndrome. Local and systemic bioavailability of THs is regulated by aberrant expression of deiodinases, SNPs and novel regulators of deiodinase gene expression (70). Changes in deiodinase activity are more pronounced in conditions characterized by low tissue perfusion (71). The main mechanism seems to involve reduced activity of the deiodinases that convert T4 to T3 and an increase in deiodinase D3, which converts T4 to rT3 (72). D1 is the main pathway for rT3 clearance, and the increase in rT3 is partially explained by a decrease in D1 activity (73). Peeters et al. showed that liver D1 activity was downregulated in critically ill patients and that D3 activity in liver and skeletal muscle was induced. Skeletal muscle is an abundant tissue in humans, and it is likely that induction of D3 in this tissue also contributes to the pathophysiology of NTIS (71), much as hemangiomas increase D3 expression and lead to 'consumptive hypothyroidism' (74). Recent studies show that expression of D3 is increased in some pathological contexts in a cell-specific manner, namely cancer, cardiac hypertrophy, myocardial infarction, chronic inflammation and critical illness (75). In NTIS, the decrease in T3-dependent gene expression is independent of circulating T3 concentration, demonstrating that after an ischemic event there is potent and stable induction of D3 activity in cardiomyocytes, resulting in subsequent local cardiac hypothyroidism (10). This might be mediated by tissue catecholamines and inflammatory responses, via inflammatory cytokines such as tumor necrosis factor-α (TNF-α). In fact, D3 overexpression is a common mark of the inflammatory response and is one of the leading events in NTIS, often seen in chronic inflammation (70). Experimental evidence showed that infusion of TNF-α, IFN-α, IL-1 and IL-6 in humans results in a decrease in serum T3, T4 and TSH levels and a rise in rT3 (76,77,78). In patients with acute myocardial infarction, the rise in IL-6 is closely linked with a decrease in T3 (79). In fact, IL-6 inhibits 5′-deiodinase activity, thereby reducing the conversion of T4 to T3 (80).
Hypoxia also contributes to decreased T3 levels, as it induces hypoxia-inducible factor-1α (HIF-1α), which can activate D3 in the myocardium (81). In most conditions, NTIS seems to be an adaptive, compensatory and beneficial response, decreasing energy consumption in response to inflammation during various critical illnesses (82). Although T3 plays an important role in cardiac function, the impact of NTIS in patients with heart disease is unclear. Nevertheless, increased rT3 may be considered a predictor of both short- and long-term mortality in ischemic heart disease (10). NTIS is often associated with depressed myocardial function and is a strong predictor of mortality in patients with heart disease, in both acute and chronic conditions (69,83). Low T3 levels are also associated with dilated cardiomyopathy; indeed, 20-30% of patients with dilated cardiomyopathy have NTIS (11).
The impact of thyroid hormone dysfunction on myocardial ischemia
Myocardial ischemia is a major cause of mortality and morbidity worldwide (84). An understanding of the mechanisms of interaction between THs and their receptors is crucial to assess their impact in myocardial ischemia. TRα1 plays a key role during post-ischemic adaptation as it appears to present dual action and may be able to convert pathologic to physiologic growth depending on its ligand availability (85). In fact, TRα1 overexpression in the nucleus of cardiomyocytes in the absence of adequate THs as ligands may induce pathological hypertrophy and fetal phenotype, with predominant β-MHC expression. In contrast, higher levels of THs stimulate an α-MHC growth pattern, enhancing more physiological growth (86).
The precise prevalence of NTIS among patients with acute coronary syndrome has not been defined, but a prevalence of 5-35% has been reported in the literature (87). Several studies demonstrated a decrease in T3 and an increase in rT3 concentration in patients after an acute coronary event (88). Some factors may predict a more pronounced decline in T3 levels, such as worsening angina pectoris preceding acute myocardial infarction, known chronic heart failure or previous myocardial infarction and diabetes mellitus (89). Low T3 levels also induce oxidative stress and increase apoptotic rate, which may worsen ventricular dysfunction (90). Therefore, THs levels are an important factor modulating left ventricular structure, function and geometry during the late post-myocardial infarction period (91). Patients with ST-elevation myocardial infarction (STEMI) and alterations in thyroid function have almost a 3.5-fold increased risk of major adverse cardiac events, including cardiogenic shock and death, compared with patients with STEMI and no thyroid disorder (92). In fact, alterations in thyroid function seem to occur more frequently in STEMI than in NSTEMI (non-ST-elevation myocardial infarction), possibly because of poorer short-term prognosis and features of the occlusive coronary thrombus typical of STEMI (88,92,93).
Recent evidence indicates that circulating T3 levels are an independent determinant of the recovery of left ventricular ejection fraction 6 months after acute myocardial infarction in humans (94). Friberg et al. found a positive correlation between rT3 levels and 1-year mortality in patients with myocardial infarction, independent of other risk factors (95). In line with these results, a recent study with patients attending a cardiac rehabilitation program after an acute coronary syndrome also reported an association between lower T3 levels and all-cause mortality (96). In patients with myocardial injury, lower T3 levels have been correlated with increased serum levels of cardiac biomarkers such as troponin T and N-terminal pro-brain natriuretic peptide and with lower left ventricular ejection fraction (97). T3 levels may represent a predictor of the potential recovery of ventricular function (94).
One of the priorities in the treatment of myocardial ischemia is the reestablishment of coronary circulation. Early reperfusion has a great impact on short-term mortality after a myocardial ischemic event (98). Coronary revascularization by either coronary bypass surgery (CABG) or percutaneous coronary intervention (PCI) constitutes the primary option in the treatment of coronary artery disease. Despite its indisputable benefits, reperfusion after a myocardial ischemic event may contribute to adverse cardiac remodeling with possible evolution to heart failure. The pathophysiology of IRI is complex; however, recent evidence suggests that mitochondrial dysfunction may be one of the major mechanisms of IRI (99). The incidence of post-ischemic heart failure remains critical, increasing the risk of both cardiac and all-cause deaths (98). After reperfusion, extracellular washout of accumulated H + ions creates a large gradient that increases the influx of sodium via the Na + /H + exchanger. This stimulates the reverse action of the Na + /Ca 2+ exchanger pump, increasing oxidative stress (98). THs improve the balance of proapoptotic and pro-survival signaling pathways which may limit IRI (100). T3 enhances the expression of HIF-1α, limiting the mitochondrial opening of permeability transition pores and thereby protecting the cardiomyocyte from reperfusion injury (101). Serum THs levels after CABG are often decreased (102). In fact, NTIS is reported in 50-75% of patients after cardiac surgery and some authors consider this as a poor prognostic factor and a predictor of mortality (28,102). Pantos et al. were the first to observe that pretreatment with THs confers protection against IRI in isolated rat hearts in a pattern similar to ischemic preconditioning (1). The interest in the role of THs in cardioprotection is increasing. In fact, THs pretreatment may confer protection against subsequent IRI by inducing pharmacological preconditioning in cardiomyocytes, mainly by enhancing heat-shock protein 27 (HSP27) and heat-shock protein 70 (HSP70) and decreasing the activation of proapoptotic p38MAPK (1,102,103,104). Recent studies using TH replacement therapy in animal models with regional or global myocardial ischemia followed by revascularization and/or reperfusion showed improved reversal of myocardial dysfunction compared with the absence of TH replacement therapy (69).
Thyroid hormones and cardiovascular risk factors
Both overt hypothyroidism and SCH are recognized as cardiovascular risk factors (59). In fact, thyroid disorders are associated with dyslipidemia, insulin resistance, hypertension, inflammation, atherosclerosis and other conditions (48). Hyperlipidemia seen in hypothyroidism is probably due to reduced cholesterol clearance, attributable to reduced hepatic LDL receptors and a decrease in 7α-hydroxylase levels in the liver (10). Elevated serum levels of total and LDL cholesterol are typical of hypothyroidism (105). A recent meta-analysis found a statistically significant decrease of 9% in total cholesterol and 14% in LDL cholesterol in patients treated with levothyroxine (doses used vary from 67.5 μg/day to 85.5 μg/day) compared with the placebo group (59). While some studies show a decrease in triglyceride levels, other studies detected lower levels of apolipoprotein B-100 in the treated group (106). The role of THs in preventing cardiac ischemia has not been extensively investigated, mainly due to the possibility that THs may increase heart rate and cardiac work (107). However, the anti-ischemic and cardioprotective effects of THs suggest that supplementation may be more beneficial than harmful. Thus, the correction of overt and subclinical thyroid dysfunction may reduce the risk of future CHD and protect cardiomyocytes from future ischemic stress (107). TH replacement therapy may reverse atherosclerosis, lower peripheral vascular resistance and improve myocardial perfusion in patients with hypothyroidism (61). Treatment with levothyroxine in humans, mean dose 85.5 ± 4.3 μg/day, also has an impact on other predictors of cardiovascular events, such as carotid artery intima-media thickness (106).
Thyroid hormones and the prevention of heart failure after acute coronary events
In order to improve cardiovascular outcomes, cardioprotection represents the new purpose of therapeutic interventions to minimize infarct size and prevent the progression toward heart failure after an acute ischemic event (98). TH replacement therapy shortly after an infarction (or even when delayed) may improve cardiac function, mainly through TRα1 binding (90,107). THs preserve mitochondrial function and have antifibrotic and proangiogenic effects that are crucial to the prevention of heart failure (98). As described above, after an ischemic event, THs increase MHC-α and the SERCA/PLB ratio which may improve contractile indices and energy expenditure (107,108). The known effects of THs on MMP include decreased cardiac scar area and improved left ventricular relaxation (23). THs also reduce myocyte apoptosis in the myocardial infarction border area shortly after an ischemic event (23). Rajagopalan et al. observed an improved left ventricular ejection fraction without significant changes in heart rate in adult female rats treated with approximately 6 μg/kg/day of T3 after left coronary artery ligation (109). Moreover, atrial tachyarrhythmia was reduced by 88% in the T3-supplemented group (109). In an animal model of NTIS after an ischemic event, a subcutaneous infusion of 6 μg/kg/day of T3, a physiological or near-physiological dose, improved mitochondrial function, reducing cardiomyocyte loss in the peri-infarct zone (110). Another study in dogs with induced ischemia reported that plasma T3 levels decreased significantly during the ischemic period and continued to fall after reperfusion. The deterioration of left ventricular function was worse in untreated dogs compared with dogs treated with T3, in which hemodynamic function was maintained and improved to levels higher than those in the controls (111).
It is well recognized that chronic treatment with β-blockers improves left ventricular contractility and cardiac remodeling after myocardial infarction (101). A recent study by Zhang et al. aimed to compare T3 and metoprolol treatment in rats with myocardial infarction (101). The treatment with T3 (5 μg/kg/day) or metoprolol (100 mg/kg/day) was given in drinking water immediately after surgery for 8 weeks. Both treatments enhanced left ventricular contractility compared to the placebo and decreased the incidence of atrial tachyarrhythmia (101). Compared to the placebo, histological analysis indicated a significant reduction of 19% in the infarct area in the T3-treated group, which was similar to the group treated with metoprolol (101). Furthermore, the diameter of the left atrium was significantly enlarged after myocardial infarction. While this was ameliorated by T3, the metoprolol group showed results similar to the control group (101). The reversal of fetal gene expression induced by T3 treatment was not as complete upon treatment with metoprolol, suggesting additional improvement in physiological parameters with T3 treatment compared with metoprolol treatment (101).
Thyroid hormones: a future therapeutic option?
Given that the thyroid axis plays a key role in cardiovascular regulation, a thorough understanding of its functions may be useful in preventing CHD, reducing ischemic lesions and improving the outcomes of acute coronary events. Although it is known that THs levels in plasma decrease after an ischemic event, the physiological relevance of this remains uncertain and the benefits of THs treatment remain controversial. Regarding SCH, current guidelines recommend that it be treated during pregnancy and in symptomatic patients, according to individual factors (age, TSH levels, thyroid autoantibodies, the presence of atherosclerotic cardiovascular disease, heart failure and other associated risk factors) (112). A normalized thyroid axis may improve cardiovascular outcomes after an ischemic event. Thus, treatment with levothyroxine or T3 offers a potential therapeutic approach in coronary artery disease. The use of levothyroxine requires the preservation of peripheral deiodinase activity to convert T4 into the active hormone T3. Administration of T3 may be a better option in patients with impaired conversion, based on increased D3 activity and reduced D1 and D2 (28). Another therapeutic option not yet available for clinical practice is the genetic manipulation of deiodinases, increasing the local production of T3 or the expression of thyroid receptors and enhancing hormonal signaling (113). The first option, involving THs analogues, has been tested in some clinical trials, but the genetic approaches remain in the experimental animal stage (11). Recent observations highlighting the cardioprotective effects of the TRα1 receptor suggest that the development of selective agonists for this receptor may increase the efficacy and safety of TH replacement treatment by circumventing the potential side effects of excess T3 administration (100). Future studies are necessary to assess which synthetic hormone (T4 or T3), dosage and timing of administration yields the best results with the least adverse effects. A TRβ1 agonist, GC-1, with no effect on cardiac rhythm, has been shown to have a proangiogenic effect dependent on MAPK signaling (114). A chemical compound named CO23 has recently been synthesized as a TRα1-selective agonist in amphibian models. However, this compound loses its selectivity in rat models. Further investigation is required in order to understand the differences in thyroid receptor expression and development (115). Additional selective TRα1 agonists are now being synthesized (116). The available evidence indicates that TH replacement treatment should be administered at a physiological dose to obtain the maximal protective effect (117). The use of supra-physiological doses may increase heart rate and contractile function, which may enhance energy expenditure and thus aggravate ischemia, masking any potentially beneficial effects (42,108). In humans, acute infusion of T3 (0.8 μg/kg bolus followed by 0.113 μg/kg per hour) appears to achieve inotropic and antiapoptotic effects (118). Supra-physiological doses may also be associated with increased risk of atrial and ventricular tachyarrhythmias, symptoms of hyperthyroidism and adverse extra-cardiac consequences, particularly the development or worsening of osteoporosis (119). Furthermore, the long-term effects of THs treatment on myocyte remodeling following myocardial infarction are still unknown and require further investigation.
At the time of this review, clinical interventional studies involving THs supplementation are available in patients undergoing CABG surgery or with chronic heart failure (117). The randomized trials known as the THIRST study (98) and ThyrAMI (120) are the first clinical trials investigating the use of THs treatment in acute myocardial infarction. The recently published THIRST study is a phase II, randomized, double-blind, placebo-controlled study involving the administration of T3 (maximum dosage 15 μg/m²/day) in patients with acute myocardial infarction treated with primary angioplasty and with free T3 levels below the lower reference range (121). This study showed that T3 therapy is safe and improves regional dysfunction in patients with STEMI and NTIS. The results reveal the absence of major or minor side effects induced by T3 treatment, such as arrhythmias and increased heart rate (121). In fact, T3-treated patients had a lower incidence of arrhythmias and a slower heart rate at follow-up (98). The untreated group had significantly higher TSH than the T3-treated group at discharge and after 1-month follow-up (121). Cardiac magnetic resonance showed a significant reduction in the global size of necrosis, while regional systolic function tended to improve (121). At follow-up, both groups experienced a significant reduction in wall motion score index, but the difference value (discharge/follow-up) was significantly higher in the treated group (121). Additionally, the T3-treated group had a significantly increased stroke volume at follow-up, with a tendency toward a greater increase in stroke volume compared with the untreated group (121). Given that the study imposed restrictive inclusion criteria that caused difficulties in the enrollment of patients, additional data are necessary to assess clinical targets and treatment doses using this approach (121). The results of ThyrAMI and other studies will be essential to gain a better understanding of the effects of THs in ischemic heart disease and to further characterize the therapeutic potential of THs in this context.
The use of oral T3 as a pretreatment (125 µg/day for seven days prior to surgery) in patients undergoing CABG with LV dysfunction improved post-ischemic recovery of cardiac function and significantly lowered the mean inotropic requirements (122). In this study, the treated group exhibited no changes in blood pressure, heart rate, or cardiac rhythm (122). In fact, the low levels of T3 resulting from CABG can be reversed by oral T3 administration to patients with LV dysfunction. Although no differences between groups in the incidence of death and myocardial ischemia were reported, the length of stay (14). Additionally, T3 treatment at reperfusion in patients after CABG improved cardiac index and decreased cardiac troponin I release (124). Other studies show the effect of THs supplementation on cardiac hemodynamics after acute myocardial infarction. Ojamaa et al. reported increased ejection fraction in rats after administration of 1.2 µg/day of T3 for 1 week after acute myocardial ischemia (125). Administration of 3.0 µg/day of T3 plus 12 µg/day of T4 for 2 weeks after acute myocardial ischemia in rats increases ejection fraction, decreases wall tension and improves left ventricular geometry (126). In isolated rat hearts, T3 was demonstrated to have cardioprotective properties during reperfusion after ischemia, as evidenced by reduced LDH release and increased contractile recovery (127). In isolated dog hearts, a bolus of T3 (0.2 µg/kg) after reperfusion increased coronary blood flow by decreasing coronary arterial resistance (123). The results of a double-blind, randomized placebo-controlled clinical trial suggested that 0.4 mg/kg taken orally once a day for 4 days before surgery may provide protection against myocardial IRI during cardiac surgery in children by increasing HSP70 and MHC-α expression, inducing pharmacological ischemic preconditioning (128). A study in humans that aimed to evaluate the effects of TH replacement therapy on systolic and diastolic function in SCH patients showed a statistically significant increase in ejection fraction after five months of levothyroxine therapy (129). In this study, the starting dose of levothyroxine was 25 µg, and the euthyroid state was achieved with a mean dose of 60.8 ± 19 µg (129). Another study aimed to compare the effects of long-term, low-dose T3 (1.2 µg/100g/day) and T4 (4.8 µg/100g/day) hormone treatment to those provided by aerobic exercise training in rats post-myocardial infarction (130). The T3 and T4 hormone treatment enhanced cardiac function, with infarct size reduction and increased ejection fraction, without differences in heart rate, cardiac output and serum THs levels. Low-dose THs might offer a suitable treatment option after myocardial infarction in patients who are intolerant to aerobic exercise training (130). Evidence suggests that a hypothyroid tissue state may be present despite normal circulating levels of THs. Therefore, it is important to identify a good biomarker of a tissue hypothyroid-like state in order to treat patients effectively. The results are promising so far; experimental and clinical studies demonstrate that THs can limit ischemic injury, attenuate cardiac remodeling, and improve hemodynamics. However, the optimal timing for pretreatment and the treatment dose must be carefully evaluated to maximize the benefits.
Conclusions
It is now recognized that even subtle changes in TH levels can lead to adverse effects in the cardiovascular system. Experimental and clinical evidence suggests a close link between low TH levels and poor prognosis in ischemic heart disease. This condition should therefore be regarded as a cardiovascular risk factor. Accordingly, TH replacement therapy may yield improvements in lipid profiles, potentially reversing myocardial dysfunction and preventing the progression to heart failure. TH replacement treatment exhibits anti-ischemic and cardioprotective effects, making it a promising therapeutic approach for ischemic heart disease. Moreover, the treatment of subclinical hypothyroidism and nonthyroidal illness syndrome constitute topics garnering increased interest; recent studies suggest that therapy with physiological doses of T3 is safe and provides beneficial effects on ischemic heart disease. Large clinical trials involving TH replacement treatment are necessary to evaluate the potential benefits on morbidity and mortality in patients with ischemic heart disease, as well as any potential long-term consequences.
Declaration of interest
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of this review.
Funding
This work was supported by the project DOCnet (Grant number NORTE-01-0145-FEDER-000003), supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF).
Quantity and Distribution of Eosinophils in Esophageal Specimens of Adults: An Iranian Population-Based Study
Background & Objective: Eosinophils are normally found in different parts of the gastrointestinal tract and with less prevalence in the esophagus. Eosinophilic infiltration is increased as part of inflammatory reactions in various diseases. The aim of this study was to determine the count and distribution of eosinophils in esophageal specimens obtained for different causes. Methods: Endoscopy and pathology reports of esophageal specimens in Shahid Beheshti University related hospitals, Tehran, Iran, were extracted from 2016 to 2019. The prevalences of gastroesophageal reflux disease (GERD), malignancy, eosinophilic esophagitis, and asymptomatic patients were determined as percentages of the total resection and biopsy specimens. Cases in each group were calculated and randomly selected according to the inclusion criteria. All data were analyzed statistically using SPSS software. Results: A total of 258 biopsy and resection specimens were evaluated in this study. Forty-three cases (16.7%) were diagnosed as normal esophageal mucosa, 42 cases (16.3%) as non-specific esophagitis, 155 cases (60.1%) as gastroesophageal reflux disease, 4 cases (1.6%) showed malignancy, and other diagnoses were recorded for 14 cases (5.4%). The numbers of eosinophils in the epithelium and lamina propria in the normal group were 0.1±0.5 and 2.08±2.33, respectively. The eosinophil count in different groups and its relation to different histopathologic findings were diverse. Conclusion: The number of eosinophils within the lamina propria was significantly higher than those found within other layers. The highest mean eosinophil count was observed in the epithelium and the lamina propria of cases diagnosed as GERD.
Introduction
Eosinophils act as effective agents in defending the body against foreign factors as well as in tissue regeneration (1). Eosinophils are usually found in different parts of the gastrointestinal (GI) tract and are less commonly encountered in the esophagus's superficial epithelium and lamina propria. On the other hand, obvious eosinophilic infiltration in the epithelium or eosinophilic degranulation is always abnormal. The presence of eosinophils in the GI system is also affected by environmental factors such as diet and geographic area. There is a significant increase in eosinophil count in inflammatory conditions. Esophageal eosinophils are mainly a sign of chronic mucosal damage due to gastroesophageal reflux disease (GERD), although similar findings can be seen in eosinophilic esophagitis (EOE). A small number of eosinophils in the distal part of the esophagus without evidence of mucosal damage is insufficient for making a diagnosis of esophagitis. An increase in the number of eosinophils in the esophageal mucosa is also commonly associated with drugs and less commonly with infectious esophagitis. In all these conditions, eosinophils are seen admixed with variable numbers of lymphocytes and neutrophils. However, the differential diagnosis of pure eosinophilic infiltration is somewhat limited to hypersensitivity reactions and some infections (2). For decades, eosinophils were thought to be involved only in fighting parasitic infections and in allergic conditions, but recent studies showed that eosinophils might be involved in controlling inflammation, maintaining the epithelial barrier, participating in tissue remodeling, and in the immune response (3)(4)(5)(6). Previous studies in mice have shown that eosinophils are present in the digestive system during infancy. This happens shortly after the GI tract microbiota is formed (7). In normal and disease-free regions, the number of eosinophils increases progressively from the esophagus to the stomach, small intestine, and colon. In fact, from the proximal to the distal GI tract, the eosinophil count increases, and the maximum eosinophil count is seen in the cecum (8)(9)(10). Also, an elevated eosinophil count is seen in some gastrointestinal diseases such as inflammatory bowel disease (IBD) and colorectal cancer (CRC).
Previous studies have also shown that diseases such as EOE are associated with an elevated eosinophil count in different layers (1,(11)(12)(13). Evaluation of mucosal eosinophilia plays an important role in interpreting endoscopic biopsies; for example, a high eosinophil density can be a sign of allergic diseases. Despite the increasing prevalence of diseases associated with an increased eosinophil count in the GI tract, there is limited information about the normal eosinophil limits, abnormal increases, and the pattern of eosinophil distribution in different layers (9,10,14). Therefore, the present study aimed to determine the eosinophil count and distribution in esophagectomy and biopsy specimens obtained from the hospitals of Shahid Beheshti University of Medical Sciences from 2016 to 2019.
Material and Methods
The present diagnostic study was performed on 258 esophageal specimens. Endoscopic and pathological reports of esophageal biopsy and resection specimens were extracted from 2016 to 2018 by referring to the archives of the Pathology Departments of the hospitals of Shahid Beheshti University of Medical Sciences. According to the pathology report, a total of 258 available cases were classified into different diagnostic groups, including the normal group, non-specific esophagitis, GERD, esophageal malignancy, and miscellaneous diagnoses. The slides of each specimen were extracted and reviewed. The eosinophils in different layers of the specimens were counted, and other histopathological findings were recorded for each case based on the questionnaire.
Statistical Analysis
The collected data were analyzed with SPSS software (ver. 24). Specialized statistical tests, including analysis of variance (ANOVA), the Kolmogorov-Smirnov test, the two-sample independent t-test, and the Pearson correlation test, were performed, and the chi-square test was used for categorical variables. A P-value < 0.05 was considered statistically significant in all tests.
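For illustration only, comparisons of the kind described above could be reproduced in Python; the analysis in this study was performed in SPSS, and the file name and column names below are hypothetical stand-ins for the study's data sheet.

```python
# Illustrative sketch only (hypothetical file and column names); the original analysis used SPSS.
import pandas as pd
from scipy import stats

df = pd.read_csv("esophageal_specimens.csv")  # assumed: one row per specimen

# One-way ANOVA: epithelial eosinophil counts across diagnostic groups
groups = [g["eos_epithelium"].dropna() for _, g in df.groupby("diagnosis")]
f_stat, p_anova = stats.f_oneway(*groups)

# Pearson correlation: eosinophils vs. neutrophils in the lamina propria
r, p_corr = stats.pearsonr(df["eos_lamina_propria"], df["neutro_lamina_propria"])

# Two-sample independent t-test: epithelial eosinophil counts by sex
male = df.loc[df["sex"] == "M", "eos_epithelium"]
female = df.loc[df["sex"] == "F", "eos_epithelium"]
t_stat, p_ttest = stats.ttest_ind(male, female)

print(f"ANOVA p={p_anova:.3f}, Pearson r={r:.2f} (p={p_corr:.3f}), t-test p={p_ttest:.3f}")
```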
Results
The present study was carried out on 258 adult cases with a mean age of 50.44±16.70 years. The frequencies of females and males were 154 (59.7%) and 104 cases (40.3%), respectively. Based on the histopathological findings, the cases were distributed into different diagnostic groups as follows: normal (n=43, 16.7%), non-specific esophagitis (n=42, 16.3%), GERD (n=155, 60.1%), malignancy (n=4, 1.6%), and miscellaneous diagnoses (n=14, 5.4%). Table 1 shows the eosinophil counts through different esophageal layers in different diagnostic groups. The ANOVA test showed that the average number of eosinophils in the epithelium and lamina propria was significantly different among diagnostic groups (P<0.01). The number of eosinophils in the lamina propria was significantly higher than those found in other layers in all groups. The Pearson correlation test was used to investigate the association between the average numbers of eosinophils and neutrophils in different esophageal layers (Table 2). The results showed a significant positive association between the number of eosinophils and neutrophils in the epithelial layer and the LP (P<0.001). There was no significant positive association between the number of eosinophils and neutrophils in the submucosa, muscularis propria, and adventitia, likely owing to the small number of specimens containing these layers. The Pearson correlation test was used to investigate the association between the numbers of eosinophils and LPCs in different esophageal layers (Table 3). The findings showed a significant positive association between the number of eosinophils and LPCs in the LP (P<0.001). However, no significant positive association was found between the number of eosinophils and LPCs in the epithelial, muscularis propria, submucosa, and adventitia layers.
In this study, the average numbers of eosinophils, neutrophils, and LPC in the total esophageal layers were 7.75±14.70, 5.63±20.19 and 90.10±82.27, respectively.
A two-sample independent t-test showed that the difference between the average number of eosinophils in epithelial and lamina propria layers in men and women was not statistically significant (P=0.88>0.05 and 0.60>0.05, respectively).
There was no statistically significant difference in the average number of eosinophils in the epithelium at different degrees of basal cell hyperplasia (BCH) (P=0.14). However, in the lamina propria, the average number of eosinophils differed significantly across degrees of BCH (P=0.02). In cases with chronic inflammation, the ANOVA test showed a significant increase in eosinophil infiltration of the epithelium and lamina propria in proportion to the intensity of chronic inflammation (P<0.05 in both layers).
The ANOVA test also showed a significant increase in the average eosinophil infiltration of the epithelium and lamina propria layers in proportion to the intensity of acute inflammation (P<0.01 in both layers).
In the specimens with distended pale squamous balloon cells, the difference in the average number of eosinophils in the epithelium and lamina propria layers was not statistically significant (P=0.19 and P=0.22, respectively).
In the cases with intercellular edema, the mean eosinophil number in the epithelium and lamina propria layers was significantly higher (P<0.001 in both layers) than in cases without it (Tables 4 and 5). No statistically significant difference was observed in the average number of eosinophils in the epithelium and lamina propria in terms of the presence/absence of surface erosion.
The two-sample independent t-test showed a statistically significant difference in the mean eosinophil number in the presence of fibrosis in lamina propria (P=0.002) ( Table 6).
Discussion
The present study was carried out on 258 cases, of which 154 (59.7%) were women and 104 (40.3%) were men. One of the main objectives of the current study was to determine the mean number of eosinophils in different layers of the esophagus, especially in normal individuals, so that we can introduce a normal limit in the Iranian population to avoid overestimation of eosinophilia. The average numbers of eosinophils were 0.10±0.50 and 2.08±2.33 in the epithelium and LP layers, respectively. The statistical analysis showed a significantly higher average eosinophil count in the LP layer than in the epithelial layer. Matsushia et al. carried out a study on 132 normal Japanese individuals to investigate the effect of race on the number of eosinophils in the GI tract and compared it with American and Caucasian races. They found that the number of mucosal eosinophils was clearly higher in all parts of the GI tract in the Asian race than in the Caucasian and American races. They also stated that the number of mucosal eosinophils in esophageal specimens was 0.07±0.43 (6).
However, in a study of gastrointestinal biopsy specimens from atopic and non-atopic individuals by DeBrosse, the number of eosinophils was reported as 0.03±0.10 eosinophils per high-power field (hpf), and the highest eosinophil number was 1 eosinophil/hpf. They found a significant difference between atopic and normal people in terms of the eosinophil number in the biopsy specimens. In other words, the number of eosinophils in the biopsy specimens of atopic individuals was significantly different from that of non-atopic individuals (3).
In our study, the number of eosinophils in different esophageal layers was also assessed in various diagnostic groups. The highest eosinophil levels were found in the epithelium (average of 1.81±7.28) and lamina propria (average of 8.09±12.65) of the GERD group, followed by the non-specific esophagitis group, with average eosinophil numbers of 0.67±2.10 in the epithelium and 4.42±6.98 in the lamina propria, which were clearly higher than in the normal group (epithelium=0.10±0.50 and lamina propria=2.08±2.33).
It was impossible to compare the malignancy and miscellaneous groups due to the small number of cases in these groups. The mean eosinophil count in the epithelial and lamina propria layers of eosinophilic esophagitis was not assessable in our study due to few cases in this category; however, there are defined criteria and eosinophil count cutoff for the eosinophilic esophagitis in the textbooks.
There was no significant difference between males and females in terms of the average number of eosinophils in the epithelial and lamina propria esophageal layers.
In addition, we evaluated the relationship between the eosinophil count and other histopathological factors. The results showed a significant positive association between the number of eosinophils and the number of neutrophils in the biopsy specimens, the number of LPCs in the lamina propria layer, the degree of basal cell hyperplasia, the presence of elongation of the lamina propria papillae, and the presence of intercellular edema and fibrosis in the lamina propria, as well as between the eosinophil count and the intensities of acute and chronic inflammation.
There was no statistically significant association between the number of eosinophils and LPCs in the epithelial layer, or between the number of eosinophils and the presence of distended pale squamous balloon cells or surface erosion.
Conclusion
Finally, we conclude that the number of eosinophils is undoubtedly higher in the lamina propria than in other layers in all groups, and that the highest mean eosinophil count is indeed found in the GERD group. Nonetheless, a consensus on the eosinophil count limit requires more comprehensive studies with adequate sample sizes, designed with matched clinical and pathological categories across different geographical regions.
HS‐GC‐IMS detection of volatile organic compounds in Acacia honey powders under vacuum belt drying at different temperatures
Abstract Honey is a commodity of great nutritional value, but deep‐processed honey products are uncommon. Herein, we used a vacuum belt dryer to dry Acacia honey at 60°C, 70°C, and 80°C, prepared it into powders, and analyzed differences in their volatile compounds. We established an HS‐GC‐IMS method to detect the volatile organic compounds (VOCs) of these three Acacia honey powders (AHPs). In total, 77 peaks were detected, and 23 volatile compounds were identified, including eight aldehydes, six ketones, three furans, one alcohol, one phenol, one lactone, one ester, one acid, and one nitrile. Moreover, principal component analysis (PCA) and fingerprint similarity analysis based on the Euclidean distance distinguished the three heating temperature treatments. It was concluded that there are significant differences in volatile substances at the different tested temperatures, and that more volatile compounds were detected when the AHP was produced at 80°C.
The abbreviations of Acacia honey powder under different processes are as follows: VBD-AHP1 (Acacia honey powder after vacuum belt drying at 60°C); VBD-AHP2 (Acacia honey powder after vacuum belt drying at 70°C); and VBD-AHP3 (Acacia honey powder after vacuum belt drying at 80°C).
| HS-GC-IMS system
Headspace sampling conditions were set as follows: a 2.0 g sample was placed in a 20 mL headspace bottle and incubated at 80°C for 20 min. The centrifuge speed was 500 rpm, the temperature of the injection needle was 85°C, and 500 μL of sample was injected.
| Statistical analysis
Statistical data analysis was performed from different perspectives using the Laboratory Analytical Viewer (LAV) and GC-IMS Library Search software.
| HS-GC-IMS plots of different treatments of honey powders
In this study, HS-GC-IMS was used to analyze the VOCs of Acacia honey powders produced at different temperatures during vacuum belt drying. In Figure 1, the ordinate represents the gas chromatography retention time, and the abscissa represents the drift time. When the drift time is between 7.92 and 7.93 ms, there is a reaction ion peak (RIP). Moreover, when the drift time is between 8.3 and 8.8 ms, there is a separately marked ethanol peak with a higher signal response intensity.
In order to compare the differences between these three samples in more detail, the difference comparison mode was used (Figure 2): the spectrum of VBD-AHP1 was selected as the reference, and the reference was subtracted from the spectra of the two other samples. If the VOC contents are the same, the background after subtraction is white; red means the concentration of the substance is higher than the reference, while blue means it is lower. The brighter the color, the higher the content, and vice versa. It can be observed from Figure 2 that there was little difference between VBD-AHP1 and VBD-AHP2. When the drift time is between 8.0 and 9.5 ms, the content of VOCs in VBD-AHP2 was lower than in VBD-AHP1, and later, around a drift time of 10.0 ms, VBD-AHP2 was higher, but the difference was not significant. For VBD-AHP3, the contents of volatile substances were more complicated than in the former two, so further analysis was needed. However, more volatile compounds were found in VBD-AHP3, possibly because the higher temperature promoted the formation of additional volatile compounds (Plutowska et al., 2011).
| Identification of VOCs from different treatments of Acacia honey powders
The aroma components of Acacia honey contain alcohols (14.41 ng/ml), alkanes (4.34 ng/ml), esters (3.31 ng/ml), acids (3.06 ng/ml), aldehydes (2.45 ng/ml), furan, benzene and its derivatives (0.93 ng/ml), and ketones (0.70 ng/ml) (Pei et al., 2014). In this research, HS-GC-IMS was used to detect the VOCs of Acacia honey powder produced by vacuum belt drying at different temperatures. The qualitative analysis of volatile components in Acacia honey powder is shown in Figure 3, in which the abscissa represents the drift time (normalized) and the ordinate represents the retention time, and the numbers correspond to the compounds in Table 1. A total of 77 peaks were detected and 23 volatile compounds were identified, including eight aldehydes, six ketones, three furans, one alcohol, one phenol, one lactone, one ester, one acid, and one nitrile. Table 1 lists the qualitative results, including the compound name, CAS number, molecular weight (MW), the retention index (RI), the retention time (RT), and the drift time (DT).
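As a rough illustration of the identification step (the actual matching was performed with the GC-IMS Library Search software), a detected peak can be assigned by comparing its retention index and drift time against reference entries within tolerance windows; the reference values and tolerances below are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch of library matching by retention index (RI) and drift time (DT).
# The library entries and tolerance values are placeholders for illustration only.
library = [
    {"name": "benzaldehyde", "ri": 962, "dt": 1.15},
    {"name": "heptanal", "ri": 901, "dt": 1.33},
    {"name": "octanal", "ri": 1003, "dt": 1.40},
]

def match_peak(ri, dt, ri_tol=15, dt_tol=0.02):
    """Return library compounds whose RI and DT both fall within the tolerances."""
    return [c["name"] for c in library
            if abs(c["ri"] - ri) <= ri_tol and abs(c["dt"] - dt) <= dt_tol]

print(match_peak(ri=960, dt=1.16))  # -> ['benzaldehyde'] under these assumed tolerances
```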
Heat treatment can increase aldehydes and ketones (Li et al., 2012). Aldehydes were derived from the auto-oxidation of lipids, while ketones were mainly derived from the thermal oxidation or degradation of unsaturated fatty acids (Liu et al., 2020).
Surprisingly, n-nonanal was found only in Acacia honey, which also contained its homologues heptanal and octanal, which have a sweet citrus flavor (Plutowska et al., 2011). Moreover, Escriche et al. (2009) found that heat treatment significantly changed 29 compounds, 20 of which belonged to the alcohol and aldehyde families. Among them, 2,3-butanediol was detected and its content was increased by pasteurization. 2,3-Butanediol can be prepared from sugar, molasses, malt syrup, or alcohol mother liquor as raw materials through biological fermentation. Of note, in this study, a large number of benzene compounds (benzaldehyde, phenol, phenylacetic acid) were also found. In addition, Acacia honey contained a lot of esters, accounting for 32.43% of its aroma components (Pei et al., 2014). A small amount of ethyl acetate was also detected in Acacia honey (Wang et al., 2021), which may be derived from the interaction of alcohols and free fatty acids produced by lipid oxidation in the sample. Acacia honey also contained phenylacetaldehyde (1.96 ng/ml) (Pei et al., 2014), which can be oxidized on heating to generate phenylacetic acid. The detected 3-butenenitrile belongs to the category of nitriles and may result from prolonged contact between packaging materials and honey powders.
| Gallery plot of different treatments of Acacia honey powders
In order to clearly compare the specific volatile substance differences among the three samples, a gallery plot (fingerprint) of the identified compounds was generated.
| Cluster analysis of Acacia honey powders
Principal component analysis (PCA) is a multivariate statistical method used to examine the correlation between multiple variables; it constitutes a powerful visualization tool, provides a way to reduce the dimensionality of the data, and can eliminate unnecessary information (Chen et al., 2020). In order to analyze the problem comprehensively, PCA was applied to these variables.
Generally, when the cumulative contribution rate of PC1 and PC2 reaches 60%, the PCA model is considered the preferred separation model (Wu et al., 2015). PCA has been used to distinguish honey from different floral origins, to distinguish different varieties of honey, and to discriminate between conventional honey and organic honey (Schuhfried et al., 2016; Schwolow et al., 2019; Wang, Yang, et al., 2019). However, there has been less research on honey processing. In this study, PCA was used to distinguish VBD-AHPs prepared at different temperatures. The total cumulative contribution was 91%, of which PC1 accounted for 74% and PC2 for 17%. In addition, it can be observed that VBD-AHP1 and VBD-AHP2 were located close to each other, which confirms that there was no significant difference between them. However, both of them were far away from VBD-AHP3, which shows that its components were clearly different.
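For illustration only, a PCA of the peak-intensity fingerprints could be carried out as sketched below; the intensity matrix and replicate counts are hypothetical stand-ins for the data exported from the LAV software.

```python
# Hypothetical sketch: PCA of VOC peak-intensity fingerprints (one row per measurement).
import numpy as np
from sklearn.decomposition import PCA

# X: rows = replicate measurements of VBD-AHP1/2/3, columns = the 77 detected peaks.
X = np.random.rand(9, 77)  # placeholder data; real data would come from the LAV export
labels = ["VBD-AHP1"] * 3 + ["VBD-AHP2"] * 3 + ["VBD-AHP3"] * 3

pca = PCA(n_components=2)
scores = pca.fit_transform(X)          # coordinates of each measurement on PC1/PC2
print(pca.explained_variance_ratio_)   # the study reports PC1=74%, PC2=17% (91% cumulative);
                                       # random placeholder data will give different values
```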
Meanwhile, fingerprint similarity analysis based on the Euclidean distance was used to judge the differences between the samples. This is a cluster analysis method based on distance discrimination: the Euclidean distance refers to the true distance between two points in space, or the natural length of a vector (i.e., the distance from the point to the origin) (Tang, 2019), and it reflects the degree of closeness between the research subjects (Li et al., 2008). The squared Euclidean distance measurement method has been used for cluster analysis of the gas-phase matching data of 38 honey samples from 4 different nectar sources, and the honeys from each nectar source could each be clustered into their own category. Figure 6 shows the fingerprint similarity based on the Euclidean distance, and Table 2 shows the values of the Euclidean distance between the three samples.
It can be seen that VBD-AHP1 and VBD-AHP2 were relatively close, with an average Euclidean distance of 2,507,274.071 between them. Meanwhile, the average Euclidean distance between VBD-AHP1 and VBD-AHP3 was 12,494,019.43, and that between VBD-AHP2 and VBD-AHP3 was 11,772,169.19. Thus, VBD-AHP3 differed from the other two samples more markedly than they differed from each other.
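A minimal sketch of the Euclidean-distance comparison is shown below; as above, the fingerprint vectors are placeholders rather than the measured intensities.

```python
# Hypothetical sketch: average pairwise Euclidean distance between sample fingerprints.
import numpy as np

def mean_euclidean(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Euclidean distance over all replicate pairs of two samples."""
    return float(np.mean([np.linalg.norm(x - y) for x in a for y in b]))

ahp1 = np.random.rand(3, 77)  # placeholder replicate fingerprints (77 peak intensities)
ahp2 = np.random.rand(3, 77)
ahp3 = np.random.rand(3, 77)

print(mean_euclidean(ahp1, ahp2))  # a smaller distance indicates more similar VOC profiles
print(mean_euclidean(ahp1, ahp3))
```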
| DISCUSSION AND CONCLUSION
Generally, liquid Acacia honey has a thick, fluid-like consistency; fresh Acacia honey is colourless, and the color may deepen during storage. Moreover, it has a light fragrance of sophora flowers, does not crystallize easily, and no impurities are visible to the naked eye. Meanwhile, the content of glucose and fructose is more than 60 g/100 g, sucrose is less than 5 g/100 g, and maltose is less than 3 g/100 g. Visual analysis showed a brownish-yellow amorphous powder for AHP dried at 60°C and 70°C, and a light yellow amorphous powder for AHP dried at 80°C. The AHP dried at 80°C was also finer and less dense than those dried at 60°C and 70°C. Most importantly, honey powder retains the pure honey flavor and is a natural sweetener; it is more convenient to carry than liquid honey, avoids the waste and environmental hygiene problems of liquid honey, reduces storage space, and prolongs preservation time. Therefore, the study of AHP is crucial and essential.
Honey powder is a deep-processing product of honey, and nowadays the processing of honey powder is monotonous, consisting mainly of freeze drying and roller (drum) drying. We therefore considered whether vacuum belt drying could be applied in the honey industry; a review of the literature indicated that, for materials with high stickiness, easy agglomeration, and thermoplastic and thermosensitive properties, the best option is a vacuum belt dryer.
Therefore, in order to analyze its process parameters, we considered three different temperatures of vacuum belt drying, mainly to detect the differences in VOCs between the three honey powders, and to identify differences in flavor aspects that consumers pay attention to.
The HMF in Acacia honey is mainly produced by amino acids and glucose or fructose in honey undergoing the Maillard reaction under acidic conditions (Kowalski et al., 2013). Since HMF can cause irritation to human eyes, mucous membranes, and skin, and can cause cell and gene mutations, excessive intake can result in poisoning and even initiate cancer (Ma et al., 2019). Therefore, HMF is a mandatory detection indicator in the international honey trade, with a required content of ≤40 mg/kg. Our raw material acceptance criteria indicate that the HMF content must be less than 10 mg/kg, and the HMF content has been reported to remain acceptable within 4 h of heating at 80°C (Lu et al., 2006). In addition, Lu et al. (2006) found that the formation rate of HMF followed the order buckwheat honey > jujube honey > acacia honey > locust honey, and proposed that during thermal processing of honey, the heating temperature should be kept within 80°C to prevent excessive HMF content. Furthermore, Wang et al. (2018) surveyed commercially available Acacia honey in Chengdu, Sichuan Province, China: 22 samples were tested, the eligibility rate was 100%, the mean HMF content was 5.96 ± 2.93 mg/kg, 81.82% of samples were in the range of 0-10 mg/kg, and 18.18% were in the range of 10.1-20 mg/kg.
In this study, HS-GC-IMS was used to detect VOCs of VBD-AHP.
A total of 77 peaks were detected and 23 volatile compounds were identified, including eight aldehydes, six ketones, three furans, one alcohol, one phenol, one lactone, one ester, one acid, and one nitrile. Since GC-IMS is not able to detect alkane compounds, 43 signal peaks were not characterized in the current study; these unknown peaks are worth identifying in future work using complementary techniques such as GC-MS.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
AUTHOR CONTRIBUTIONS
Duo Feng investigated the study, wrote the original draft, and was involved in plot analysis. Jing Wang was involved in formal analysis. Yue He investigated the study. Xiao-jiao Ji validated the study. Hui Tang provided resources. Yong-mei Dong visualized the data. Wen-jie Yan wrote, reviewed, and edited the manuscript, supervised the data, and acquired funding.
THE CONDITION ASSESSMENT OF HOTEL BUSINESS DEVELOPMENT IN UKRAINE
Topicality. In this article, the activities of collective accommodation facilities (CAF) in Ukrainian regions are analysed, in particular hotel enterprises (the accommodation fund). In the hotel market, strengthening competition between hotels can be observed, including from global hotel concerns. Therefore, hotel enterprises are trying to increase their productivity and competitiveness by implementing effective organisation and management methods. Results. The segment of five-star hotels has suffered more than three-star hotels, while a substantial share of the room fund is concentrated in Kyiv and in Lvivska, Odeska, Kyivska and Ivano-Frankivska regions. Conclusions and discussions. Thus, the study highlights that the reduced demand for hotel enterprises' services in crisis circumstances is the main problem of hotel business development. It is necessary to develop infrastructure, introduce innovation and project support for the development of hotel enterprises, and attract investments in the progress of economic sectors. Double standards and a significant payback period of hotel enterprises in comparison with other real estate sectors hinder the amplification of the country's hotel potential. Methods. The study relies on the scientific and theoretical elaborations of domestic and foreign scholars on the problems of quality management of hotel services, and on information materials of statistical and reference publications.
The urgency of the problem
The problem formulation. A focus on European standards and a gradual transition to them is a distinctive feature of the hotel business as one of the most promising areas in Ukraine. Hotel enterprises are looking to increase their profits and are searching for suitable ways and effective tools to achieve the desired financial results. By world standards, the accommodation fund in Ukraine is almost 10 times smaller than the European average. The latter necessitates the study of hotel enterprises and similar accommodation facilities.
Unresolved issues. Studies of collective accommodation facilities have not been conducted; the problems of efficient functioning of hotel enterprises have not been researched in detail, nor have ways of improving the competitiveness of enterprises in modern conditions.
Aim and methods of research
The aim of the study is to analyse the activity of collective accommodation facilities, to identify problems of effective hotel business functioning in Ukraine.
The methodological basis of the study is the assessment of the state of hotel business development in Ukraine through the study of CAF, hotels and similar accommodations.
Research methods: general scientific methods are used in this research, drawing on the scientific and theoretical elaborations of domestic and foreign scholars on the problems of quality management of hotel services, information materials of statistical and reference publications, analysis, synthesis, generalisation, and comparison.
The object of the study is the process of assessing the state and prospects of hotel business development in Ukraine.
The subject of the study is a set of theoretical and practical aspects of the state of hotel business amplification in Ukraine.
The scientific novelty of this research is to substantiate the peculiarities of hotel business development in Ukraine in order to identify the problems of its efficient functioning. Based on statistical data, the numbers of CAF, hotel enterprises, and rooms in hotels and similar accommodation facilities for the period from 2018 to 2020 are analysed. It is found that the number of foreign tourists staying in hotels decreased significantly in 2020. The main reason is considered to be the consequences of the COVID-19 pandemic. In 2020, a substantial number of rooms is observed in hotels in Kyiv, and in Lvivska, Odeska, Kyivska and Ivano-Frankivska regions.
The informative basis of the study is legislative and normative acts, regulating hotel enterprises functioning, theoretical and methodological elaborations of blighty and foreign scientists, statistical materials of the State Statistics Service of Ukraine.
Results of the research paper
An accommodation facility, in the terminology of the World Tourism Organisation (UNWTO), is any accommodation object that regularly or occasionally provides places for overnight stays. According to international practice developed by UNWTO experts, accommodation facilities are divided into collective and individual. CAF are accommodations that provide places in a room or another unit. The number of places must exceed a certain minimum for groups of persons larger than a single family. All places are subject to a single management, and payment is made in accordance with established prices (State Statistics Service of Ukraine. Main Department of Statistics in Lviv region, b.d.). Ukrainian legislation specifies accommodation facilities similar to hotels, and rules for using hotels. The Law of Ukraine «On Tourism» outlines such enterprises as «… enterprises of any organisational and legal form and ownership form, consisting of rooms and providing limited hotel services, including daily bedding, cleaning rooms and bathrooms» (Verkhovna Rada of Ukraine, 1995).
Based on official data of the State Statistics Service, for the period from 2018 to 2020 the number of similar accommodation facilities decreased from 4719 to 1337 un., that is, by -3382 un., or 71.67% in percentage terms (Table 1, Pic. 1). Accordingly, in 2018-2020, the number of rooms in collective accommodation facilities also decreased from 300 010 to 155 029 un., in other words, by -144 981 un., which is 48.33% in percentage terms. These official data characterise the overall decrease in the number of rooms in hotels and similar accommodation means from 2018 to 2020, respectively, from 71 132 to 37 497 un. Thus, a decrease of -33 635 un. is observed, which is a deviation of 42.29% (Pic. 1). A significant decrease in the quantity of hotels and similar accommodation facilities is observed in 2020 compared to 2018, respectively, 728 and 2777 un., which characterises a decrease of -2052 un. (Pic. 2).
Accordingly, the number of rooms in hotels for the period from 2018 to 2020 also decreased from 135 327 to 72 447 un., that is, by -62 880 un., a deviation of 46.47%. The segment of five-star hotels has suffered more than three- and four-star hotels, due to a sharp drop in the flow of foreign tourists to Ukraine. It is interesting that in 2018-2019 the development of collective accommodation facilities belonging to natural persons-subjects of entrepreneurial activity had a positive tendency (3128 un.), while CAF of legal entities developed with a negative trend (1591 un.). The reduction itself was 1537 un. The main reasons for the development of CAF are the simplification of the system of registration and taxation of natural persons-subjects of entrepreneurial activity, the necessity to undergo certification for category recognition, and lower funding for activities.
In order to assess the competitiveness of CAF in the regions of Ukraine, the following study of their total number was conducted. In the comparative dynamics for 2018-2019, the number of CAF increased by 35 un., which is 2.2% (Table 2). However, for the period from 2018 to 2020, there was a decrease in the number of CAF by 254 un. (16%). The main reasons for this are the consequences of the COVID-19 pandemic and uncertainty about the further development of the situation, as well as falling population incomes as a result of the economic crisis; accordingly, hotel companies are suffering losses and reformatting their operations.
Ensuring ecologically safe environmental conditions is a relevant and necessary trend in the development of hotel enterprises. Adherence to the permissible limits of the influence of toxic substances on health is an important task of public safety (Ilyin et al., 2020). Ecological issues are at the forefront of the International Hotel and Restaurant Association (IHRA) activity. The uneven loading of CAF depends on many factors. One of the main ones is the ill-considered policy of local self-government bodies and state authorities.
The scope of the room fund in Ukraine tends to increase: in 2018 and 2019 it was 39 589 and 39 634 rooms. In 2019, in Ivano-Frankivska and Lvivska regions, the total numbers of rooms were 2 120 and 5 677, which is 36 and 35 un. more than in 2018 (Table 3, Pic. 4). However, for the period from 2018 to 2020, a decrease in the number of rooms in hotels and similar accommodation means was observed, from 39 589 to 37 497 un., which is -2 092 un. (5.3%).
Conclusions and discussion of results
Consequently, the study finds that reduced demand for the services of hotel enterprises in crisis circumstances is the main problem of hotel business development. The effective enlargement of hotel enterprises requires a powerful analytical system based on official statistics to support the initiation of management decisions.
A decrease in the number of CAF, hotels, rooms, and foreign tourists who stayed in them is observed. However, in Ukraine the scope of the room fund tends to increase, particularly in Lvivska, Odeska, Kyivska and Ivano-Frankivska regions.
It is necessary to develop infrastructure, improve the quality of services and expand their range in accordance with consumers' requirements and with the use of advanced technologies, as well as to introduce innovation and project support for the development of hotel enterprises, to attract investments in the growth of economic sectors, and to increase population solvency. Double standards, the lack of significant management decisions, and the considerable payback period of hotel enterprises in comparison with other real estate sectors hinder the development of the country's hotel potential. The research showed insignificant investment activity in Ukrainian hotel business development, due to the non-compliance of standards observed in Ukraine with world service standards and the inconsistency between service quality and the price paid by guests in COVID-19 pandemic circumstances. For the progress of the hotel business in Ukraine, it is necessary to introduce the experience of foreign countries, which will ensure the maximum level of management. It is essential to create appropriate conditions for the association of hotels into networks and to form integration units. These processes will enable the search for optimal management methods and cooperation within the network, which will help to obtain the best management results in hotel business operations.
Prospects for further studies are the elaboration of proposals for regulatory and legal support for the activities of CAF and hotel enterprises. Issues related to the study of the classification of economic activity types (CEAT/KVED) remain relevant, taking into account the fact that the classifier is an important source of information and analysis of the capital investments which characterise the hotel business in Ukraine.
Exploiting Context for Biomedical Entity Recognition: From Syntax to the Web
We describe a machine learning system for the recognition of names in biomedical texts. The system makes extensive use of local and syntactic features within the text, as well as external resources including the web and gazetteers. It achieves an F-score of 70% on the Coling 2004 NLPBA/BioNLP shared task of identifying five biomedical named entities in the GENIA corpus.
Introduction
The explosion of information in the fields of molecular biology and genetics has provided a unique opportunity for natural language processing techniques to aid researchers and curators of databases in the biomedical field by providing text mining services. Yet typical natural language processing tasks such as named entity recognition, information extraction, and word sense disambiguation are particularly challenging in the biomedical domain with its highly complex and idiosyncratic language. With the increasing use of shared tasks and shared evaluation procedures (e.g., the recent BioCreative, TREC, and KDD Cup), it is rapidly becoming clear that performance in this domain is markedly lower than the field has come to expect from the standard domain of newswire. The Coling 2004 shared task focuses on the problem of Named Entity Recognition, requiring participating systems to identify the five named entities of protein, RNA, DNA, cell line, and cell type in the GENIA corpus of MEDLINE abstracts (Ohta et al., 2002). In this paper we describe a machine learning system incorporating a diverse set of features and various external resources to accomplish this task. We describe our system in detail and also discuss some sources of error.
System Description
Our system is a Maximum Entropy Markov Model, which further develops a system earlier used for the CoNLL 2003 shared task and the 2004 BioCreative critical assessment of information extraction systems, a task that involved identifying gene and protein name mentions but not distinguishing between them (Dingare et al., 2004). Unlike the above two tasks, many of the entities in the current task do not have good internal cues for distinguishing the class of entity: various systematic polysemies and the widespread use of acronyms mean that internal cues are lacking. The challenge was thus to make better use of contextual features, including local and syntactic features, and external resources in order to succeed at this task.
Local Features
We used a variety of features describing the immediate content and context of each word, including the word itself, the previous and next words, word prefixes and suffixes of up to 6 characters, word shapes, and features describing the named entity tags assigned to the previous words. Word shapes refer to a mapping of each word onto equivalence classes that encodes attributes such as length, capitalization, numerals, greek letters, and so on. For instance, "Varicella-zoster" would become Xxxxx, "mRNA" would become xXXX, and "CPA1" would become XXXd. We also incorporated part-of-speech tagging, using the TnT tagger (Brants, 2000) retrained on the GENIA corpus gold standard part-of-speech tagging. We also used various interaction terms (conjunctions) of these base-level features in various ways. The full set of local features is outlined in Table 1.
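As a rough illustration of the word-shape mapping, the sketch below (not the authors' code) maps each character to a class symbol and caps long runs; the run-capping scheme is an assumption, and the exact compression the original system applies to long words such as "Varicella-zoster" may differ.

```python
# A sketch of a word-shape feature: map characters to class symbols and cap long
# runs so long words share a compressed shape. The run-capping scheme is assumed.
import re

def word_shape(word: str, max_run: int = 3) -> str:
    mapped = []
    for ch in word:
        if ch.isupper():
            mapped.append("X")
        elif ch.islower():
            mapped.append("x")
        elif ch.isdigit():
            mapped.append("d")
        else:
            mapped.append(ch)  # keep punctuation such as '-'
    shape = "".join(mapped)
    # Collapse runs longer than max_run to exactly max_run characters.
    return re.sub(r"(.)\1{%d,}" % max_run, lambda m: m.group(1) * max_run, shape)

print(word_shape("mRNA"))              # xXXX
print(word_shape("CPA1"))              # XXXd
print(word_shape("Varicella-zoster"))  # Xxxx-xxx (the paper reports Xxxxx)
```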
External Resources
We made use of a number of external resources, including gazetteers, web-querying, use of the surrounding abstract, and frequency counts from the British National Corpus.
Table 1: The full set of local features used by the system.
Word features: w_i, w_{i-1}, w_{i+1}; disjunction of the 5 previous words; disjunction of the 5 next words.
Part of speech: TnT POS tag.
Prefixes and suffixes: up to a length of 6.
Abbreviations: abbr_i; abbr_{i-1} + abbr_i; abbr_i + abbr_{i+1}; abbr_{i-1} + abbr_i + abbr_{i+1}.
Word shapes: shape_i, shape_{i-1}, shape_{i+1}; shape_{i-1} + shape_i; shape_i + shape_{i+1}.
Parenthesis matching: signals when one parenthesis in a pair has been assigned a different tag than the other in a window of 4 words.
Frequency
Many entries in gazetteers are ambiguous words, occasionally used in the sense that the gazetteer seeks to represent, but at least as frequently not. So while the information that a token was seen in a gazetteer is an unreliable indicator of whether it is an entity, less frequent words are less likely to be ambiguous than more frequent ones. Additionally, more frequent words are likely to have been seen often in the training data and the system should be better at classifying them, while less frequent words are a common source of error and their classification is more likely to benefit from the use of external resources. We assigned each word in the training and testing data a frequency category corresponding to its frequency in the British National Corpus, a 100 million word balanced corpus, and used conjunctions of this category and certain other features.
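A minimal sketch of how such a frequency-band feature could be computed is given below; the band thresholds and the toy count dictionary standing in for British National Corpus counts are assumptions, not values from the paper.

```python
# Sketch of a frequency-band feature. `bnc_counts` stands in for unigram counts
# derived from the British National Corpus; the band thresholds are assumptions.
bnc_counts = {"the": 6_000_000, "cell": 40_000, "interleukin": 8, "NFkappaB": 0}

def frequency_category(token: str, counts: dict) -> str:
    c = counts.get(token.lower(), 0)
    if c == 0:
        return "FREQ_UNSEEN"
    if c < 10:
        return "FREQ_RARE"
    if c < 1000:
        return "FREQ_MEDIUM"
    return "FREQ_COMMON"

# Conjoin the frequency band with another feature, e.g. gazetteer membership.
def gazetteer_with_frequency(token: str, in_gazetteer: bool, counts: dict) -> str:
    return f"GAZ={in_gazetteer}|{frequency_category(token, counts)}"

print(gazetteer_with_frequency("interleukin", True, bnc_counts))  # GAZ=True|FREQ_RARE
```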
Gazetteers
Our gazetteer contained only gene names and was compiled from lists from biomedical websites (such as LocusLink) as well as from the Gene Ontology and the data provided for the BioCreative 2004 tasks. The final gazetteer contained 1,731,496 entries. Because it contained only gene names, and for the reasons discussed earlier, we suspect that it was not terribly useful for identifying the presences of entities, but rather that it mainly helped to establish the exact beginning and ending point of multi-word entities recognized mainly through other features.
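A gazetteer feature of this kind might be computed as in the sketch below; the longest-match strategy and the tiny gazetteer shown are illustrative assumptions rather than details taken from the system.

```python
# Sketch of a gazetteer feature: mark tokens covered by the longest gazetteer
# match starting at each position. The tiny gazetteer here is purely illustrative.
GAZETTEER = {("tumor", "necrosis", "factor"), ("p53",), ("interleukin", "2")}
MAX_LEN = max(len(entry) for entry in GAZETTEER)

def gazetteer_tags(tokens):
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        match = 0
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            if tuple(t.lower() for t in tokens[i:i + n]) in GAZETTEER:
                match = n
                break
        if match:
            tags[i:i + match] = ["GAZ"] * match
            i += match
        else:
            i += 1
    return tags

print(gazetteer_tags("binding of tumor necrosis factor to p53".split()))
# ['O', 'O', 'GAZ', 'GAZ', 'GAZ', 'O', 'GAZ']
```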
Web
For each of the named entity classes, we built indicative contexts, such as "X mRNA" for RNA, or "X ligation" for protein. For each entity X which had a frequency lower than 10 in the British National Corpus, we submitted instantiations of each pattern to the web, using the Google API, and obtained the number of hits. The pattern that returned the highest number of hits determined the feature value (e.g., "web-protein", or "web-RNA"). If no hits were returned by any pattern, a value "O-web" was assigned. This value was also assigned to all words whose frequency was higher than 10 (using yet another value for words with higher frequency did not improve the tagger's performance).
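The web feature could be realised roughly as in the following sketch; get_hit_count is a hypothetical stand-in for a search-engine hit-count lookup (the Google API used in the paper is not reproduced here), and only the two patterns mentioned above are included.

```python
# Sketch of the web-pattern feature. get_hit_count is a hypothetical stand-in for
# a search-engine hit-count lookup; here it reads a toy cache so the sketch runs.
PATTERNS = {
    "web-RNA": "{} mRNA",
    "web-protein": "{} ligation",
}

def get_hit_count(query: str) -> int:
    toy_cache = {"IL-2 mRNA": 120, "IL-2 ligation": 3}  # placeholder hit counts
    return toy_cache.get(query, 0)

def web_feature(entity: str, corpus_frequency: int) -> str:
    if corpus_frequency >= 10:          # frequent words are not looked up, as in the text
        return "O-web"
    hits = {label: get_hit_count(pattern.format(entity))
            for label, pattern in PATTERNS.items()}
    best_label, best_hits = max(hits.items(), key=lambda kv: kv[1])
    return best_label if best_hits > 0 else "O-web"

print(web_feature("IL-2", corpus_frequency=2))  # web-RNA
```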
Abstracts
A number of NER systems have made effective use of how the same token was tagged in different parts of the same document (see (Curran and Clark, 2003) and (Mikheev et al., 1999)). A token which appears in an unindicative context in one sentence may appear in a very obvious context in another sentence in the same abstract. To leverage this we tagged each abstract twice, providing for each token a feature indicating whether it was tagged as an entity elsewhere in the abstract. This information was only useful when combined with information on frequency.
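One way to implement the two-pass, abstract-level feature is sketched below, under the assumption that the labels from a first tagging pass over the abstract are available; the function and variable names are illustrative, not taken from the system.

```python
# Sketch of the abstract-level feature: was this token tagged as an entity anywhere
# in the same abstract on the first tagging pass? (A token's own tag counts too here.)
from collections import defaultdict

def elsewhere_entity_feature(tokens, first_pass_tags):
    tagged_as = defaultdict(set)
    for tok, tag in zip(tokens, first_pass_tags):
        if tag != "O":
            tagged_as[tok.lower()].add(tag)
    features = []
    for tok in tokens:
        classes = tagged_as.get(tok.lower(), set())
        features.append("ELSEWHERE=" + ("|".join(sorted(classes)) if classes else "NONE"))
    return features

tokens = ["IL-2", "gene", "expression", "requires", "IL-2"]
first_pass = ["O", "O", "O", "O", "protein"]
print(elsewhere_entity_feature(tokens, first_pass))
# ['ELSEWHERE=protein', 'ELSEWHERE=NONE', 'ELSEWHERE=NONE', 'ELSEWHERE=NONE', 'ELSEWHERE=protein']
```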
Deeper Syntactic Features
While the local features discussed earlier are all fairly surface level, our system also makes use of deeper syntactic features. We fully parsed the training and testing data using the Stanford Parser, operating on the TnT part-of-speech tags; we believe that the unlexicalized nature of this parser makes it a particularly suitable statistical parser to use when there is a large domain mismatch between the training material (Wall Street Journal text) and the target domain, but we have not yet carefully evaluated this. Then, for each word in the sentence which is inside a noun phrase, the head and governor of the noun phrase are extracted. These features are not very useful when identifying only two classes (such as GENE and OTHER in the BioCreative task), but they were quite useful for this task because of the large number of classes which the system needed to distinguish between. Because the classifier is now choosing between classes whose members can look very similar, longer distance information can provide a better representation of the context in which the word appears. For instance, the word phosphorylation occurs in the training corpus 492 times, in 482 of which it is classified as other. However, it is the governor of 738 words, of which 443 are protein, 292 are other and only 3 are cell line.
We also made use of abbreviation matching to help ensure consistency of labels. Abbreviations and long forms were extracted from the data using the method of Schwartz and Hearst (2003). These data were combined with a list of other abbreviations and long forms extracted from the BioCreative 2004 task. Then all occurrences of either the long or short forms in the data were labeled. These labels were included in the system as features and helped to improve boundary detection.
Adjacent Entities
When training our classifier, we merged the B- and I-labels for each class, so it did not learn to differentiate between the first word of an entity and an internal word. There were several motivations for doing this. Foremost were memory concerns: our final system, trained on just the six classes, had 1.5 million features; we simply did not have the resources to train it over more classes without giving up many of our features. Our second motivation was that by merging the beginning and internal labels for a particular class, the classifier would see more examples of that class and learn better how to identify it. The drawback of this move is that when two entities belonging to the same class are adjacent, our classifier will automatically merge them into one entity. We did attempt to split them back up using NP chunks, but this severely reduced performance.
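The label-collapsing step and its side effect can be illustrated as follows; this is a hedged sketch of the idea rather than the authors' implementation.

```python
# Sketch: collapse B-/I- prefixes into one label per class (an IO encoding) and read
# entity spans back out; adjacent same-class entities merge into one span.
def collapse_bio(tags):
    return [t.split("-", 1)[1] if t != "O" else "O" for t in tags]

def spans_from_io(tags):
    spans, start = [], None
    for i, t in enumerate(tags + ["O"]):               # sentinel closes a final span
        if start is not None and (t == "O" or t != tags[start]):
            spans.append((start, i, tags[start]))
            start = None
        if t != "O" and start is None:
            start = i
    return spans

bio = ["B-protein", "I-protein", "B-protein", "O", "B-DNA"]
io_tags = collapse_bio(bio)
print(io_tags)                 # ['protein', 'protein', 'protein', 'O', 'DNA']
print(spans_from_io(io_tags))  # [(0, 3, 'protein'), (4, 5, 'DNA')], the two proteins merged
```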
Results and Discussion
Our results on the evaluation data and a confusion matrix are shown in Tables 2 and 4. The confusion matrix suggests areas for further work. Collapsing the B- and I-tags does cost us quite a bit. Otherwise, confusions between some named entity and nothing account for most of the errors, although protein/DNA and cell-line/cell-type confusions are also noticeable.
Confusion matrix on the evaluation data (rows: gold label; columns: system answer, in the order B-DNA, I-DNA, B-RNA, I-RNA, B-cell line, I-cell line, B-cell type, I-cell type, B-protein, I-protein, O):
B-DNA: 723, 39, 0, 0, 1, 0, 0, 0, 154, 1, 138
I-DNA: 52, 1390, 0, 0, 0, 0, 0, 0, 19, 71, 257
B-RNA: 1, 0, 89, 3, 0, 0, 0, 0, 14, 0, 11
I-RNA: 0, 1, 5, 164, 0, 0, 0, 0, 2, 0, 15
B-cell line: 3, 0, 0, 0, 319, 41, 37, 5, 12, 1, 82
I-cell line: 0, 6, 0, 0, 24, 713, 5, 104, 0, 14, 123
B-cell type: 1, 0, 0, 0, 164, 22, 1228, 90, 31, 5, 380
I-cell type: 0, 0, 0, 0, 13, 383, 88, 2101, 8, 27, 371
B-protein: 48, 5, 10, 3, 20, 1, 19, 3, …
Analysis of performance in biomedical Named Entity Recognition tends to be dominated by the perceived poorness of the results, stemming from the twin beliefs that performance of roughly ninety percent is the state of the art and that performance of 100% (or close to it) is possible and the goal to be aimed for. Both of these beliefs are questionable, as the top MUC 7 performance of 93.39% (Mikheev et al., 1998) in the domain of newswire text used an easier performance metric where incorrect boundaries were given partial credit, while both the biomedical NER shared tasks to date have used an exact match criterion where one is doubly penalized (both as a FP and as a FN) for incorrect boundaries. However, the difference in metric clearly cannot account entirely for the performance discrepancy between newswire NER and biomedical NER. Biomedical NER appears to be a harder task due to the widespread ambiguity of terms out of context, the complexity of medical language, and the apparent need for expert domain knowledge. These are problems that more sophisticated machine learning systems using resources such as ontologies and deep processing might be able to overcome. However, one should also consider the inherent "fuzziness" of the classification task. The few existing studies of inter-annotator agreement for biomedical named entities have measured agreement between 87% (Hirschman, 2003) and 89% (Demetrious and Gaizauskas, 2003). As far as we know there are no inter-annotator agreement results for the GENIA corpus, and it is necessary to have such results before properly evaluating the performance of systems. In particular, the fact that BioNLP sought to distinguish between gene and protein names, when these are known to be systematically ambiguous, and when in fact in the GENIA corpus many entities were doubly classified as "protein molecule or region" and "DNA molecule or region", suggests that inter-annotator agreement could be low, and that many entities in fact have more than one classification.
One area where GENIA appears inconsistent is the labeling of preceding adjectives. The data was selected by querying for the term human, yet the term is labeled inconsistently, as is shown in Table 4. Of the 1790 times the term human occurred before or at the beginning of an entity in the training data, it was not classified as part of the entity 110 times. In the test data, there is only one instance (out of 130) where the term is excluded. Adjectives are excluded approximately 25% of the time in both the training and evaluation data. There are also inconsistencies when two entities are separated by the word and.
Outcomes of radiotherapy in advanced external auditory canal cancer
Abstract External auditory canal cancer (EACC) is a rare malignant tumor. In the present study, we retrospectively evaluated the treatment results in patients with advanced EACC who were treated using external-beam radiotherapy (EBRT) combined with chemotherapy or radical surgery. Overall, 21 patients with Stage III (n = 8) or Stage IV (n = 13) EACC who underwent initial treatment at our hospital between 2003 and 2016 were enrolled in this study. The 2-year overall survival (OS) and locoregional control (LRC) rates of all patients were 62% and 71%, respectively. The 2-year OS and LRC rates in patients who had received EBRT and concurrent chemotherapy with docetaxel, cisplatin and 5-fluorouracil (TPF, n = 6) were 100%. These results were higher than the 2-year OS and LRC rates of 62% and 69%, respectively, in patients who had received radical surgery and EBRT (n = 13). The rates were 0% in those who had neither received TPF nor undergone surgery in addition to EBRT (n = 2). Grade 3 bone or soft tissue necrosis was observed in 2 patients who had undergone surgery and postoperative EBRT. Our data suggest that the combination therapy of EBRT and surgery and/or chemotherapy may be the most effective treatment options for advanced EACC, and EBRT with concurrent chemotherapy with TPF is potentially the most acceptable.
INTRODUCTION
External auditory canal cancer (EACC) is a rare malignant tumor that affects 1-6 per 1 million population-years [1,2]. Because of the low numbers of patients in studies to date, the appropriate treatment strategy for EACC remains controversial. Several clinical reviews and meta-analyses have concluded that radical primary surgical treatment combined with postoperative radiotherapy should be recommended as the standard of care for advanced EACC [3][4][5]. However, radical surgery, including subtotal temporal bone resection and total temporal bone resection, can cause severe complications, such as meningitis, cerebrospinal fluid leak, encephalitis, brain infarction, deafness, and facial palsy [6].
Although certain study groups have reported the results of chemoradiotherapy as a non-surgical treatment, and Shinoyama et al. [7] have reported a better prognosis with chemoradiotherapy than with other modalities, such as surgery with/without external-beam radiotherapy (EBRT), EBRT alone, or proton therapy, the number of studies is still limited.
At our institute we have treated patients with EACC with EBRT and radical surgery and/or concurrent chemotherapy. In the present study, we aimed to retrospectively evaluate the treatments and compare their outcomes.
Patients
We reviewed the irradiation records and extracted the data of patients who received initial definitive treatment including EBRT for EACC between 2003 and 2016. The medical records were reviewed to calculate survival duration and locoregional control (LRC) and to evaluate toxicity after EBRT. In the present study, the EACCs of all enrolled patients were pathologically diagnosed as squamous cell carcinoma, and they were restaged according to the Pittsburgh staging system [8], based on clinical examination, computed tomography (CT), magnetic resonance imaging (MRI), and/or positron emission tomography (PET) at the initial diagnosis.
This study was approved by our institutional review board in February 2017 (Approval No. M2016-272).
Therapy
All patients enrolled in the present study received EBRT, which was planned by the radiation therapy planning system based on CT images, and 3D conformal radiation therapy (3D-CRT) or intensity-modulated radiation therapy (IMRT) was performed. Definitive EBRT without surgery or EBRT before planned surgery (preoperative EBRT) and/or after surgery (postoperative EBRT) were performed.
In cases of preoperative EBRT or definitive EBRT, the primary tumor visualized on CT, MRI and/or PET-CT images was delineated as the gross tumor volume (GTV); the external auditory canal (EAC) including the GTV and surrounding tissues excluding risk organs (brain, brain stem, etc.) were delineated as the clinical target volume (CTV), and the CTV plus a margin of 5 mm was delineated as the planning target volume (PTV). For patients who received postoperative EBRT, the tumor bed as visualized on CT, MRI and/or PET-CT images obtained before surgery and the region of tumor infiltration diagnosed pathologically were delineated as the CTV, and the CTV plus a margin of 5 mm was delineated as the PTV. The lymph node area was not included in the target volume of patients without lymph node metastasis or of patients receiving preoperative EBRT. In cases of definitive EBRT or postoperative EBRT, and patients with lymph node metastasis, the CTV included the lymph node level of metastasis and neighboring lymph nodes. The CTV of the lymph node area plus 5 mm was included in the PTV.
Whether chemotherapy was administered at all and what chemotherapeutic regimen was employed were determined according to the patient's general condition and the treatment policy at that time: while chemotherapy was not routinely used for EACC before 2014, chemotherapy with the combination regimen of docetaxel, cisplatin and 5-fluorouracil (TPF) was introduced for concurrent administration with definitive EBRT for head-and-neck cancer, including EACC, at our hospital in 2014.
As surgery, subtotal temporal bone resection or lateral temporal bone resection with reconstruction was performed, and neck dissection was added for patients who had cervical lymph node metastasis.
Toxicity
Toxicity can occur as a result of surgery, chemotherapy and EBRT. Facial paralysis usually occurs as a sequela of surgery, while myelosuppression and electrolyte imbalance are usually caused by chemotherapy. In the present study, we extracted the late toxicities associated with EBRT and classified them according to the Common Terminology Criteria for Adverse Events version 4.0 [9].
Follow-up and analysis
Following the completion of EBRT, the patients were examined by the head-and-neck surgeon, plastic surgeon, and/or radiation oncologist. The follow-up interval varied among the patients depending upon the condition of each. Recurrence in the head-and-neck region, including the EAC, surrounding tissues, and lymph nodes, was defined as locoregional recurrence. Recurrence was diagnosed by imaging techniques, such as CT, MRI and/or PET-CT, and divided into in-field recurrence that occurred in the PTV and out-of-field recurrence that occurred outside the PTV.
The duration of overall survival (OS) and LRC from the start of treatment was calculated by the Kaplan-Meier method and evaluated according to the clinical factors and treatment factors using the log-rank test. Differences were considered statistically significant for P < 0.05. Statistical analysis was performed with IBM SPSS Statistics.
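For readers who want to reproduce this type of analysis outside SPSS, a minimal sketch using the Python lifelines package is given below; the follow-up times and event indicators are invented for illustration and do not correspond to the study data.

```python
# Minimal sketch of Kaplan-Meier estimation and a log-rank comparison using the
# Python lifelines package (the study used SPSS). Durations (months) and event
# indicators (1 = death) below are invented for illustration only.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

months_tpf  = [18, 24, 30, 15, 20, 28]                              # EBRT + TPF group
died_tpf    = [0, 0, 0, 0, 0, 0]
months_surg = [7, 12, 23, 52, 61, 9, 40, 33, 16, 48, 26, 14, 55]    # surgery + EBRT group
died_surg   = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(months_surg, event_observed=died_surg, label="Surgery + EBRT")
print(kmf.survival_function_)                 # stepwise OS estimate
print("2-year OS:", float(kmf.predict(24)))   # survival probability at 24 months

result = logrank_test(months_tpf, months_surg,
                      event_observed_A=died_tpf, event_observed_B=died_surg)
print("log-rank p-value:", result.p_value)
```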
Patient data
Overall, 23 patients with EACC received EBRT during the specific period; however, 2 patients were excluded because they had been treated with Cyberknife to prevent the recurrence of EACC; the remaining 21 patients were enrolled in the present study. There were 13 women and 8 men, and their age ranged from 39 to 71 years, with a median of 57 years. Among the patients, 95% (20/21) had T3 or T4 EACC, and 29% (6/21) had cervical lymph node metastasis. Eight patients were in Stage III, and the other 13 patients were in Stage IV ( Table 1).
The treatment methods are summarized in Table 2. Surgery (namely, lateral or subtotal temporal bone resection) was performed in 13 patients.
EBRT using 3D-CRT was employed in 19 patients, including the orthogonal beam technique in 14 patients, the rotation beam technique in 3 patients, a three-field technique in 1 patient, and a one-field technique in 1 patient; the remaining 2 patients received IMRT. The energy of the irradiating X-ray beam was 4 MV in 18 patients and 6 MV in 3 patients. The radiation dose for preoperative EBRT was 30 Gy (n = 3) or 40 Gy (n = 3), and the postoperative EBRT dose was 20 Gy (n = 2) or 30 Gy (n = 1) in patients who had received preoperative EBRT and 50 Gy (n = 6) or 60 Gy (n = 1) in patients who had not received preoperative EBRT. Postoperative EBRT was administered in 10 patients on the basis of the pathological findings: parotid gland infiltration in 6 patients, mandibular bone infiltration in 2 patients, and a positive surgical margin in 2 patients. The definitive radiation dose for EBRT in the remaining 8 patients who did not undergo surgery ranged from 40 to 70 Gy (median 60 Gy), including 40 Gy in 1 patient who discontinued EBRT because of tumor progression detected during the EBRT, 60 Gy in 5 patients, and 70 Gy in the remaining 2 patients.
The follow-up duration of all patients ranged from 7 to 158 months (median 23 months), whereas the duration excluding deceased patients ranged from 11 to 158 months (median 52 months).
Outcomes
The 2-year OS and LRC rates of all patients were 62% [95% confidence interval (CI), 40-85%] and 71% (95% CI, 51-91%), respectively (Fig. 1). No clinical factors (age, sex, T-stage, N-stage, or clinical stage) were significantly associated with OS or LRC rates, although OS and LRC rates of younger patients were lower than those of the older patients, and LRC rate decreased with increasing cT-stage (Table 3).
According to the treatment modality, no significant difference was found in OS or LRC rates between patients who received surgery (n = 13) and patients who did not receive surgery (n = 8), between patients who received chemotherapy (n = 12) and patients who did not receive chemotherapy (n = 9), or between patients who received an EBRT dose of >50 Gy (n = 11) and patients who received an EBRT dose of ≤50 Gy (n = 10). Regarding the influence of the chemotherapy regimen used, the OS and LRC rates of patients who received TPF (n = 6) were higher than those of patients who received other chemotherapeutic regimens (n = 6) or no chemotherapy (n = 9), although the differences were not significant (Table 3).
In the patients who received a combination of chemotherapy and EBRT, the 2-year OS and LRC rates of patients who received TPF and EBRT (n = 6) were 100% and 100%, respectively. These results were better than those of patients who received surgery and EBRT with or without chemotherapy [n = 13; the 2-year OS and LRC rates were 62% (95% CI, 35-88%) and 69% (95% CI, 44-94%), respectively], and also better than those of patients who did not receive TPF or surgery (n = 2; the 2-year OS and LRC rates were 0% and 0%, respectively). There was a significant difference in the OS and LRC between the three groups [OS, P < 0.001 (Fig. 2a); LRC, P < 0.001 (Fig. 2b)]; however, the differences in the OS and LRC between the patients who underwent surgery and EBRT and those who underwent TPF and EBRT were not significant (OS, P = 0.2; LRC, P = 0.2). The EBRT dose differed significantly between the above two groups, and there was a borderline difference in the follow-up duration (Table 4).
Recurrences
Local recurrence was found in six patients: two recurrences occurred in-field and the other four out-of-field (two of the latter in the parotid, one at the skull base, and one in the masticator space). One in-field recurrence and three out-of-field recurrences were found in patients who had received surgery, and one in-field and one out-of-field recurrence were seen in patients who did not receive surgery. The two patients with in-field recurrences received chemotherapy, followed by best supportive care, and the patients with out-of-field recurrences received chemotherapy (n = 2), stereotactic radiotherapy (n = 1), or stereotactic radiotherapy and additional surgery (n = 1). However, all patients died between 4.7 and 16 (median, 5.4) months after the detection of recurrence.
Radiation-induced toxicity
All patients who underwent surgery suffered from facial nerve disorder, and three of these patients underwent repeated cosmetic surgery, including eyebrow lift. Moreover, 57% of patients who received chemotherapy suffered from neutropenia and leukocytopenia of Grade 3.
Eight radiation-induced late toxicities were seen in seven patients. Bone or soft tissue necrosis around the ear was noted in three patients, osteonecrosis of the jaw in three, and stenosis of the EAC in two ( Table 5).
All three patients who suffered from bone or soft tissue necrosis had Stage IV EACC. One of them had received combined treatment with 60 Gy of EBRT and TPF, while the other two patients underwent surgery and 50 Gy of postoperative EBRT. One of two patients with Grade 3 soft tissue and bone necrosis had undergone CyberKnife radiotherapy for recurrence following surgery and postoperative EBRT. The remaining patient suffered from Grade 3 soft tissue necrosis after surgery and postoperative EBRT, and underwent skin grafting.
DISCUSSION
In the present study, the 2-year OS rate of all EACC patients who received EBRT was 62% and that of Stage III and IV patients was 71% and 56%, respectively; these rates were similar to, or slightly better than, those reported in prior studies. A meta-analysis showed that the 5-year OS rates of Stage III and IV EACC patients were 75.4% and 35.8%, respectively, and that the 5-year OS rate of patients who did not undergo surgery (EBRT alone, chemotherapy alone, or chemoradiotherapy) was only 29% [6]. A multi-institutional retrospective review in Japan showed that the 5-year disease-free survival rate for T3 EACC patients was 25%, and that the rates for patients who received surgery and EBRT and EBRT alone were 46% and 0%, respectively [4]. Comparing our 2-year OS rate with Ogawa et al.'s 5-year OS rate may be difficult, but long-term outcomes reported by Mazzoni et al. [10] and Leong et al. [11] showed that there was no remarkable change in the survival curve for EACC patients beyond 2 years after treatment.
There was an impressive result in our study: the 2-year OS rate of the eight patients who did not receive surgery was 73% (which was higher than the rate found in past studies) [4,6]. Among patients who did not receive surgery, the prognosis for patients who received a combination therapy of EBRT and concurrent chemotherapy with TPF was better than that of other patients. Only a few studies have reported treatment results of combination therapy of EBRT and concurrent chemotherapy with TPF for EACC patients. Shiga et al. [12] reported the results for nine Stage IV EACC patients who received EBRT and TPF, and the 2-year OS rate was 57%. Shinoyama et al. [7] treated 10 Stages III and IV EACC patients who received EBRT and TPF and reported that the 2-year OS rate was 70%. In the current study, the 2-year OS rate, and also the 2-year LRC rate, for the six patients who received EBRT and TPF was 100%, higher than the rates for 13 patients who received surgery and EBRT with/without chemotherapy involving chemotherapeutic drugs other than TPF. However, the difference was not statistically significant, and it might have been attributable to the significant difference in the EBRT dose and the borderline significant difference in the observation period between the patients who received EBRT and TPF (for whom the medians of the EBRT dose and the observation period were 60 Gy and 18 months, respectively) and those who received EBRT and surgery (for whom the medians were 50 Gy and 61 months, respectively).
Katori et al. [13] have compared induction chemotherapy with TPF followed by EBRT and concurrent chemoradiotherapy with TPF in patients with advanced squamous cell carcinoma of the head and neck and found that with respect to OS, concurrent chemoradiotherapy with TPF was better than induction chemotherapy with TPF followed by EBRT, but in the group who received concurrent chemotherapy with TPF, Grade 3 or 4 mucositis was observed in 79% (40% in the group that received induction TPF followed by EBRT), and Grade 3 or 4 leukocytopenia was observed in 53% (40% in the group who received induction TPF followed by EBRT), although their study included no EACC patients. In our study of EACC, 4 out of 6 (67%) patients who received EBRT and TPF suffered from five late toxicities induced by irradiation. One patient, with Stage III EACC, received concurrent EBRT at 70 Gy and TPF and suffered from Grade 1 EAC stenosis. The second patient, with Stage IV EACC, received EBRT at 60 Gy and TPF and suffered from Grade 2 soft tissue necrosis and Grade 1 EAC stenosis. The remaining two patients with Stage IV EACC who received EBRT at 60 Gy and TPF developed Grade 1 osteoradionecrosis. Although administration of TPF chemotherapy requires experience in delivery of the drugs and provision of sufficient supportive care to avoid hematological toxicity [13], the severe mucositis that often occurs in the irradiation field in patients with other head-and-neck cancers may not occur in patients with EACC, and radiation-induced toxicity in patients with EACC is considered to be within acceptable limits.
Although brain injury has been seen with radical radiotherapy doses of 66 Gy in 2 Gy per fraction, Pemberton et al. [14] have reported that doses of ≥66 Gy for primary radical radiotherapy for middle ear cancer and EACC are deemed necessary because lower doses are associated with increased recurrence. In the present study, EBRT at ≤50 Gy was administered for patients following radical surgery. Our findings suggest that a dose of 60 Gy or more may be necessary, despite surgery, to improve the OS and LRC rates, but attention needs to be paid to the EBRT field. We observed out-of-field recurrence in 23% (3/13) of patients who received EBRT following radical surgery, of whom 2 patients had negative surgical margins, even though Yin et al. [15] have reported in a multi-institutional study that a free resection margin improves the survival rate. It is difficult to delineate the target after surgery, despite the availability of pre-surgical images of tumor spread, the surgical record, and the pathological report. A wide field that includes surrounding tissues while avoiding the organs at risk may be necessary with use of the IMRT technique, which has obvious benefits in target volume conformation. Because the present study was retrospective and the treatment policy changed over time, there could have been a bias in patient selection. Moreover, the observation period of patients who received EBRT and TPF was shorter than that of the patients who underwent EBRT and surgery, and further long-term follow-up may be necessary for a reasonable comparison between the two treatment strategies.
In conclusion, our data suggest that the combination therapy of EBRT and surgery and/or chemotherapy may be the most effective treatment options for advanced EACC, and EBRT with concurrent chemotherapy with TPF is potentially the most acceptable. However, further investigation is needed to determine the optimal dose and the appropriate field for EBRT in patients with EACC. Furthermore, because of the limited number of patients with this cancer, a multicenter study would be desirable.
Does Virulence Assessment of Vibrio anguillarum Using Sea Bass (Dicentrarchus labrax) Larvae Correspond with Genotypic and Phenotypic Characterization?
Background Vibriosis is one of the most ubiquitous fish diseases caused by bacteria belonging to the genus Vibrio such as Vibrio (Listonella) anguillarum. Despite a lot of research efforts, the virulence factors and mechanism of V. anguillarum are still insufficiently known, in part because of the lack of standardized virulence assays. Methodology/Principal Findings We investigated and compared the virulence of 15 V. anguillarum strains obtained from different hosts or non-host niches using a standardized gnotobiotic bioassay with European sea bass (Dicentrarchus labrax L.) larvae as model hosts. In addition, to assess potential relationships between virulence and genotypic and phenotypic characteristics, the strains were characterized by random amplified polymorphic DNA (RAPD) and repetitive extragenic palindromic PCR (rep-PCR) analyses, as well as by phenotypic analyses using Biolog’s Phenotype MicroArray™ technology and some virulence factor assays. Conclusions/Significance Virulence testing revealed ten virulent and five avirulent strains. While some relation could be established between serotype, genotype and phenotype, no relation was found between virulence and genotypic or phenotypic characteristics, illustrating the complexity of V. anguillarum virulence. Moreover, the standardized gnotobiotic system used in this study has proven its strength as a model to assess and compare the virulence of different V. anguillarum strains in vivo. In this way, the bioassay contributes to the study of mechanisms underlying virulence in V. anguillarum.
Introduction
Over the last five decades, the world aquaculture industry has grown considerably and according to the Food and Agriculture Organization (FAO), aquaculture represents the fastest-growing animal-food-producing sector [1]. This development has been accompanied by a transition to more intensive farming methods supporting an increased profitability. A side-effect of these intensified production systems is an increased threat by diseases caused by a variety of microorganisms, including bacteria, fungi, viruses and protozoa. Despite all research activity that has been carried out in order to develop rapid diagnostic tests and effective disease prevention strategies [2], large-scale disease outbreaks are still causing considerable economical losses [1]. Vibriosis, caused by Vibrio (Listonella) anguillarum, has been reported as one of the most important infectious diseases affecting many economically important fish, bivalves and crustaceans [3][4][5]. Symptoms are red spots on the ventral and lateral areas of the fish and swollen and dark skin lesions that can ulcerate and bleed. Furthermore, the eyes are often infected, initially resulting in opacity, and later on in ulceration and exophthalmia. Internally, the intestines may be distended and filled with a clear, viscous liquid. Outbreaks of this disease often result in high mortality rates of infected fish. Moreover, in acute epizootics, infection spreads so rapidly that the majority of infected fish die without showing any clinical signs [5][6][7]. Because of this high morbidity and mortality rate, the disease is responsible for severe economic losses in both larviculture and aquaculture worldwide. Disease outbreaks can be influenced by water quality and temperature, the amount of stress imposed upon the fish, and the strain and virulence of the bacteria. Out of 23 identified O-serotypes (O1-O23), only serotypes O1 and O2, and to a lesser extent serotype O3, have been linked to vibriosis in fish [8]. The other V. anguillarum serotypes represent environmental isolates from sediment, plankton or seawater that are mainly non-pathogenic. Over the last couple of years, knowledge of the virulence of this bacterium has been largely increased by the use of molecular and biochemical analysis approaches [9,10]. In general, virulence factors in V. anguillarum have been classified into those that are necessary for chemotaxis and motility, for adhesion and invasion, including proteases [11,12], hemolysins [13][14][15][16], lipopolysaccharides (LPS) [17][18][19], and those that are required for bacterial proliferation and persistence. Regarding the latter, it is for example known that siderophore-mediated iron-sequestering systems, enabling growth in iron-limiting conditions, contribute significantly to the virulence of this pathogen. Two different siderophore-mediated systems have been described in V. anguillarum strains. In most pathogenic strains of serotype O1, the system is mediated by the 65 kb virulence plasmid pJM1, harbouring genes for the biosynthesis of the siderophore anguibactin and its cognate transport system [20]. In contrast, serotype O2 strains and some plasmidless serotype O1 strains produce a chromosomally encoded siderophore vanchrobactin [21][22][23][24]. Nevertheless, despite these studies, the exact role and contribution of many of these virulence factors in full V. anguillarum virulence remain still largely unknown, e.g. by the lack of epidemiological studies and standardized virulence assays.
Recently, a gnotobiotic sea bass (Dicentrarchus labrax) larvae model system has been developed and optimized for studying host-pathogen interactions and virulence assessment of opportunistic pathogens such as Aeromonas hydrophila and V. anguillarum [25]. The European sea bass was chosen as host because of its high economic importance for larviculture and aquaculture [26]. Briefly, in this system, gnotobiotic larvae are challenged via immersion in inoculated water or encapsulated Artemia sp., followed by an assessment of the survival of the sea bass larvae. In this paper, we investigated and compared the virulence of 15 V. anguillarum strains obtained from different hosts or non-host niches using this standardized gnotobiotic bioassay. Here, the assay has been used for the first time to evaluate the virulence of a comprehensive set of different V. anguillarum strains. In addition, to assess potential relationships between virulence and genotypic and phenotypic characteristics, the strains were characterized by two complementary PCR-based genotyping methods, including repetitive extragenic palindromic PCR (rep-PCR) and random amplified polymorphic DNA (RAPD) PCR, and phenotypic analysis using Biolog's Phenotype MicroArray™ technology [27].
Ethics Statement
All necessary permits were obtained for the described virulence studies. The experiment was approved by the ethical committee of Ghent University (no. EC2005/95).
Bacterial Strains
Fifteen V. anguillarum strains, representing the major pathogenic serotypes O1, O2 and O3, were used in this study (Table 1). Fourteen strains have been isolated from different species of fish, including sea bass, rainbow trout (Oncorhynchus mykiss) and cod (Gadus morhua L.), while one isolate was recovered from sediment. Furthermore, the isolates originated from six geographical regions, including Denmark, Norway, Finland, Greece, France and UK. In previous studies, most isolates have been assessed for the presence or absence of the pJM1 virulence plasmid (Table 1) [8,[28][29][30][31][32][33]. Nevertheless, as bacterial strains may lose plasmids over time, presence of the virulence plasmid has been confirmed in this study by isolation and sequencing. In addition, bacterial identities were confirmed as V. anguillarum by 16S rRNA gene sequencing, followed by BLAST analysis against GenBank (see phylogenetic analysis). Strains were stored in trypticase soy broth (TSB; Oxoid, Erembodegem, Belgium) containing 1% NaCl and 15% (v/v) glycerol at −80°C.
Virulence Testing
The 15 selected isolates were made rifampicin (Rif) resistant by inoculating the strains at a final density of 10⁷ CFU mL⁻¹ in marine broth with addition of increasing concentrations (1-100 mg L⁻¹) of Rif. After 24 h of culturing on a shaker at 28°C, 1% of the old culture was used as inoculum for the next culture until resistance to 100 mg L⁻¹ Rif by natural mutation was reached. Next, the 15 isolates were subjected to the virulence assay previously developed by Dierckens et al. [25], consisting of a standardized gnotobiotic model system with axenic European sea bass (D. labrax L.) larvae as hosts. To this end, D. labrax eggs were obtained from natural spawning at the hatchery of Ecloserie Marine de Gravelines (France). Following egg disinfection [25], germ-free eggs were allowed to hatch for 60 h. Subsequently, 12 freshly hatched sea bass larvae were aseptically transferred one by one into a transparent sterile screw-cap vial with 10 mL filtered (0.2 µm), autoclaved sea water (FASW) containing 10 mg L⁻¹ Rif and sterile fish homogenate equal to three dead sea bass larvae, providing some nutrients to the bacteria in a gnotobiotic environment. Vials were rotated at 4 rpm with an axis tangential to the axis of the vials, providing aeration and avoiding sedimentation while awaiting bacterial inoculation. Five days after hatching (DAH 5), sea bass larvae were counted for the first time. In addition, bacterial strains were grown in 10% marine broth containing NaCl (resulting in the same salinity as the water in the sea bass larvae experiment, i.e. 36 g L⁻¹), and incubated on a horizontal shaker at 120 rpm at 16±0.5°C for two days. On DAH 7, the gnotobiotic sea bass larvae were challenged with a suspension of approximately 10⁵ CFU mL⁻¹ V. anguillarum (as determined by a spectrophotometer at 550 nm). On DAH 9, 11, and 13 (i.e. 2-6 days post-exposure), survival of the sea bass larvae was monitored by microscopic analysis, i.e. by counting the number of living sea bass larvae in relation to the situation at DAH 7. For each strain, 10 replicates were evaluated. Non-challenged larvae, kept under similar conditions, were used as a control. In order to verify the gnotobiotic status of the assay, axenity was tested on DAH 3 by plating fish larvae homogenates and water samples as described by Dierckens et al. [25]. No bacteria could be detected after 72 h of incubation. In addition, after the larval survival experiment, homogenized fish larvae were plated on 10% marine agar (MA; Difco Laboratories, Detroit, USA) to check for microbial contamination as well as to verify the identity of the inoculated strains by DNA analysis (RAPD fingerprinting; see further). Throughout the entire experiment, eggs and (challenged) larvae were kept at a salinity of 36 g L⁻¹ in a temperature-controlled room at 16±0.5°C in constant dim light (10 candela steradian m⁻²). Statistical analysis of the larval survival data was performed with R v2.12.1. Survival was reported as mean values ± standard error of the mean (SEM). Data were tested for normality and subjected to non-parametric tests. Kruskal-Wallis one-way analysis was used to compare the survival of the sea bass larvae. The Bonferroni test was used for multiple comparisons among means in case of non-homogeneity. Significance was accepted at p < 0.05.
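A roughly equivalent non-parametric analysis can be sketched in Python (the study itself used R); pairwise Mann-Whitney U tests against a Bonferroni-adjusted threshold are used here as one common way to implement the corrected multiple comparisons, and the survival values are placeholders rather than study data.

```python
# Sketch of the non-parametric comparison in Python: a Kruskal-Wallis test across
# groups, then pairwise Mann-Whitney U tests with a Bonferroni-adjusted threshold.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

survival = {                      # % larval survival per replicate vial (illustrative)
    "control": [92, 88, 90, 85, 93, 89, 91, 87, 90, 86],
    "VaNT1":   [95, 90, 94, 91, 89, 96, 92, 93, 90, 94],
    "VIB15":   [40, 25, 35, 30, 45, 28, 33, 38, 20, 36],
}

H, p = kruskal(*survival.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

pairs = list(combinations(survival, 2))
alpha_corrected = 0.05 / len(pairs)            # Bonferroni-adjusted significance level
for a, b in pairs:
    _, p_pair = mannwhitneyu(survival[a], survival[b], alternative="two-sided")
    print(f"{a} vs {b}: p = {p_pair:.4f}, significant = {p_pair < alpha_corrected}")
```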
Phylogenetic Analysis
Genomic DNA was extracted following overnight incubation on marine agar at 28°C, using the phenol/chloroform extraction method as described by Lievens et al. [34]. Subsequently, the 16S ribosomal RNA (rRNA) gene, as well as the genes encoding an N-acetylmuramoyl-L-alanine amidase involved in the separation of daughter cells after cell division (amiB) [35] and an extracellular metalloprotease involved in virulence in V. anguillarum (empA) [11], were partially amplified with the primers 63F and 1492R, ami8 and ami417, and empAF and empAR, respectively [36][37][38][39]. PCR amplification was performed in a total volume of 20 µl containing 0.3 µM of each primer, 0.3 mM of each deoxynucleoside triphosphate (Invitrogen, Merelbeke, Belgium), 2.0 U Taq DNA polymerase (Bioké, Leiden, The Netherlands), 10X ThermoPol Reaction Buffer (Bioké), and 1 ng genomic DNA (as measured by a Nanodrop spectrophotometer). Before amplification, DNA samples were denatured at 94°C for 2 min. Subsequently, 35 cycles of the following steps were run: 45 s at 94°C, 45 s at 59°C (16S rRNA gene, empA) or 54°C (amiB), and 45 s at 72°C, followed by a final extension step at 72°C for 10 min. Sequencing of purified PCR products was performed using the forward primer used for DNA amplification for the empA and amiB amplicons and using the forward and reverse primers for the 16S rRNA gene amplicons. For the latter, forward and reverse sequences were individually trimmed for quality based on the obtained electropherogram, using a Phred score of >20 (i.e. an error probability below 0.01) as a cut-off value. Paired sequences were then aligned using the ClustalW algorithm within the MEGA5 software package [40], followed by manual sequence editing based on the paired electropherograms, leading to an accurate consensus sequence. Subsequently, a phylogenetic analysis was performed based on the in silico concatenated nucleotide sequences comprising 16S rRNA, amiB and empA. To this end, following multiple sequence alignment performed using ClustalW, a Maximum Likelihood tree was constructed using the MEGA5 software. The sequences obtained in this study were deposited in GenBank under the accession numbers KF150774 to KF150818.
DNA Fingerprinting
DNA extracted from all isolates studied was subjected to two fingerprinting techniques, RAPD and rep-PCR. With regard to the RAPD analysis, 20 decamer oligonucleotides, randomly chosen from the Operon primer kits (Operon Technologies Inc, Alameda, CA, USA), were first screened on a subset of five bacterial strains from Table 1 to select the most discriminative RAPD primers. Two primers, OPV-12 (5′-ACCCCCCACT-3′) and OPN-08 (5′-ACCTCAGCTC-3′), resulted in a clear and discriminating fingerprint, and these were selected for further experiments. Likewise, for the rep-PCR analysis, two primers and one primer set were first tested on the same set of five isolates, including the BOXA1R primer (5′-CTACGGCAAGGCGACGCTGACG-3′), the (GTG)5 primer (5′-GTGGTGGTGGTGGTG-3′) and the primer pair REP1R-I (5′-IIIICGICGICGICATCIGGC-3′) and REP2-I (5′-ICGICTTATCIGGCCTAC-3′) [41]. As the BOXA1R and (GTG)5 primers generated only very weak bands or an insufficient number of fragments, only the REP1R-I and REP2-I pair, yielding a clear and discriminating fingerprint, was maintained for analysis of the whole collection. All amplifications were performed using a Bio-Rad T100 thermal cycler in a reaction volume of 20 µl containing 0.5 µM of each primer, 0.15 mM of each deoxynucleoside triphosphate (Invitrogen), 2.0 U Taq DNA polymerase (Bioké), 10X ThermoPol Reaction Buffer (Bioké), and 1 ng genomic DNA (as measured by a Nanodrop spectrophotometer). The reaction mixture was initially denatured at 94°C for 2 min, followed by 35 cycles of 1 min at 94°C, 1 min at 35°C (RAPD) or 40°C (rep-PCR), and 2 (RAPD) or 4 min (rep-PCR) at 72°C, with a final extension step at 72°C for 10 min. The obtained PCR products were separated by loading 7 µl of the reaction volume on a 1% (w/v) agarose gel followed by 120 min of electrophoresis at 4 V/cm in 1× Tris/acetate-EDTA (TAE) buffer. Gels were stained with ethidium bromide and visualized with UV light. A 1 kb DNA ladder (Smartladder; Eurogentec, Seraing, Belgium) was used as molecular weight marker. Gel images were acquired with the BioChemi System (UVP, Upland, CA, USA). The obtained images were processed using GelCompar software, version 6.6.4 (Applied Maths, Sint-Martens-Latem, Belgium). Following normalization and background subtraction, fingerprint similarities based on the combined dataset were calculated using the Pearson correlation coefficient. Cluster analysis was performed by the unweighted pair group method with arithmetic averages (UPGMA) [42]. All reactions were performed at least twice to check reproducibility and yielded similar results. In all analyses, sterile distilled water was used as a negative control.
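The clustering step can be sketched as follows, assuming the normalised densitometric profiles have already been exported from GelCompar as numeric vectors; the profiles and strain labels shown are hypothetical.

```python
# Sketch of the clustering step: Pearson similarity between fingerprint profiles,
# converted to distances and clustered with UPGMA (average linkage).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

profiles = np.array([
    [0.1, 0.8, 0.2, 0.9, 0.1, 0.7],   # strain A (densitometric profile)
    [0.1, 0.7, 0.3, 0.9, 0.2, 0.6],   # strain B
    [0.9, 0.1, 0.8, 0.1, 0.9, 0.2],   # strain C
])
labels = ["A", "B", "C"]

similarity = np.corrcoef(profiles)                     # Pearson correlation matrix
distance = 1.0 - similarity                            # turn similarity into dissimilarity
condensed = distance[np.triu_indices(len(labels), k=1)]

tree = linkage(condensed, method="average")            # UPGMA = average linkage
dn = dendrogram(tree, labels=labels, no_plot=True)     # dendrogram structure, no plotting
print(tree)
```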
Phenotyping
For each strain, carbon source oxidation was determined by Phenotype MicroArray™ (PM) technology (Biolog, Hayward, CA) using PM plate 1. Using this technology, kinetic profiles are generated by continuously monitoring the metabolic activity during incubation [27]. The inoculum was prepared by growing strains for 24 hours at 25°C on trypticase soy agar supplemented with 1% NaCl. Cells were suspended in 10 ml inoculation fluid (IF-0 supplemented with 1% (w/v) NaCl, Biolog) until an optical density (600 nm) of 0.38 (±0.02) was reached, using a SPECTRAmax PLUS384 UV-vis spectrophotometer (Molecular Devices). The inoculum was diluted (1:5) in inoculation fluid containing dye mix D (Biolog). Each well was inoculated with 100 µl. Plates were incubated in the OmniLog® automated incubator-reader of Biolog for eight days at 25°C and were read every 15 minutes. The resulting data were analyzed using OmniLog® PM Kinetic Analysis software (version 1.6) according to the manufacturer's instructions. Comparison of the isolates was performed by numerical analysis using the Pearson product-moment correlation coefficient and hierarchical clustering with UPGMA. The clustering results were validated using cophenetic correlation [43]. A selection of reactions was performed at least twice to check reproducibility and yielded similar results.
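The area-under-the-curve summary on which such kinetic analyses are based can be approximated from exported readings as in the brief sketch below; the 15-minute reading interval matches the protocol above, but the signal itself is simulated.

```python
# Sketch: summarising one kinetic curve by its area under the curve (AUC),
# the per-well statistic such analyses are based on. The signal is simulated.
import numpy as np

minutes = np.arange(0, 8 * 24 * 60, 15)                  # readings every 15 min for 8 days
signal = 200 / (1 + np.exp(-(minutes - 3000) / 600))     # made-up respiration curve

auc = np.trapz(signal, minutes)                          # trapezoidal area under the curve
print(f"AUC for this well: {auc:.0f} (arbitrary units x min)")
```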
Additional phenotypic assays associated with virulence have been performed. All enzymatic and hemolytic assays were done according to Natrah et al. [44]. Briefly, overnight cultures of each bacterial strain were diluted to an OD600 of 0.5. For each assay, 5 µl of diluted culture was spotted in the middle of the test plate. MA plates supplemented with 1% Tween 80 (Sigma-Aldrich) or 1% egg yolk emulsion (Sigma-Aldrich) were used for the lipase and phospholipase assays, respectively. The development of opalescent zones around the colonies was observed and the diameter of the zones was measured after 2-4 days of incubation at 28°C. The caseinase assay plates were prepared by mixing double-strength MA with a 4% skim milk powder suspension (Oxoid), sterilized separately at 121°C for 5 min. Clearing zones surrounding the bacterial colonies were measured after 2 days of incubation. Gelatinase assay plates were prepared by mixing 0.5% gelatin (Sigma-Aldrich) into MA. After 7 days of incubation, saturated ammonium sulfate (80%) in distilled water was poured over the plates and, after 2 min, the diameters of the clearing zones around the colonies were measured. Hemolytic assay plates were prepared by supplementing MA with 5% defibrinated sheep blood (Oxoid). Clearing zones were measured after 2 days of incubation. All assays were done at least in triplicate.
Virulence Testing
In order to assess the virulence of the 15 V. anguillarum isolates, gnotobiotic sea bass larvae were challenged with V. anguillarum, whereupon larval survival was monitored by microscopical analysis at DAH 9, 11 and 13 (Fig. 1; Table 2). After DAH 13 the sea bass larvae in the control treatment died of starvation, leading us to terminate the experiment at DAH 13 (six days after inoculation). Survival on DAH 13 of the sea bass larvae challenged with the serotype O1 strains VaNT1, VIB93, S3 4/9 and 87-9-116 (ranging from 93±3% to 78±8% survival) was not significantly different from the control group (89±4%). Treatment with the O2 strain VIB12 (67±5%) was significantly different from the axenic control but not significantly different from inoculations with S3 4/9 and 87-9-116. As such, based on these observations these five strains were classified as avirulent strains, at least in the gnotobiotic system used here. The other ten strains caused significantly higher mortality compared with the avirulent strains as well as the axenic control and were classified as virulent strains. Strains VIB15 and 87-9-117, two O1 serotype strains containing the virulence plasmid, showed high virulence towards sea bass larvae (33±10% and 13±8% survival, respectively). Remarkably, although strain 87-9-117 was originally isolated from rainbow trout, it also caused high mortality in sea bass larvae, indicating that there is no stringent host-specificity for vibriosis. Two other O1 serotype strains (43 and JLL237), although lacking the virulence plasmid, were also virulent (22±5% and 4±2% survival, respectively). Strains VIB103, JLL143, and VIB160, belonging to serotype O2, also caused high mortality (29±3%, 23±4%, and 6±2% survival, respectively). Although strain VIB160 was not isolated from fish but from sediment, it showed high virulence towards sea bass larvae. In addition, both tested O3 strains (VIB113 and CNEVA NB11008) appeared to be highly virulent (3±3% and 8±4% survival, respectively). An independent replication of this gnotobiotic sea bass challenge experiment confirmed the results shown in Fig. 1. For each experiment, the identity of the inoculated strains was confirmed by isolating the bacteria again at the end of the experiment (DAH 13) followed by RAPD fingerprinting, confirming that mortality was caused by the tested V. anguillarum strains and not by non-added strains.
DNA Analysis
To assess a potential relation between virulence and genotypic background, all strains were subjected to both a phylogenetic analysis and DNA fingerprinting. First, a phylogenetic tree was constructed based on the in silico concatenated sequences of the 16S rRNA gene (1365 bp), amiB (365 bp) and empA (371 bp) sequences (Fig. 2). However, due to the high degree of conservation of these genes (98-100% similarity), not all strains could be discriminated from each other. In particular, five out of the eight O1 strains (87-9-116, 87-9-117, VaNT1, VIB93 and VIB15) showed identical 16S rRNA, amiB and empA sequences, as did the two O3 strains (VIB113, CNEVA NB11008) (Fig. 2).
Next, two RAPD analyses (primers OPV-12 and OPN-08) and one rep-PCR analysis (primer pair REP1R-I and REP2-I) were performed. RAPD and rep-PCR patterns were obtained yielding 5 to 14 distinct bands. The UPGMA dendrogram derived from Pearson correlation based on the combined datasets showed high congruence with the phylogenetic tree ( Fig. 3; Table 2). However, the discriminative power displayed was considerably higher with the fingerprinting methods. Only strain VIB12, belonging to serotype O2, clustered differently in both trees. Based on a similarity level of 50%, 5 distinct clusters could be identified in the UPGMA dendrogram. Cluster I only contained O1 serotype strains (5 out of the eight studied), while cluster II was comprised of the two O3 strains (fingerprints sharing 100% similarity), two O2 serotype strains (VIB12 and VIB160) and one O1 strain (S3 4/9) as well. Cluster V is represented by two O2 strains (HI610 and VIB103), supplemented with one O1 strain (43). Isolates JLL237 (O1 serotype) and JLL143 (O2 serotype) landed in a separate cluster, cluster III and IV, respectively (Fig. 3). Although there is some correspondence between genetic clustering and serotype, our results do not suggest a correlation with the results obtained in our virulence assay, as both the avirulent and virulent strains appear scattered throughout the dendrogram.
Next, the V. anguillarum strains were clustered based on the ability to utilize different carbon sources. As can be seen from the UPGMA dendrogram constructed from the PM1 profiles (area under the curve), clear differences can be observed between the studied V. anguillarum strains ( Fig. 4; Table 2). Based on 90% similarity, five clusters can be identified, of which cluster A perfectly matched cluster I obtained by genotyping. The two O3 strains, CNEVA NB11008 and VIB113, which clustered together by genotyping (cluster II), were separated in this analysis (cluster B and D, respectively). Strains HI610 (O2 serotype) and 43 (O1 serotype) clustered together in cluster C, which is in agreement with the genotyping results (clusters V). Cluster B contained two O1 strains, JLL237 and S3 4/9, two O2 strains, VIB103 and VIB160, and one O3 strain, CNEVA NB11008. Cluster C contained strains JLL143 (O2 serotype) and VIB113 (O3 serotype). As previously mentioned, the O2 strain VIB12 showed very low metabolic activity and therefore clustered separately. Similar to the genotypic analysis, no correlation could be observed between virulence and the phenotypic analysis.
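A minimal sketch of the Phenotype MicroArray post-processing implied by Table S1 is given below: the area under each kinetic curve is computed per carbon source and flagged against the 5000/8000 thresholds. The kinetic curves and units are hypothetical; real OmniLog exports would be used in practice.

```python
# Sketch of the area-under-the-curve summary used for the PM1 carbon source profiles.
import numpy as np

time_h = np.arange(0, 48, 0.25)                        # hypothetical 48 h run, readings every 15 min
def kinetic(plateau):                                  # simple logistic-like stand-in curve
    return plateau / (1 + np.exp(-(time_h - 20) / 3))

wells = {"D-glucose": kinetic(300), "maltose": kinetic(40), "maltotriose": kinetic(35)}

for carbon, signal in wells.items():
    auc = np.trapz(signal, time_h)                     # area under the curve (signal units x hours)
    flag = "low" if auc < 5000 else "high" if auc > 8000 else "intermediate"
    print(f"{carbon:12s} AUC = {auc:8.0f} ({flag})")
```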
In addition, the production of extracellular enzymes, which are possibly involved in virulence and invasion of host tissue [44], of the 15 V. anguillarum strains was determined on agar plates containing various substrates (Table 2). Strain VIB12 lacked both caseinase and hemolytic activity, which may explain its avirulent behavior. Remarkably, although strain VIB15 was identified as a virulent strain towards sea bass larvae, it was the only strain which did not contain lipase activity. Likewise, two virulent O2 strains, VIB103 and HI610, were deficient in phospholipase activity. Finally, no hemolytic activity was detected in the avirulent strain 87-9-116, and the virulent strains 43 and CNEVA NB11008, belonging to serotype O1, O2 and O3, respectively (data not shown).
Discussion
In this study, a set of 15 V. anguillarum strains from different origins (sea bass, trout and cod, or sediment) were subjected to a previously designed standardized virulence assay using axenic European sea bass larvae as hosts [25]. Here, the utility of the method for assessing and comparing the virulence of different V. anguillarum strains was demonstrated on a larger scale for the first time. Out of the 15 tested strains, ten isolates were categorized as virulent, whereas five isolates were found to be avirulent. In general, our results were in line with other virulence characterization studies of V. anguillarum. For example, strains VIB15, VIB103, 87-9-117, JLL143, JLL237 and CNEVA NB11008, which were found to be virulent in other studies ([28,32,33], M. Halberg Larsen, unpublished data), were also virulent in our study. Additionally, strains VIB93, 87-9-116, VaNT1 and S3 4/9, which were avirulent in earlier studies ([8,28,30,45], M. Halberg Larsen, unpublished data), were avirulent here as well. On the other hand, the virulence characteristics of strains VIB160 and VIB113, which were both virulent in our study, were in contrast to previous studies [28]. This discrepancy may be explained by a different host specificity combined with the use of a different infection model system, i.e. intraperitoneal injection of Atlantic salmon versus challenge of sea bass larvae by immersion [46,47]. Additionally, although strain VIB12 was originally isolated from a diseased sea bass, it was found to be avirulent in our screening system. This could potentially be explained by the developmental stage of the fish or by co-infections facilitating entry of V. anguillarum in the adult fish.
Although the virulence plasmid pJM1 is generally regarded as an important virulence factor in V. anguillarum, no strict relation between the presence of the plasmid and virulence was detected in our study. For example, serotype O1 strains VaNT1 and VIB93, both containing the virulence plasmid [8,28], were found to be avirulent in our study. It is well known that bacteria can be deprived of their plasmids during storage or passage of the strains [48]. Therefore, the presence or absence of pJM1 was verified for all our strains based on plasmid isolation and sequencing, confirming the strain characteristics. The avirulent character of these strains towards sea bass larvae, both originally isolated from rainbow trout, may be explained by a stringent host specificity, which may differ from strain to strain (see earlier). In addition, the lack of virulence of VaNT1 may be explained by its ''rough'' colony morphology [8]. It has been recognized that rough strains, which lack the lipopolysaccharide O-antigen, are more susceptible to complement-mediated killing, resulting in an avirulent phenotype [17,49]. In contrast, two other O1 serotype strains that contain the virulence plasmid (VIB15 and 87-9-117) did result in a high mortality of the sea bass larvae, and, remarkably, strains lacking the virulence plasmid were also found to cause high mortality in our study system (43 and JLL237). However, as plasmidless O1 strains may have a chromosome-encoded iron sequestering system instead of the pJM1 plasmid-encoded iron uptake system, this function may have remained intact, contributing to their virulence [50,51]. These data strongly suggest that the presence of the virulence plasmid is not crucial for full virulence towards sea bass larvae.
In order to assess potential relationships between virulence and genotypic and phenotypic characteristics, the strains were subjected to both genotypic and phenotypic characterization. In addition to sequence analysis of a number of housekeeping genes, strains were genotyped using RAPD and rep-PCR, both previously used successfully for epidemiological studies of fish pathogens [52][53][54][55]. The clustering based on the genetic fingerprints showed high similarity with the phylogenetic clustering, but had a much higher discriminatory power. However, whereas some correspondence between genotype and serotype could be observed, no relation could be found with virulence. In addition, similar to Vandenberghe et al. [56], a high phenotypic heterogeneity within the V. anguillarum isolates was observed. Nevertheless, again no correlation with virulence could be made. Despite some exceptions, there was in general a good agreement between the phenotypic and genotypic clustering. One exception involves the two O3 strains VIB113 and CNEVA NB11008, which grouped in different clusters based on the phenotypic characterization because of a few differences in carbon utilization. Strain VIB113, for example, was unable to metabolize maltose and maltotriose, in contrast to strain CNEVA NB11008. Furthermore, the distinct ecological niches from which these two strains were isolated, i.e. sea bass in France (CNEVA NB11008) versus rainbow trout in Denmark (VIB113), might explain the differences in carbon source utilization patterns. Keymer et al. [57] also observed phenotypic diversity between coastal Vibrio cholerae strains isolated from different environments. An explanation for the low metabolic activity of VIB12 could not be found. This strain did not show any differences in growth characteristics in TSB + 1% NaCl compared to the other V. anguillarum strains using the indirect Rapid Automated Bacterial Impedance Technique (RABIT) from Don Whitley Scientific (Shipley, UK) (data not shown), demonstrating similar growth kinetics among the different isolates.
In addition to the OmniLog phenotyping, the production of extracellular enzymes potentially involved in virulence was determined for each isolate. Indeed, the avirulent strain VIB12 lacked both caseinase and hemolytic activity. However, no clear relation between enzymatic activity and virulence could be observed in this study. For example, although phospholipase activity has previously been described as an important virulence factor for V. vulnificus, two virulent V. anguillarum strains, VIB103 and HI610, lacked phospholipase activity [58]. In addition, although strain VIB15 was identified as a virulent strain towards sea bass larvae, it was the only strain lacking lipase activity. Finally, no hemolytic activity was detected in the avirulent strain 87-9-116 and in the virulent strains 43 and CNEVA NB11008, belonging to serotype O1, O2 and O3, respectively.
Altogether, the genetic and phenotypic properties of the 15 V. anguillarum isolates assessed in this study could not be correlated with their virulence towards sea bass larvae. This illustrates the complexity of the virulence mechanisms in V. anguillarum and suggests that virulence in V. anguillarum is highly multifactorial and cannot be assigned to one or a few crucial virulence factors. It is clear that further research is necessary to elucidate the underlying mechanisms of virulence differences in V. anguillarum. Using standardized virulence assays such as the one used in this study, together with whole genome sequencing [59], sequencing of whole transcriptomes [60] or epigenetics research [61], we should be able to increase our understanding of V. anguillarum virulence. However, despite the advantages of a gnotobiotic screening method, it must be acknowledged that real-life conditions will always be far more complex and that findings made under gnotobiotic conditions will need to be validated.
Supporting Information
Table S1 Results of the OmniLog PM kinetic analysis of 15 selected Vibrio anguillarum strains grown on a PM1 Phenotype MicroArray plate. For each strain and each carbon source, the area under the curve is presented. Values below 5000 and above 8000 are indicated in red and green, respectively. (XLS)
Traumatic Aspects of Maoists’ Insurgency in Abhi Subedi’s Dreams of Mayadevi
This paper explores the experience and perception of traumatized people who have lost their near and dear ones in the Maoists' insurgency. These people, whether depicted in literary works or in newspapers, have their testimony to share with others. Dreams of Mayadevi, a play written by Abhi Subedi, is read here as a testimony of the war victims who were badly crippled during the war. Mayadevi, the protagonist of the play, who has lost her husband in an Indian war and her son Siddhartha in the insurgency, and Salleri Sahinla, who has lost all his family members, are the victims who bear this painful testimony. To analyze the play, trauma theory has been applied. Theoretical parameters of well-known theorists such as Caruth, LaCapra and Kirmayer are discussed to bring out the traumatic facts of the insurgency through traumatized characters who represent all the victims of the war.
Introduction: Remembering and Mesmerizing
The tension and stress caused by organized and prolonged armed conflict and its aftermath bring enormous psychological and social burdens on the individuals who experience them. For the last three decades, Nepal has witnessed suffering and life-threatening experiences that caused individuals to go through traumatic reactions, termed 'post-traumatic stress disorder' and 'collective trauma'. These unhealed traumatic experiences may prolong the ongoing cycles of violence and further thwart peace-building efforts. The ten years of civil war, no doubt, claimed the lives of many innocent people who had no significant part in the fighting; at the same time, it has been fertile ground for putting pen to paper as testimony to share traumatic experience. Abhi Subedi, a prolific Nepali writer who has penned many literary works on socio-political issues, has taken the consequences of the ten years of civil war as a subject for his writing. He wrote Dreams of Mayadevi about a village badly devastated by the Maoists' insurgency and, in doing so, brought back the memories of the ten years of Maoist insurgency that took the lives of more than seventeen thousand innocent people. Mayadevi and Salleri Sahinla are the two main characters who carry the traumatic testimony of the horrific war, as they have lost their near and dear ones in the insurgency. To illuminate their painful testimony, this article applies some fundamental aspects of trauma theory propagated by Cathy Caruth, Dominick LaCapra and others to analyze the play, which shows how barbaric and cruel the civil war was to innocent villagers who were sandwiched between two fighting forces but are still hoping for peace and stability in the country.
Theoretical Discussion: Trauma and Its Various Aspects
The initial impetus of trauma as a field of study came in the late nineteenth century, owing to the large number of surviving casualties of wars. Initially, it was medical trauma; bodily trauma was viewed as essentially a wound or injury affecting body tissues or structures. Later, in the mid-twentieth century, psychological trauma came to be understood as "an emotional shock, producing a disturbance, more or less enduring of mental functions" (Marder, 2005, p. 298). Here, shock is a "sudden depression of the nervous system or nervous exhaustion produced by violent emotion, accident, surgical operation etc." (Wastell, 2006, p. xvi). Emotionally, one feels lost and nostalgic. In bringing trauma theory to the analysis of literary works, Caruth (1995) has been a pioneering figure who elucidated the tenets of trauma in narratives. Showing the importance of writing history, whether of great people's heroic deeds or of ordinary people's painful experience, she says, "The story of trauma, then, as the narrative of belated experience, far from telling of an escape from reality -the escape from a death, or from its referential force -rather attests to its endless impact on a life" (p. 5). For her, literature as testimony is one of the ways to tell one another about aspects of traumatic experience that cannot be contained by ordinary modes of expression and that may even exceed human understanding. Sharing and unravelling the knots of past traumatic experiences can have a soothing effect, reconciling the warring sides and bringing them to meeting points so that both can live together. In the same way, Pandey (2009), stressing the importance of cultural trauma, points out that its presence in narrative functions as a memory to illuminate the traumatic past and also as a possibility to settle the hostility (p. 132) which had led to the violence triggering the trauma.
Later, Freud (1978) formulated the idea of trauma while working on women suffering from hysteria, which led him to postulate that it was actual seduction that was traumatizing these women (p. 187). Clinicians such as Herman began to confront society with the reality of incest and domestic violence. The Vietnam War was another instance of a traumatizing event. The cognitive behavioral model, as a most influential approach to trauma, was developed to address the residual effects of trauma using techniques (Wastell, 2006, p. xxi) that essentially subsume emotion beneath rational thinking, focusing on getting survivors' thoughts back into perspective through a combination of talking and activity.
The emergence of trauma in literature and literary theory is about two decades old and has been most influential and far-reaching in understanding the pathos of the victims of different events. Marder (2005) argues, "Over the last fifteen years or so, the emergence of groundbreaking new work on trauma in literature and critical theory has made a profound impacts both within and beyond the field of literature" (1). Scholars like Cathy Caruth and Shoshana Felman, with their significant works of the early nineties, have contributed largely and creatively on the borders of trauma, literature and psychoanalysis.
With this, the psychological impact of trauma has been focused on the diagnostic construct of post-traumatic stress disorder (PTSD), in which people who have been subjected to horrific terror and cruelty have a history of exposure to a traumatic event and symptoms from each of three groups: "intrusive recollection of trauma event, avoidance of reminders of the event and emotional numbing, and hyperarousal to determine accountability for suffering and to seek restitution and redress" (Kirmayer et al. 2004, p. 2). On the other hand, bringing out traumatic memory as an important aspect of trauma, Allan Young (1995) has observed that memory is the linchpin which holds together trauma and disorder in the construct of PTSD. Justifying this, he further opines, "The Dynamics of memory and of attributional processes are crucial for the diagnosis of PTSD because the criteria require that the person remembers and attributes his or her symptoms to the traumatic event" (47). In addition to fear and anxiety, the psychological consequences of trauma may lead to disturbances of memory, identity, and perception, which is termed dissociation. That individuals can forget, repress, or dissociate experiences of trauma, only to have them cause distress later in time or re-emerge in the form of symptoms, fantasies or recovered memories, can have a terrible impact on the victims.
In recent years, recognition has increased of the ways in which trauma can exert effects across generations, from parent to child to grandchild. As collective trauma, it occurs both within families and in whole communities. Kirmayer et al. (2004) have even argued that PTSD can be transmitted across generations through secondary traumatization: "Transgenerational transmission of the effects of trauma may include many processes at the level of parent-child interaction within the family" (p. 10). People may simply learn to attribute their own symptoms of anxiety, depression, interpersonal difficulties, and other non-trauma-related psychiatric disorders to their parents' history of traumatic experiences.
Trauma theorists like Herman have classified PTSD into two types: simple PTSD relates to single events, and complex PTSD refers to the response to complex events such as war or chronic abuse (p. 11). Between them, the two cover short-term painful phenomena, i.e. isolated events such as rape or armed robbery, man-made disasters such as the sinking of ferries, and natural disasters such as earthquakes or floods, as well as longer-term, chronic conditions such as war and child abuse.
Unlike psychological trauma, sociological trauma concerns the significance of cultural trauma, of which C. Alexander (2004) says, "Cultural trauma occurs when members of a collectivity feel they have been subjected to a horrendous event that leaves in indelible marks upon their group consciousness, marking their memories for ever and changing their future identity in fundamental and irrevocable ways" (p. 1). It incorporates the domain of both social responsibility and political action. Social groups, national societies and whole civilizations witness human suffering. They would identify "the cause of trauma, and thereby assume such moral responsibility, members of collectivities define their solidarity relationships in ways that, in principle, allow them to share the sufferings of others" (p. 2). It would not be right to underestimate the existence of others' trauma and refuse to accept responsibility for their suffering, which only propagates the tension. Cultural trauma normally pleads for other forms of trauma, which subsequently touch the feelings of common people.
Highlighting the importance of traumatic events as material for writers, Edkins (2003), in "Remembering Relationality Trauma Time and Politics", states that the memory of the organized violence of genocide, slavery, wars, famines, and terrorism has been an important site for political and literary investment. According to her, writers find in traumatic memory a rich source for creating their own world, analyzing not only the commemoration of such events but also the practices of redistribution, recovery and reconciliation for their reawakening (p. 100). Each brings forth important tenets of writing trauma.
In the same way, Pandey (2009), once again stressing the importance of cultural trauma, points out that its presence in narrative functions as a memory to illuminate the traumatic past and also as a possibility to settle the hostility which had led to the violence triggering the trauma. Explaining the condition of cultural traumatic experience, he says, "When someone is hurt, he or she lives through the injury not only physically but also psychologically. Additionally, the society interprets it in a specific cultural context" (p. 125). Trauma perpetuates a mental condition in which the injury is interpreted beyond its outer, physical side as well. According to him, the politics of the language of violence should not be chauvinistic; writers of trauma should employ language which neither reduces the specificity of the experience nor nullifies the possibilities of co-existence. It should rather develop a humanistic, critical consciousness that pacifies immemorial feuds and hatred rather than promoting separation and partition (p. 136). Trauma caused racially or nationally should be cured or healed, not retrieved from the past.
In modern times, the ravages of the First and Second World Wars, Korea and Vietnam, the US war in Iraq, and the like have created traumatic experiences which are fertile ground for creative minds to write about trauma. Pioneering works by researchers, victims and other prolific writers have enabled the development of the most widespread model of trauma, one that reflects the actual experience of survivors by disentangling the emotional elements from cognitive distortions. Although trauma is always framed as exceptional, even where violence is endemic, it has a curious relationship with modernity. Though violence has been a constant feature of human life, the industrial revolution and the accelerated pace of urban life have brought new forms of presenting human emotions and experiences in literary writing. Trauma and its various aspects as seen in Dreams of Mayadevi are contextualized here against the situation in Nepal, where common people were badly affected time and again.
Discussion and Results: Trauma in Dreams of Mayadevi
Mayadevika Sapana (Dreams of Mayadevi), a play by one of the renowned Nepali writers of drama, poetry and criticism, speaks of the traumatic experiences of two principal characters, Mayadevi and Salleri Sahinla, both crippled by the Maoists' insurgency. Both "represent turbulence and calm in the same capacity and condition" (Rijal 2). Around them, the play shows a remote village ravaged by the battle between two forces, the Maoist rebels and the government forces, where almost all the boys have left the village, joined the rebellion, or been disappeared by either force. The playwright describes the condition of the village: "No young lads are in the village. Here are only women. Few old men are only here" (Subedi, 2009, p. 20). The villagers too are traumatized by the insurgency. They never seem sure of when, where and what will happen to them, as both forces have made the village a battlefield and the village folk are panicked.
Mayadevi is the mother of Siddhartha, whom she calls by the loving name Kale; he has been missing for six months and is a suspected Maoist rebel. Her husband had died in a war in India when Siddhartha was born. Because of such traumatic experiences, whenever anything happens to her she feels the same pain she felt when he was born. She narrates the story of his birth.
When Kale was born to me I was very feeble. I was nearly old and had been in pain for two days. Kale's father did not have the luck to see his own son. He died in the war. I too had almost died. Any news either too sad or too joyful makes me feel as if I am going to give birth to child again. It is very difficult to be a mother. All women have the same fate. (p. 30)

The village is not peaceful, and the villagers are used to unexpected things happening to them all the time. Subedi (2009) describes the terrible condition of the village: "Night vision helicopter has been speaking terrible and mysterious language. The familiar jungle looks strange. Dusks arrive with eerie silence. Mayadevi feels as if she is going through the same labor pain again" (p. 25). The depth of the trauma can be seen when Mayadevi dreams of giving birth to Gautama Buddha under a Peepal tree, indirectly expressing a desire "for peace that common people expected collectively. What common people needed or still need are not the warriors but saviors" (p. 23). She still hopes for peace for all the war-affected people, so that no mother would have to live a life like hers.
Unlike Mayadevi, Salleri Sahinla, a retired soldier from the Indian army who does not seem mentally well, is seen wearing a worn-out combat dress and performing a march-past whenever he likes. He is totally disturbed and traumatized. One of his arms is amputated. He has also lost his son and is still looking for him. He is seen telling others about his war experience:

There is none whom I knew in the jungle where I was dead. I did not know whether there was chanp tree or not. I even did not know the river there. I did not know the cowshed. Somebody pushed me in the battlefield. The sound of gun firing was so harsh that our ears would be deaf. A friend next was shot. As I saw, he died. At moonlit nights there would be war as well as talks of love. At such nights I did not do anything except war. It's ok, what else should be said. You go. Maybe something will happen at night. No good thing will happen. Whatsoever should be done, do fast. (p. 11)

He is the only man left in the village and supposes himself a watchman guarding the women and children.
Even when the rebels come to inquire about Salleri Sahinla and try to take him away, he tells them how terrible war is and how it makes life terrible for all:

. . . if you dare to know the consequence of war you are making, look at my body and me! I, a weak old man with so called medals of bravery stored into the trunk, and with an amputated hand, linger like a shadow in the street thinking about two proper meals a day. This is what you all are going to be. That war I fought took place somewhere far in the unknown land. Your war is far terrible. You all have to drink water from the same sprout. With amputated hands, you will live through the same dukha. Muted and with dried eyes, you will look at each other. Not a word would come out of your mouth. Cursed will be your predicament. (23)

Still, Salleri Sahinla hints at one important human value: that the suffering of one person like him can, or should, save other people. He is seen praying and performing ritual acts for the purification of all those who have lost their lives on the battlefield. In doing so he embodies "a view that spaces also become fluid and liminal during traumatic moments. Similar to its denizens who remain schizophrenic and traumatized, spaces too give themselves to multiple spheres on such occasions. Spaces too have mysterious and unconscious state of existence" (Rijal, 2014, p. 5). The pain he has carried for so long should recede, and he wishes everyone well.
Mayadevi's visit to the battlefield to look for her missing son is deeply traumatic. Finding dead and wounded youths writhing in pain, not aspiring for peace but craving more weapons to wage a terrific war against their own brothers and sisters, makes her feel sorry for the pathetic condition of everyone. Then she dreams of herself as the Mayadevi of ancient Kapilvastu, giving birth to Siddhartha. Sensing that her son is near yet still not seeing him, she becomes restless and calls to him:

Kale, my son, how can you go without meeting me? Don't know, what might have stopped him. Don't know, where he is entangled. But I have seen a very auspicious dream. No harm shall happen to you. I am happy that you are alive. Do remember me. Kale, I wish flowers bloomed in every step you take, my darling. (Subedi, 2009, p. 42)

Here, both characters show how painful it is to live in a war-ridden place. Thus, these characters are "not surviving the crisis, they are also preserving the spatial values, which are human and perennial. Brave warriors and crafty politicians are not the only history makers. They are not the only forces, which create and control spaces. There are other forces, which are dear to common people" (Rijal, 2014, p. 6). Along with them, the people are seen terrified and traumatized by the Maoists. The frequent surveillance visits of the government forces affect them just as deeply. They support and betray each other; their fear centres on who is an insider and who is an outsider. The concealment of their fear and Mayadevi's dream both speak to the fear and trauma prevailing in the village. These people live their lives on the edge of a sword.
Conclusion: Remembering for Exposure
In short, the trauma caused by organized and prolonged armed conflict and its aftermath has left indelible psychological and social scars on the individuals who have gone through the experience. For the last three decades, Nepal has witnessed suffering and life-threatening experiences that caused individuals to go through traumatic reactions, termed 'post-traumatic stress disorder' and 'collective trauma'. In such a long span of time many changes have taken place, but the scars the victims carry are still fresh. Mayadevi and Salleri Sahinla are representative characters who have witnessed this conflict, bear its traumatic pain, and display it time and again. War, whether inside the country or outside, always creates havoc for the people who have lost their kith and kin. Those who fight in the war either kill others or get killed, but their relatives and dependents have to go through a very painful situation, which is what this play depicts. Subedi, as a creative writer of performance art, has succeeded in weaving the plot around an old mother who sees her son in every young boy.
Chitosan/Gold Nanoparticles Nanocomposite Film for Bisphenol A Electrochemical Sensing
Bisphenol A (BPA) is considered an endocrine-disrupting compound and can cause toxicological effects, even at low doses. The development of sensitive and reliable sensors that would allow the detection of such a contaminant is highly pursued. Herein, we report an electrochemical sensing strategy based on a simple and low-cost nanocomposite film sensor platform for BPA detection. The platform was developed by modifying a fluorine-doped tin oxide (FTO) electrode with layer-by-layer (LbL) films of chitosan (Chi) and gold nanoparticles functionalized with a polythiophene derivative (AuNPs:PTS). The growth of the Chi/AuNPs:PTS LbL films was monitored by UV–Vis spectroscopy. Electrochemical characterization revealed that the three-bilayer film exhibited the highest electrocatalytic performance, and differential-pulse voltammetry (DPV) measurements demonstrated that the modified electrode was suitable for BPA detection through a quasi-reversible and adsorption-controlled electrochemical oxidation and reduction process. The developed sensor exhibited a linear response range from 0.4 to 20 µmol L−1, with a detection limit of 0.32 µmol L−1. The sensor showed good reproducibility, with relative standard deviations of 2.12% and 3.73% for intra- and inter-electrode measurements, respectively. Furthermore, the platform demonstrated to be suitable to detect BPA in real water samples, as well as selective for BPA detection in solutions with 100-fold excess of common interfering compounds.
Introduction
Bisphenol A (BPA; 2,2-bis(4-hydroxyphenyl)propane) is an endocrine-disrupting compound widely employed as an intermediate in the production of polycarbonate plastics and epoxy resins, which are widely used for food packaging [1]. Several studies have reported that BPA is capable of disrupting many metabolic pathways, even at low (trace) concentrations, leading to substantial damage to tissues and organs [2][3][4]. In this regard, efforts have been made to spread awareness of products containing BPA. For instance, plastics labelled with recycling codes 3 or 7 are more likely to contain BPA [5]. Moreover, its widespread use has led to environmental pollution. In view of its increasing harmful impact on the environment and human health, there is an urgent need to develop more convenient and reliable methods to analyze BPA in various matrices. In this regard, different approaches have been reported and explored to detect and quantify BPA [6][7][8][9]. Among them, electrochemical techniques stand out due to their merits of simple operation, short analysis time, low cost and high sensitivity [10][11][12].
In recent years, the modification of working electrodes with nanomaterials has become an efficient approach to enhance the surface area of an electrode for improved electrochemical-sensing performance [13][14][15]. Moreover, the use of conductive nanomaterials, such as metallic nanoparticles, conducting polymers and carbon-based nanomaterials, can also enhance electron transfer during the catalytic reaction [14]. Therefore, the choice of nanomaterial and its deposition technique are key in determining the sensing performance. In this regard, layer-by-layer (LbL) assembly appears as a simple, low-cost and versatile technique for the design of sensors using self-assembled functional nanomaterials with their composition and structure controlled at the nanoscale [16].
With this in mind, herein we report the design, characterization and application of a sensor platform based on LbL films for the selective electrochemical detection of BPA. Multilayer films were fabricated using chitosan (Chi) as the polycationic material and gold nanoparticles (AuNPs) stabilized with the polythiophene sulfonate poly[2-(3-thienyl)ethyloxy-4-butylsulfonate] sodium salt (AuNPs:PTS) as the polyanionic material. Chi and AuNPs:PTS were combined to build the nanostructured electrode due to their film-forming ability and good conductivity, respectively. The linear range and detection limit of the FTO electrode modified with Chi/AuNPs:PTS LbL films were characterized by the differential-pulse voltammetry (DPV) technique. This sensor was further used to detect BPA in a real water sample.
A BPA stock solution (1.0 mmol L −1 ) was prepared in ethanol (Synth Chemical, São Paulo, Brazil). A 0.01 mol L −1 phosphate buffer (PBS) solution with pH 7.4 was used to dilute the solutions.
Synthesis of AuNPs:PTS Nanoparticles
Gold nanoparticles stabilized with the polythiophene sulfonate poly[2-(3-thienyl)ethyloxy-4-butylsulfonate] sodium salt (AuNPs:PTS) were synthesized according to previous work [17]. Briefly, the AuNPs were prepared by reducing a gold(III) chloride (HAuCl4) solution with aqueous sodium citrate in the presence of PTS; both PTS and citrate help to stabilize the AuNPs. All details about the synthesis and characterization of AuNPs:PTS can be found in reference [17].
Layer-by-Layer (LbL) Films' Fabrication and Characterization
A solution of 0.5 mg mL−1 of Chi was prepared by dissolving the polymer in aqueous acetic acid solution (1%). The AuNPs:PTS solution was used as prepared. The Chi/AuNPs:PTS LbL films were deposited on conventional glass and FTO glass substrates (1.5 cm × 0.5 cm) by alternating immersion into the cationic (Chi) and anionic (AuNPs:PTS) solutions for 10 min and 1 h, respectively. The adsorption time used for both materials was chosen according to previous results [17,18]. The functional area of the designed electrode (the area functionalized with LbL films) was limited to 0.25 cm2. UV–Vis absorption spectroscopy (Jasco V-730) was used to monitor the growth of the films, and cyclic voltammetry measurements were used to characterize their electron transfer properties.
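As a small illustration of the film-growth monitoring mentioned above, the sketch below fits the absorbance of the AuNP plasmon band (reported later at 537 nm) against the number of deposited bilayers and checks for linearity. The absorbance values are hypothetical placeholders for the Figure 1a data.

```python
# Sketch of the linear-growth check for the LbL films (hypothetical absorbance readings).
import numpy as np
from scipy import stats

bilayers = np.array([1, 2, 3, 4, 5])
abs_537  = np.array([0.021, 0.043, 0.066, 0.086, 0.109])   # hypothetical values at 537 nm

fit = stats.linregress(bilayers, abs_537)
print(f"slope = {fit.slope:.3f} abs/bilayer, R^2 = {fit.rvalue**2:.3f}")
# An R^2 close to 1 supports the claim that a similar amount of Chi/AuNPs:PTS
# is adsorbed in every deposition cycle.
```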
Electrochemical Measurements
All electrochemical measurements were carried out using a PGSTAT204 Autolab (Metrohm) electrochemical system, controlled by NOVA software (version 2.1.4, Metrohm Autolab BV, Utrecht, Netherlands). The FTO-Chi/AuNPs:PTS electrodes were used as working electrodes. The counter and reference electrodes were a Pt foil and Ag/AgCl (3 mol L−1 KCl), respectively. The experiments were carried out by cyclic voltammetry (CV) and DPV using PBS solution (0.01 mol L−1, pH 7.4) at room temperature. For the electrochemical characterization of the LbL films, the CV analysis was performed using a solution of 0.05 mol L−1 [Fe(CN)6]3−/4− with 0.1 mol L−1 KCl, varying the potential from −0.5 to 1.0 V at 0.1 V s−1. For the oxidation mechanism study, CVs were performed using 1 µmol L−1 of BPA in buffer solution (pH 7.4), varying the scan rate from 5 to 300 mV s−1 over a potential range from −0.3 to 0.8 V. The DPV analysis was carried out in the potential range from 0.0 to 0.7 V using a pulse amplitude of 25 mV and a step potential of 5 mV at a scan rate of 10 mV s−1.
Choice of Materials
It has been proven that the choice of materials used to design sensing platforms directly influences the sensing performance. Chi, a cationic polysaccharide with abundant amine groups, has attractive properties including excellent film-forming ability and adhesion. The AuNPs:PTS, at a ratio of 1:1, was chosen based on its better charge transfer ability (compared with the other synthesized proportions) shown by impedance measurements in our previous work [17]. PTS acts as a stabilizing agent for the AuNPs and helps the LbL film formation, since its negatively charged sulfonate groups can interact electrostatically with the positively charged amine groups of chitosan. Therefore, it is expected that this film architecture adheres well to the working electrode and promotes efficient electron transfer at the electrode/solution interface, allowing BPA detection as shown in Scheme 1.
Scheme 1. Schematic representation of the electrochemical detection of BPA at the Chi/AuNPs:PTS electrode interface.
Characterization of Chi/AuNPs:PTS LbL Films
The sequential growth of the Chi/AuNPs:PTS LbL films was monitored by UV–Vis absorption spectroscopy, as shown in Figure 1a. The spectra show an absorption band centered at 537 nm assigned to the AuNPs surface plasmon resonance [17,19]. As chitosan does not exhibit any UV–Vis absorption, the band related to the nanoparticles was used to monitor the film formation. A linear growth was observed for films from one to five bilayers, which suggests that the same amount of material (Chi/AuNPs:PTS) was adsorbed on the substrate in each bilayer. Figure 1b shows the cyclic voltammograms performed in [Fe(CN)6]3−/4− solution for all films, with a quasi-reversible one-electron redox behavior (Ipa/Ipc ≈ 1) [20]. All modified electrodes showed an increased peak current compared to the bare electrode, suggesting a good electron-mediating property of the LbL films. The highest increase in the anodic (Ipa) and cathodic (Ipc) peak currents was observed for the (Chi/AuNPs:PTS)3 film: compared to bare FTO, Ipa increased from 2352 µA to 2889 µA and Ipc from 2206 µA to 2613 µA. The inset in Figure 1b highlights this increment and confirms the faster electron transfer kinetics promoted by the electrode modified with three bilayers. Based on this, the (Chi/AuNPs:PTS)3 electrode was chosen as the platform for BPA detection.

The role of the scan rate on the CV curves was investigated in order to provide insights into the electrochemical behavior of BPA on the designed electrode surface. Figure 2a shows the response for scan rates from 20 mV s−1 to 300 mV s−1 recorded in 3 µmol L−1 of BPA. The redox peaks were centered at 0.49 and 0.35 V and are related to the BPA oxidation (anodic peak) and reduction (cathodic peak) processes, respectively [21]. As can be noticed in Figure 2a, the anodic and cathodic peak currents increased with the increase in the scan rate from 5 to 300 mV s−1. Moreover, Figure 2b displays that both Ipa and Ipc increased linearly with the scan rate. Such a linear increase indicates that the electrochemical reaction of BPA at the FTO-(Chi/AuNPs:PTS)3 interface is an adsorption-controlled process [19,22].
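A simple way to support the adsorption-controlled interpretation is to compare how well the peak current correlates with the scan rate and with its square root, as sketched below. The peak currents are hypothetical stand-ins for the Figure 2b data.

```python
# Adsorption- vs diffusion-controlled check: Ip ~ v (adsorption) vs Ip ~ sqrt(v) (diffusion).
import numpy as np
from scipy import stats

v   = np.array([5, 20, 50, 100, 150, 200, 250, 300], dtype=float)   # scan rate, mV/s
ipa = np.array([0.8, 3.1, 7.4, 15.2, 22.6, 30.1, 37.8, 45.0])       # uA, hypothetical peak currents

r2_linear = stats.linregress(v, ipa).rvalue ** 2
r2_sqrt   = stats.linregress(np.sqrt(v), ipa).rvalue ** 2
print(f"Ip vs v:     R^2 = {r2_linear:.4f}")
print(f"Ip vs v^0.5: R^2 = {r2_sqrt:.4f}")
# r2_linear > r2_sqrt points to the adsorption-controlled behaviour reported above.
```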
The oxidation process of BPA at the as-prepared electrode surface shows a quasi-reversible behavior, as indicated by the separation between the cathodic and anodic peak potentials [20,22]. According to these results, the proposed mechanism for BPA electrooxidation at the interface of the (Chi/AuNPs:PTS)3 electrode consists of the transfer of two electrons and two protons [23], as depicted in Scheme 2.

DPV was employed to verify the efficiency of the (Chi/AuNPs:PTS)3 electrode in detecting BPA. DPV is an excellent method for sensing, with advantages such as high sensitivity and low LOD, being extremely useful for measuring trace levels of analytes [24]. As shown in Figure 3, the oxidation peak current increased with the BPA concentration, yielding a linear response range from 0.4 to 20 µmol L−1 and a detection limit of 0.32 µmol L−1.

The selective response of the (Chi/AuNPs:PTS)3 electrode to BPA was verified by carrying out interference studies. The performance was evaluated in the presence of other phenolic compounds. The results showed that a 100-fold excess of 4-nitrophenol, catechol, phenol and hydroquinone caused approximately 15%, 0.2%, 7% and 27% variation, respectively, in the response peak current, as displayed in Figure 4. Except for hydroquinone, the low values found for 4-nitrophenol, catechol and phenol indicate no significant interference, showing that these chemical species did not affect the determination of BPA using the (Chi/AuNPs:PTS)3 electrode.

The electrode repeatability and reproducibility were also investigated by performing repeated voltammetry experiments with solutions containing 1 µmol L−1 of BPA. Ten successive measurements on a single modified electrode yielded a reproducible current with a relative standard deviation (RSD) of 2.12%. Additionally, the DPV responses of three similarly prepared electrodes were evaluated and the obtained RSD value was 3.73%, confirming the robustness of the modification procedure.
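For reference, the sketch below shows one way the analytical figures of merit quoted above can be derived from DPV data: a calibration line over the stated 0.4 to 20 µmol L−1 range, a detection limit estimated as 3.3 times the residual standard deviation divided by the slope (one common convention; the authors' exact criterion is not stated here), and the RSD of repeated measurements. All numerical values are hypothetical.

```python
# Sketch of calibration, LOD and repeatability calculations (hypothetical data).
import numpy as np
from scipy import stats

conc = np.array([0.4, 1, 2, 5, 10, 15, 20])                 # umol/L (stated linear range)
ip   = np.array([0.9, 2.1, 4.2, 10.3, 20.8, 31.1, 41.5])    # uA, hypothetical DPV peak currents

cal = stats.linregress(conc, ip)
residual_sd = np.std(ip - (cal.intercept + cal.slope * conc), ddof=2)
lod = 3.3 * residual_sd / cal.slope                          # one common LOD convention
print(f"sensitivity = {cal.slope:.2f} uA L/umol, LOD ~ {lod:.2f} umol/L")

# Repeatability: relative standard deviation of repeated measurements at 1 umol/L
repeats = np.array([2.10, 2.08, 2.13, 2.05, 2.11, 2.09, 2.12, 2.07, 2.14, 2.10])  # uA, hypothetical
rsd = 100 * repeats.std(ddof=1) / repeats.mean()
print(f"RSD = {rsd:.2f} %")
```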
In order to evaluate the performance of the (Chi/AuNPs:PTS)3 electrode in practical analytical applications, the determination of BPA in real tap water samples was carried out using a recovery procedure. The water samples, without pretreatment, were spiked with 5 and 10 µmol L−1 of BPA, and the recoveries were found to be 92% and 102%, respectively, indicating that the sensor is suitable for practical application to tap water.
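The spike-recovery calculation for the tap water test reduces to the ratio of the found to the added concentration; a minimal sketch is given below, with hypothetical found values chosen to be consistent with the reported 92% and 102% recoveries.

```python
# Spike-recovery calculation for the tap-water samples (hypothetical "found" values).
spiked = {5.0: 4.6, 10.0: 10.2}          # umol/L added -> umol/L found
for added, found in spiked.items():
    print(f"spike {added:4.1f} umol/L -> recovery {100 * found / added:5.1f} %")
```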
Compared with other electrochemical BPA sensors (Table 1), the proposed sensor presented a broad linear range and a relatively low LOD. Its competitive performance compared to other sensing platforms may be ascribed to the synergistic effect between the AuNPs and chitosan. Moreover, it could be stated that our strategy for developing an electrochemical sensor for BPA detection has properties of low cost, ease of preparation, robustness, and could be used, for instance, for the fabrication of flexible and disposable electrodes, allowing its practical use.
Conclusions
We developed a simple, cheap and suitable platform for BPA electrochemical sensing using an FTO electrode modified with LbL film architectures of chitosan and AuNPs:PTS. CV measurements revealed that the modified electrode exhibited a quasi-reversible behavior towards BPA. The oxidation and reduction, occurring via a two-proton/two-electron process, were found to be adsorption-controlled. The as-prepared electrode displayed a good LOD of 0.32 µmol L−1 and good reproducibility. Moreover, no significant interference was found for 100-fold excesses of 4-nitrophenol, catechol and phenol in the determination of the target analyte. The recovery test showed that the designed electrode can be successfully applied to the detection of BPA in a real water sample. Furthermore, the electrode modification procedure reported here is simple, low-cost and not time-consuming, and results in a robust sensor, thereby offering potential for the preparation of commercial modified electrodes.
Footprint caused by a vehicle configuration on the dynamic amplification of the bridge response
The passage of a vehicle over a bridge leaves a unique footprint in the form of measured strains (or displacements) across the structure. This paper proposes a new level I damage detection method for short-span bridges using footprints of Dynamic Amplification Factor (DAF) versus vehicle speed. The total response of a bridge to a moving load is time-varying, and it can be assumed to be made of two components: 'static' and 'dynamic'. Here, DAF is defined as the ratio of the maximum total response to the maximum 'static' component. For a given bridge, DAF patterns will vary with vehicle configuration. However, for a given vehicle configuration (or a number of them), the mean DAF pattern measured on the bridge will remain unaltered unless the condition of the bridge changes. The latter is the subject of investigation in this paper. In order to test the feasibility of using these patterns for monitoring purposes, damage is simulated within a bridge model as stiffness losses of 10% and 30% at mid-span. Changes in stiffness are identified by differences between DAF patterns corresponding to the healthy and damaged bridges. The results show the approach to be more sensitive to damage than a traditional level I damage detection method based on variation of natural frequencies.
Introduction
Bridge damage detection techniques have gained increasing consideration from the engineering communities to prevent sudden structural failure leading to catastrophic, economic, and human life losses. Local damage detection techniques, such as ultrasonic methods and X-ray methods, entail that there is damage and that the vicinity of damage is known a priori and readily accessible for testing. Therefore, vibration-based damage detection methods are typically used first to establish if there has been damage (level I technique), the location (level II), its quantification (level III) and finally the remaining life of the structure (level IV) [1]. The essential idea behind vibration-based damage identification is that damage induces changes in the physical properties (mass, damping, and stiffness) that will be noticeable in modal properties (i.e., natural frequencies, damping and mode shapes). Clearly, a drop in stiffness in a narrow localised portion of the bridge will go unnoticed to the 'static' component of a strain measurement (i.e., in a static loading test), unless the transducer would be located at the damaged location [2]. However, the total strain due to a moving load will be affected by a localised loss in stiffness, as a result of the changes in the 'dynamic' component which is related to the modal properties of the bridge.
As a vehicle traverses a bridge, strain at a given section varies with the vehicle position. If the same vehicle crossed the bridge along the same transverse path with the same speed a second time, the bridge strain would not vary except for a possible alteration in the initial conditions. If the vehicle 'static' configuration (vehicle 'static' configuration refers here to number of axles, their weights and spacing) or speed changes, then the response of the bridge will also change. Similarly, the measured strain due to a given vehicle 'static' configuration will change with an alteration of the structural bridge parameters (boundary conditions, stiffness distribution). Although 'dynamic' mechanical properties of the vehicle (i.e., stiffness and damping of tires and suspension, rotationary moments of inertia associated to the sprung masses, etc…) will also affect the bridge response, these are not considered here, as it is impracticable to assume that these properties could be identified on the field. However, 'static' mechanical properties and speed of a vehicle can be readily captured via a Weigh-In-Motion pavement-based system near the bridge or a Bridge Weigh-In-Motion installed on the bridge. Therefore, it is assumed that the impact of a vehicle fleet with a given 'static' configuration on a bridge can be characterized by a mean response, except for deviations due to vehicular 'dynamic' parameters. The road profile will also have a significant impact on the vehicle forces. While rough profiles will typically lead to a high variability in the bridge response, smooth profiles will produce more consistent (less variable) bridge responses. In fact, the response of a bridge to a sprung vehicle model running on a smooth (flat) profile is practically identical to that of a simpler model made of constant forces equal to the static axle weights of the sprung model. Therefore, this paper analyses the Dynamic Amplification Factor (DAF) patterns resulting from different vehicle 'static' configurations and how the "vehicle configuration -DAF" footprint on the bridge response can be used to monitor not only the passing traffic, but also the structural health of the bridge.
The impact of vehicle speed and 'static' configuration, namely the number of axles, axle weights and their spacings, on the bridge response is now well understood in terms of their incremental static load effect, as shown by the literature. For example, vehicle speed is considered a factor of supreme importance affecting bridge dynamics [3][4][5][6][7]. Generally speaking, the bridge dynamic response tends to: (i) increase with an increase in vehicle speed; and (ii) decrease as the vehicle weight increases [8][9][10][11], although [12] and [13] find that both static and dynamic components increase as the mass of the vehicle increases. The influence of the number of axles on the bridge response to a moving vehicle is addressed by [14] (OMT 1991) and [15] (CSA 2006); however, their conclusions relate to particular scenarios and need to be interpreted with caution.
In order to test the use of changes in DAF patterns as a monitoring tool, a bridge is modelled as a simply supported, finite-element-discretised beam and vehicles are modelled as series of moving constant loads. This relatively simple analysis ignores the interaction between vehicle and bridge, but it is used to capture and understand the underlying pattern (i.e., a contour plot of measured load effect versus time and section location) associated with different 'static' vehicle configurations that could be obscured by more complex models. This 'healthy' pattern is characterized by matching effects between the natural period of the bridge, the vehicle speed and the vehicle configuration. Then, DAF is employed to detect changes in structural stiffness. In this process, the identification of the correct vehicle 'static' configuration, i.e., via a Weigh-In-Motion system installed on the site, is key to the successful application of the method.
Simulation models
2.1 Bridge model
The bridge model consists of 150 discretised 1D beam elements (each element being 0.1 m long with two degrees of freedom at each end node). The beam section is assumed to have an inertia of 0.5273 m⁴ and a mass per unit length of 28,125 kg/m. The beam material has a modulus of elasticity of 35 GPa. The bridge in its healthy form is assumed to have constant properties throughout the bridge length. The values adopted for these properties are typical of a simply supported short-span solid slab deck, and they lead to main natural frequencies of 5.65 and 22.62 Hz. Damping ratio is assumed to be 0.03 for all modes. A model consisting of a series of constant point loads is used to represent a single vehicle crossing a simply supported bridge at uniform speed. This simulation model is obviously different from reality in many aspects, as it does not take into account vehicle dynamics, road roughness and the interaction between vehicle and bridge. However, it does provide some insight into the underlying patterns in more complex simulation models. Three types of vehicle configurations based on the Eurocode 1 part 2 [16] are investigated (Table 1).
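As a quick consistency check of these properties, the closed-form natural frequencies of a simply supported Euler-Bernoulli beam can be recomputed; the short Python sketch below is illustrative only (the 15 m span follows from the 150 elements of 0.1 m) and reproduces the quoted 5.65 Hz and 22.62 Hz.

```python
import math

# Beam properties as stated in the text (SI units)
L = 15.0      # span (m): 150 elements x 0.1 m
E = 35e9      # modulus of elasticity (Pa)
I = 0.5273    # inertia of the section (m^4)
mu = 28125.0  # mass per unit length (kg/m)

# Natural frequencies of a simply supported Euler-Bernoulli beam:
# f_n = (n^2 * pi / (2 * L^2)) * sqrt(E * I / mu)
for n in (1, 2):
    f_n = (n**2 * math.pi / (2 * L**2)) * math.sqrt(E * I / mu)
    print(f"mode {n}: {f_n:.2f} Hz")   # ~5.65 Hz and ~22.62 Hz
```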
Calculation of the Bridge Response
The response of the discretized beam model to a series of moving time varying forces is given by the system of equations in Equation (1).
M ü(t) + C u̇(t) + K u(t) = N f(t)    (1)

where M, C and K are the mass, damping and stiffness matrices, respectively, of the beam model, and ü, u̇ and u are the vectors of nodal bridge accelerations, velocities and displacements, respectively. f(t) is the vector of time-varying external forces and N is an (n × nf) matrix distributing the nf forces to the n nodal degrees of freedom of the finite element beam. Rayleigh damping is used here to model the damping and is given by C = αM + βK, where α and β are constants. These constants are obtained from α = 2ζω₁ω₂/(ω₁ + ω₂) and β = 2ζ/(ω₁ + ω₂), where ω₁ and ω₂ are the first two natural frequencies of the bridge in rad/s. The damping ratio ζ is assumed to be 0.03, the same for all modes [17].
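For illustration, the Rayleigh constants implied by these expressions can be evaluated directly from the two natural frequencies quoted in Section 2.1; the following is a minimal sketch, not the authors' code.

```python
import math

zeta = 0.03                                  # damping ratio, same for all modes
f1, f2 = 5.65, 22.62                         # first two natural frequencies (Hz)
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2  # convert to rad/s

alpha = 2 * zeta * w1 * w2 / (w1 + w2)  # mass-proportional constant
beta = 2 * zeta / (w1 + w2)             # stiffness-proportional constant

# The damping matrix would then be assembled as C = alpha * M + beta * K
print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.3e} s")
```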
Equation (1) is implemented in MATLAB and solved using the Wilson-Theta integration scheme [18] with a time interval of 0.002 s. The optimal value of the parameter θ = 1.420815 is used for unconditional stability in the integration scheme. Initial conditions are assumed to be zero for all displacements, velocities and accelerations.
Definition and Testing of the Proposed Level 1 Damage Detection Algorithm
DAF has been used by many researchers to assess the dynamic bridge response due to the crossing of vehicles [19][20][21][22]. In this paper, DAF is applied to damage detection. For this purpose, nine sensors are assumed to be uniformly spaced along the beam described in Section 2 and DAF is calculated for each sensor location using simulations of the response to a passing vehicle. The first and last sensors are located at 1.5 m and 14.5 m respectively from the first support and the sensors in between are spaced 1.5 m apart. Here, DAF refers to the ratio of maximum total strain ('static' + 'dynamic') to maximum 'static' strain for a given sensor location. Strain is calculated at the bottom fibre at these nine locations under the assumption that the neutral axis is located 0.375 m from the measurement point.
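A minimal sketch of how this DAF pattern could be assembled from simulated strain histories is given below; the `simulate` callable is a hypothetical placeholder for the moving-load beam model of Section 2 and is not part of the original study.

```python
import numpy as np

def daf(total_strain, static_strain):
    # DAF = max total ('static' + 'dynamic') strain / max 'static' strain
    return np.max(total_strain) / np.max(static_strain)

def daf_surface(speeds, n_sensors, simulate):
    """Build the DAF pattern: rows = vehicle speeds, columns = sensor locations.
    `simulate(speed)` is assumed to return (total, static) strain histories,
    each of shape (n_sensors, n_time_steps), for one vehicle crossing."""
    surf = np.zeros((len(speeds), n_sensors))
    for i, v in enumerate(speeds):
        total, static = simulate(v)
        for j in range(n_sensors):
            surf[i, j] = daf(total[j], static[j])
    return surf
```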
The damage scenario is represented as a localised reduction in stiffness at 10 elements at and around the mid-span section (covering a full length of 1 m). Two different damage levels are simulated, i.e., stiffness reductions of 10% and 30%. The first natural frequencies of the bridge for each of the three states and the frequency ratios of the damaged states to the healthy one are shown in Table 2 for reference purposes. It can be noticed that the difference in first natural frequency amongst the different states is relatively small.
Table 2. Main frequencies of the three bridge states under investigation
The damage detection algorithm relies on a prior knowledge of the DAF pattern for the 'healthy' structure (or structure at a time T), and by comparison to this DAF at time T, it establishes if the structural conditions at time "T + ∆T" have changed. Hence, the first step of the algorithm is the calculation of DAF for the 'healthy' bridge due to the three vehicle configurations under investigation.
DAF pattern at a point in time T
For the calculations in this section, the bridge at time T is assumed to be in a 'healthy' state (i.e., as defined in Section 2.1). Figure 1 shows a 3D plot of DAF versus velocity (km/h) due to the crossing of the 2-axle truck defined in Table 1. Velocity increments of 1.08 km/h are employed in the generation of this graph.

Figure 3 shows the 3D plot of DAF versus velocity (km/h) for a 2-axle vehicle passing over the bridge with a 30% stiffness loss localised at mid-span. The total volume of this plot is calculated using the same method defined in Section 3.

Figure 7. Resultant DAF using a 5-axle truck: (a) 30% stiffness loss, (b) 10% stiffness loss

As expected, changes in DAF for a 10% stiffness loss at mid-span are not as clear as for a 30% stiffness loss. Table 3 shows the values of the DAF damage indexes. The resultant DAF damage indexes are obtained by subtracting the volume of the 'healthy' bridge plot from that of the damaged bridge for each of the vehicle configurations (the volumes have been calculated using Equation (3)). From Table 3, it can be noticed that the bridge under the impact of a 2-axle truck is more sensitive to the mid-span damage than under the other two vehicles (3-axle and 5-axle trucks). In addition, the results show that the DAF damage index is comparatively more sensitive to changes in stiffness than the changes in frequency shown in Table 2.
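Equation (3) itself is not reproduced in this excerpt; assuming it amounts to the volume under the DAF surface (DAF against speed and sensor position), a hedged numerical sketch of the damage index computation could look as follows.

```python
import numpy as np

def daf_volume(daf_surf, speeds, sensor_positions):
    """Approximate the volume under the DAF surface (speeds x sensor positions)
    by trapezoidal integration; a plausible stand-in for Equation (3), which is
    not shown in this excerpt."""
    return np.trapz(np.trapz(daf_surf, sensor_positions, axis=1), speeds)

def daf_damage_index(daf_damaged, daf_healthy, speeds, sensor_positions):
    # Index = volume(damaged) - volume('healthy'); a non-zero index flags a
    # change in structural condition between time T and time T + dT.
    return (daf_volume(daf_damaged, speeds, sensor_positions)
            - daf_volume(daf_healthy, speeds, sensor_positions))
```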
Conclusions
This paper has proposed a new level I damage detection method based on differences in DAF at different points in time. The method assumes that the vehicle speed, number of axles, their spacing and weight distribution can be obtained on-site, i.e., via a weigh-in-motion system. It has been theoretically tested with the simulated response of a 15 m simply supported bridge to series of moving constant loads resembling three types of vehicle configuration (2-axle, 3-axle and 5-axle trucks). The responses have been obtained at nine equally spaced locations, which allowed a full picture of the DAF patterns to be gathered throughout the length. Damage has been modelled as a localised reduction in stiffness of 10% and 30% at the bridge mid-span. The DAF damage index that results from subtracting the DAF of a 'healthy' bridge from that of the damaged ones has been shown to be potentially able to detect damage. The resultant 3D plots have been shown to be more sensitive to the crossing of the 2-axle truck than to the other vehicle configurations. Despite the natural frequency of the damaged bridge having changed very slightly for the 10% stiffness loss, the new method has still been able to successfully detect changes in the structural response. However, a number of issues still need to be addressed in the future, including the influence of variations in vehicle dynamics and road profile on the pattern, and inaccuracies in obtaining the static response via filtering.
Economic evaluation of interventions designed to reduce Clostridium difficile infection
Introduction Healthcare decision-makers are increasingly expected to balance increasing demand for health services with a finite budget. The role of economic evaluation in healthcare is increasing and this research provides decision-makers with new information about the management of Clostridium difficile infection, from an economic perspective. Methods A model-based economic evaluation was undertaken to identify the most cost-effective healthcare intervention relating to the reduction of Clostridium difficile transmission. Efficacy evidence was synthesised from the literature and was used to inform the effectiveness of both bundled approaches and stand-alone interventions, where appropriate intervention combinations were coupled together. Changes in health outcomes were estimated by combining information about intervention effectiveness and its subsequent impact on quality of life. Results A bundled approach of improving hand hygiene and environmental cleaning produces the best combination of increased health benefits and cost-savings. It has the highest mean net monetary benefit when compared to all other interventions. This intervention remains the optimal decision under different clinical circumstances, such as when mortality rate and patient length of stay are increased. Bundled interventions offered the best opportunity for health improvements. Conclusion These findings provide healthcare decision-makers with novel information about the allocation of scarce resources relating to Clostridium difficile. If investments are not made in interventions that clearly yield gains in health outcomes, the allocation and use of scarce healthcare resources is inappropriate and improvements in health outcomes will be forgone.
Introduction
Clostridium difficile (C. difficile) is the most common cause of hospital-acquired infectious diarrhoea and represents a significant challenge for health services [1]. It is primarily transmitted via the fecal-oral pathway, through ingestion of bacterial spores [2]. The majority of cases follow the use of antibiotics, which disturb the resident gut microflora [2]. Other risk factors include advanced age, frequent admission to hospital and comorbidity [3]. Infection can cause a range of clinical symptoms, including severe diarrhoea, pseudomembranous colitis, and in extreme cases, death [4]. It is reported to be responsible for 15-25% of antibiotic-associated diarrhoea [2;5]. Following the spread of hypervirulent strains, most notably ribotype 027, an increase in severe cases has been documented in studies from the UK, Europe and North America [6]. Recurrent infection, where patients are treated for their initial infection then suffer either a relapse or reinfection, is a common feature of the disease [7]. Recurrence rates have been reported to be 25% of all cases (range 6-42%), increasing patient vulnerability and debilitation while incurring economic costs through increased patient length of stay and hospitalisation [8]. Information on the attributable cost of C. difficile infection (CDI) varies, with estimates of primary infection costs ranging from USD $3,400-$16,300 per case [9] and recurrent infection between USD $13,700-$18,000 [10]. CDI-attributable mortality has also increased, with pre-2000 estimates of 1.5% mortality increasing to between 4.5% and 5.7% in recent endemic periods [9]. There are standardised guidelines for controlling CDI [11]; however, they are strategy-level, not particularly prescriptive and tend to be reactive. There are many infection control intervention options available to reduce transmission and most hospitals employ a mixed approach that combines antimicrobial stewardship, hand hygiene, environmental cleaning and fecal bacteriotherapy. There is a paucity of economic evidence to support existing practice, with the majority of guidelines built solely on clinical evidence, with no consideration of the costs and health returns from alternative strategies of infection control. Scarce resources for infection prevention should be used first in those programmes that deliver the greatest health returns per dollar invested [12][13]. The aim of this paper is to provide useful evidence for healthcare decision-makers about the cost-effectiveness of competing investments for the prevention and control of CDI.
Model structure
A decision analytic model was developed to evaluate the cost-effectiveness of interventions that reduce CDI [14]. Given the chronic and repetitive nature of the disease, the model needed to be state-based and able to include recursive events. A Markov model was developed based on the natural history of the disease and data from Australian hospitals. The model depicts the movement of adult patients through numerous health states over time, with each health state having a cost and health utility weight attached to time spent in that state. This structure provides the framework for the evaluation and is used to estimate the costs and health outcomes associated with each intervention. The model (Fig 1) includes two absorbing states-'death' and 'censored'. Patients are censored when their timeline reaches the limit of the available data, which was collected over an 18-month period. For example, we used observed data on those who were discharged from hospital in a vulnerable state and did not return to hospital during the length of available follow-up time, but who may have returned if we had a longer follow-up time. Death can occur from any health state, except censored. This model only applies to healthcare-associated infection and does not include infections that have been imported to the hospital from the community. As such, all patients begin in the health state 'at-risk' and represent newly admitted patients. Patients can either remain in the 'at-risk' state or transition into one of three states: 'non-severe infection', 'severe infection' or 'discharged healthy'. It is assumed that there is no possibility of movement between the two infective states. Patients who suffer an infection can only leave the infective states into either 'discharged vulnerable 1' or 'death'. All patients who have suffered infection are classified as 'vulnerable' due to the damage done to the gut's microbiota and their exposure to antibiotics during treatment. Patients in the state 'discharged vulnerable 1' can remain in this state, experience a recurrence and move to 'recurrent infection', or enter either absorbing state. Due to a lack of data clarity, recurrent patients could not be categorised according to disease severity, so we assumed that all recurrent infections were severe. Patients who have a recurrent infection are transitioned from their infective category to a vulnerable state-'discharged vulnerable 2'-or one of the absorbing states. Patients who are 'discharged vulnerable 2' but who avoid another infection recurrence or death will eventually transition to 'censored'. The model structure emerged from consultations with infection control nurses and infectious disease physicians to ensure it resembles reality.
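To make the cohort mechanics concrete, a minimal sketch of such a Markov cohort simulation is given below. It is illustrative only: the state list follows the description above, but the transition matrix, per-cycle costs and utility weights are placeholders that would have to be populated from Table 2, and this is not the authors' implementation.

```python
import numpy as np

# Health states of the Markov model described above
STATES = ["at_risk", "non_severe", "severe", "discharged_healthy",
          "vulnerable_1", "recurrent", "vulnerable_2", "death", "censored"]

def run_cohort(P, cost, utility, n_cycles, cycle_years, discount=0.05):
    """Minimal cohort simulation. P is a (9 x 9) row-stochastic transition
    matrix; cost and utility are per-cycle values for each state (placeholders,
    to be taken from Table 2). Costs and QALYs are discounted at 5% per annum,
    as stated in the evaluation."""
    occupancy = np.zeros(len(STATES))
    occupancy[0] = 1.0                      # the whole cohort starts 'at risk'
    total_cost = total_qaly = 0.0
    for t in range(n_cycles):
        d = 1.0 / (1.0 + discount) ** (t * cycle_years)  # discount factor
        total_cost += d * occupancy @ cost
        total_qaly += d * occupancy @ (utility * cycle_years)
        occupancy = occupancy @ P           # advance the cohort one cycle
    return total_cost, total_qaly
```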
Interventions included in the evaluation
All strategies were compared with 'standard care', which was assumed to be: a current antimicrobial stewardship programme or antimicrobial restriction policy (AMS), a hand hygiene and environmental cleaning programme for the whole hospital, and the capacity to undertake fecal microbiota transplant (FMT). Rates consistent with those previously published were used [15]: the baseline level of antimicrobial exposure in a hospital at any given time was assumed to be 50% of inpatients [16][17]; the average time for gut flora to be properly restored was 90 days post-infection [18]; and the average length of stay for all hospitalised patients was assumed to be 5.8 days [19].
The interventions considered are described in Table 1. A panel of experienced infection control clinicians screened all interventions for clinical plausibility and practicality, with intervention bundles being included to reflect the reality of practice in service delivery [20]. A pragmatic approach was taken when choosing which interventions were included in the final analysis. This was especially important when considering bundled interventions, with only those that were deemed to be realistically plausible included, to minimise the likelihood of overestimating transmission reduction. In addition, some possible bundle combinations, such as AMS and FMT, were deliberately omitted due to their clinical implausibility. This model is a flexible framework for the evaluation of numerous infection control strategies and includes a range of efficacy estimations for each intervention, based on published estimates. The inclusion of a range of efficacy estimates represents modelled changes to C. difficile transmission rates and shows the range of results that may be achievable under certain local conditions. The research team is happy to share the full model on an open access platform for others to evaluate other strategies, or different strategy efficacy levels that are not explicitly shown in these results and has made all input parameter information available in text and within supporting information files.
Hygiene improvement (HYG) is a combination of two fundamental infection control strategies-hand hygiene and environmental cleaning. Hand hygiene programmes are part of routine hospital infection control practice and the efficacy of environmental cleaning is presented elsewhere [11;21]. Antimicrobial stewardship (AMS) is a common intervention that targets a major risk factor in C. difficile transmission-the overuse of antibiotics. AMS programmes are designed to encourage more careful and controlled prescribing practices, to improve patient outcomes and limit antibiotic associated disease and resistance [22][23]. The effectiveness of AMS programmes has been the subject of debate, with achievable reduction in antimicrobial use being reported to range from 5% to 60% [24][25]. Fecal microbiota transplant (FMT) is where a stool sample from a non-infected donor is transplanted into the infected patient in an attempt to restore their normal gut microbiota. Whilst not commonly thought of as a traditional infection control intervention, there is growing evidence that FMT is particularly effective in treating recurrent infections [26]. Health services can utilise FMT as a transmission-inhibiting intervention as its successful implementation as a therapeutic option will limit in-hospital transmission from recurrent patients.

Table 1. Interventions included in cost-effectiveness analysis.

AMS 1 (Large Reduction): An antimicrobial stewardship programme that reduces antibiotic use across the hospital such that patients in the vulnerable category are reduced from 50% to 25% (best reported reduction attributed to AMS programmes)
AMS 2 (Moderate Reduction): An antimicrobial stewardship programme that reduces antibiotic use across the hospital such that patients in the vulnerable category are reduced from 50% to 40% (average reported reduction attributed to AMS programmes)
HYG 1 (Large Reduction): A hygiene improvement intervention that has the effect of reducing the transmission rate by half
HYG 2 (Moderate Reduction): A hygiene improvement intervention that has the effect of reducing the transmission rate by a quarter
FMT 1 (Moderate Reduction): Expedited gut recovery due to FMT for infected patients (time to recovery halved-45 days)
FMT 2 (Large Reduction): Expedited gut recovery due to FMT for infected patients (best reported recovery rate-10 days)
FMT 3 (Conservative Reduction): Expedited gut recovery due to FMT for infected patients (worst reported recovery rate-62 days)
AMS & HYG (Moderate AMS, Large HYG Reduction): An antimicrobial stewardship programme and hygiene improvement programme delivered as a bundle (reduction in antibiotic use from 50% to 40% of patients; transmission rate halved due to effectiveness of hygiene improvement programme)
HYG & FMT 1 (Moderate FMT, Large HYG Reduction): A hygiene improvement programme delivered in conjunction with FMT for recurrently infected patients (gut recovery time halved due to FMT; transmission rate halved due to effectiveness of hygiene improvement programme)
HYG & FMT 2 (Large FMT, Large HYG Reduction): A hygiene improvement programme delivered in conjunction with FMT for recurrent patients (best reported gut recovery rate due to FMT; transmission rate halved due to effectiveness of hygiene improvement programme)
Data sources
The model is populated with data from a range of sources (Table 2). The estimation of intervention cost followed a published framework of identifying, measuring and valuing resources, to record only the expenditure required to produce a health effect [27]. Inventories detailing the resources required to physically run each intervention in a real-world hospital were developed and can be made available by request, enabling this study's method to be tailored for different health systems around the world. A microcosting approach was taken to value the medical resources required for all interventions in 2015 Australian dollars. Microcosting includes detailing all inputs used in the treatment of a particular patient [28]. Non-medical resources, such as staff time, were estimated using data from the Queensland Department of Health's human resources department. The estimation of infection-related costs was categorised by diagnosis costs, treatment costs and valuation of bed-days lost to CDI. Diagnosis costs included all costs incurred by the healthcare system when identifying and confirming the presence of infection. Only the cost of the most common treatment regimens was included in the analysis, with specialty regimens or rarely used drugs deliberately omitted. Treatment regimens for all infected states were informed by the Australasian Society of Infectious Diseases guidelines for treatment and costs were informed by the Australian Pharmaceutical Benefits Scheme (PBS) [29]. The value of ICU-bed and ward-bed days was informed by published estimates [30][31] and different methods for valuing bed days in monetary terms were explored in scenario analysis. The rates at which patients move between compartments in the model were informed by a hospital morbidity dataset provided by the Western Australian Department of Health data linkage branch, which contained detailed demographic and clinical information for all admitted patients in the 18 months between 1 July 2011 and 31 December 2012. Those with an ICD-10 code for CDI (A04.7) were extracted and examined in further detail. There were 844 C. difficile infections identified when non-severe (749) and severe (95) cases were combined. Patients who suffered an infection but did not require a stay in ICU were assumed to suffer 'non-severe' illness and patients who had an infection and a concurrent stay in ICU were classified as suffering 'severe' illness. Patients who experienced more than one hospitalisation due to the infection were tracked according to a non-identifiable patient code and these data informed the probability of recurrent infection. Patients whose cause of death included the ICD-10 code for C. difficile infection were used to inform the probability of dying from all of the infected health states. The probability of dying from the 'discharged healthy' state was estimated using data from Australian life tables, published by the AIHW [32]. Utility weights were assigned to all health states in the model, with weights reflecting a preference-based valuation of the health state, relevant to the patient's infection status. Where possible, health utility weights were based on estimates from published studies. Expert opinion was used for those health states that could not be informed from the literature. The quality of the evidence used to populate the model was assessed according to published criteria described by Cooper et al [33]. The parameters included in the economic model are in Table 2.
The best available evidence was used in the study, but due to typical limitations in availability and access it was obtained from a range of sources, both within Australia and from published work outside the Australian setting.
Estimating intervention effectiveness
The clinical effectiveness of each intervention was estimated from the results of a previously published stochastic transmission model, which assessed the increase or decrease in C. difficile transmission as a result of that intervention [20]. Efficacy evidence was synthesised from six studies and was used to inform the effectiveness of both bundled approaches and stand-alone interventions. Hygiene improvement was found to have a large effect in decreasing disease incidence on its own (3.2 per 1000 bed days to 1.1 per 1000 bed days), but when coupled with another intervention, such as antimicrobial stewardship, little additional benefit was gained (2.7 per 1000 bed days to 2.3 per 1000 bed days). A meagre reduction in C. difficile incidence was observed for antimicrobial stewardship (2.8 per 1000 bed days to 2.3 per 1000 bed days). Fecal transplant was found to be an ineffective control tool on its own (2.5 per 1000 bed days to 2.4 per 1000 bed days) and, in combination with other transmission reduction strategies such as hygiene improvement, there did not appear to be any collective effect (2.5 per 1000 bed days to 2.8 per 1000 bed days).
Estimating cost-effectiveness
The cost-effectiveness of each intervention is shown by the incremental cost-effectiveness ratio (ICER), which is the change in costs divided by the change in health outcomes between the intervention and the comparator of standard care. Changes in health outcomes were estimated by combining information about intervention effectiveness and its subsequent impact on quality of life. Interventions that reduced C. difficile transmission compared with standard care had a flow-on reduction in the total number of infections, which resulted in a decreased number of patients who spent time in a state of reduced health. The total change in health outcomes was expressed in terms of quality-adjusted life years (QALYs). Changes in total costs were estimated by measuring total infection-related costs for standard care and each intervention. Infection-related costs are a combination of diagnosis, treatment and hospital bed-costs. The accrual of costs is linked to the number of infections that occur when one intervention is in place compared to another, and as seen in Table 2, differ according to infection severity due to disparities in treatment routine and hospital-bed costs. The incremental cost of running each intervention, compared with standard care, was also included when estimating the change to total costs. Practical issues with using the ICER for decision-making exist, given that using the ratio of two numbers has awkward statistical properties [40]. In order to simplify this ratio information to a single number, the net monetary benefit (NMB) framework is used. Results are converted from the ICER to a net monetary benefit value, through the linear rearrangement of the ICER equation, as follows:
NMB = (WTP threshold × Change in Effects) − Change in Costs
The interpretation of cost-effectiveness becomes particularly simple using the NMB: a positive NMB indicates that a strategy is cost-effective and a negative NMB indicates that a strategy is not cost-effective. Using the net benefit framework gives decision-makers a clear comparison of multiple interventions as choosing the optimal intervention is as simple as selecting the intervention that provides the highest mean net monetary benefit. A willingness to pay of $42,000/QALY was used as the decision-making threshold for all analyses, which is in line with published estimates for cost-effectiveness in the Australian setting [41]. A lifetime time horizon has been used for this evaluation and as per previously published guidelines; future costs and health outcomes were both discounted at 5% per annum [42].
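The conversion from changes in costs and effects to NMB is straightforward; the sketch below applies the $42,000/QALY threshold stated above to illustrative inputs (the deterministic HYG 1 figures reported in the next section are used purely as an example, not as a re-analysis).

```python
def net_monetary_benefit(delta_qalys, delta_costs, wtp=42_000):
    """NMB = (WTP threshold x change in effects) - change in costs.
    A positive NMB means the strategy is cost-effective versus the comparator;
    the intervention with the highest mean NMB is the optimal choice."""
    return wtp * delta_qalys - delta_costs

# Example: about 127 QALYs gained and about $2 million saved versus standard care
print(net_monetary_benefit(127, -2_000_000))   # roughly $7.3 million
```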
Fixed value results
Fixed value analysis does not account for any uncertainty in the decision. The results in Table 3 and Fig 2 show that of the ten interventions included, three were not cost-saving (HYG 2, FMT 1 & FMT 3). Hygiene improvement 1 (HYG 1) achieved the greatest health benefits (127 QALYs gained) and the lowest costs (over $2 million saved). Without consideration of uncertainty, HYG 1 dominated all other interventions because it has the greatest costs saved and health benefits gained.
Probabilistic results
Parameter uncertainty was accounted for by using Monte Carlo simulation. In order to account for uncertainty in the model's parameters, 10 000 simulations of the model were made, where in each replication a value randomly drawn from each parameter's distribution was chosen. Table 2 shows the distributions that were applied to each of the model's parameters. At a threshold level of $42,000 per QALY gained, HYG 1 provided the highest mean NMB ($7.5 million). The bundled approach of AMS/HYG delivered the next highest mean NMB ($7.3 million) and AMS 1 provided the lowest mean NMB of all interventions ($15,000). Fig 3 shows the results of the NMB analysis for all interventions. The intervention with the highest mean NMB is deemed to be the optimal decision and should be considered for adoption ahead of the other interventions [43].
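A schematic of this probabilistic analysis is sketched below; `sample_parameters` and `evaluate` are hypothetical placeholders for drawing from the distributions in Table 2 and re-running the Markov model, so this is an outline of the procedure rather than the authors' code.

```python
import numpy as np

N_REPS = 10_000   # number of Monte Carlo replications, as in the analysis
WTP = 42_000      # willingness-to-pay threshold per QALY

def probabilistic_nmb(sample_parameters, evaluate, interventions, seed=1):
    """In each replication a value is drawn from every parameter's distribution
    (sample_parameters), the model is re-run for each intervention
    (evaluate -> delta_cost, delta_qaly versus standard care), and the NMB is
    stored. The intervention with the highest mean NMB is deemed optimal."""
    rng = np.random.default_rng(seed)
    nmb = {name: np.empty(N_REPS) for name in interventions}
    for i in range(N_REPS):
        params = sample_parameters(rng)
        for name in interventions:
            d_cost, d_qaly = evaluate(name, params)
            nmb[name][i] = WTP * d_qaly - d_cost
    optimal = max(nmb, key=lambda n: nmb[n].mean())
    return optimal, nmb
```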
Scenario analysis
Uncertainties in other aspects of the evaluation also exist and are explored through scenario analyses. Different clinical scenarios were examined, where key parameters of the model were altered. The modified scenarios reflected plausible situations and tested the robustness of the results; the altered parameters included the monetary value assigned to a bed day, patient length of stay, the mortality rate and the infection transmission rate, reflecting, for example, serious outbreak conditions and differences that may exist across the Australian setting. The results of the scenario analyses are summarised in Table 4. Hygiene improvement 1 (HYG 1) was optimal in most scenarios-when the bed day cost was altered, when LOS was increased and when the mortality rate was increased. Only when the infection rate was increased was another intervention optimal; in this scenario the AMS/HYG bundled intervention provided the highest mean NMB. The scenario with the greatest certainty, indicated by the error probability, was the one in which the infection rate was increased. In this scenario the probability of incorrectly choosing the AMS/HYG bundle as optimal was 0.56. The error probabilities relating to the other scenarios were all high-0.64 for scenarios 2 and 4 and 0.74 for scenario 1. This means that despite being the intervention that provided the highest NMB, the decision regarding HYG 1 as being optimal in multiple scenarios is highly uncertain.
Discussion
Hospitals in Australia have access to multiple infection control interventions in the quest to reduce CDI. These results show the majority of interventions resulted in health improvements, indicated by QALYs gained. QALY gains were driven by a reduction in the number of infected patients as a result of a successful intervention, when compared with the number of infected patients who existed under the comparator. The results also showed that bundled interventions, as opposed to stand-alone interventions, offer the best opportunity for health improvements. Three out of the four highest-ranking interventions in terms of highest expected QALY gains were all bundles. The findings show that if decision-makers did not adopt any of the proposed interventions, improvements in health outcomes and cost savings would be forgone. If decisions are not made about the expenditure of scarce health resources, which includes decisions about disinvesting in initiatives that are not cost-effective, increased expenditure for no health outcome gain is probable.
The findings from model-based evaluations are always dependent on the assumptions made. Exclusions and simplifications are not uncommon limitations and this study was no different. There were limitations on how transition probabilities were estimated as well as limitations regarding the accuracy with which hospital-acquired and community-acquired infections are described due to cross-jurisdictional differences in data collection. The estimation of quality of life utility weights was not possible from primary data for all health states in the model and estimating costs is a difficult process, especially when they are not routinely collected at the hospital level for research purposes. Finally, the choice of interventions for inclusion in the project was deliberately simplified and did not include all possible C. difficile-related interventions. However, no economic evaluation of CDI has been undertaken in this setting, making the findings an important addition to existing clinical knowledge. Modelling studies also have noteworthy strengths-no prospective randomised control trial could be designed and commissioned to compare multiple interventions side-by-side, resulting in practical information for decision-makers. The majority of the data used to inform the results was derived from local sources, making the results very applicable to the Australian setting. Finally, the methods that have been used in this study are transparent, reproducible and follow published standards for this type of evaluation.
Prior to this research, Australian decision-makers have had limited access to economic evidence to inform their decision-making about managing risk of CDI. Although there is uncertainty in the results of this evaluation, doing nothing to change the management approach relating to C. difficile is a decision in itself that should be weighed up with appropriate consideration. Decision-makers need to understand that remaining with the status quo-simply maintaining one's current or previous position-is itself an explicit decision. Put simply, if investments are not made in interventions that clearly yield gains in health outcomes compared to the status quo, the allocation and use of scarce healthcare resources is inappropriate.
Limitations
Our study has limitations. The model does not account for epidemiologic parameters relating to CDI, such as the possibility of being admitted with infection. This is due to the issue that exists in the way that CDI is classified and reported in Australia. The definition of what constitutes an infection is well known and easily accessible [44], but there remain concerns about the accuracy with which hospital-acquired and community-acquired infections are described as there is bound to be some overlap between genuine hospital-acquired infections and community-acquired infections that are diagnosed and confirmed during a hospital stay. The absence of individual clinical test results meant that the classification of illness severity was simplified. For this project, it was assumed that the patients who had a confirmed case of infection and a parallel stay in ICU suffered severe infection and those who did not were classified as non-severe. This assumption may have resulted in an overestimation of severe cases, which could have had an effect on the results. Estimation of intervention efficacy was derived from the synthesis of a variety of evidence sources, which is typical for this type of evaluation. The particular estimates used in the model and presented here have been informed by the literature but are by no means definitive. As further evidence becomes available, the model can be updated and adapted to incorporate this.
Finally, given that the results were produced using predominantly Australian data, they are primarily intended for the Australian setting. However, the structure of the study is based on a flexible and adaptable Markov model, allowing inputs from newly identified literature or different healthcare settings to be included, making the results more appropriate for local settings in other jurisdictions around the world. This flexibility also extends to the capacity to include other interventions that exist to reduce C. difficile transmission in the hospital setting, if they are deemed more locally relevant than those presented here.
Comparison to existing literature
There is little knowledge around the world about the cost of C. difficile from an economic perspective. Since 2014, there have been three economic evaluations published that focus on the cost-effectiveness of fecal microbiota transplant (FMT) in different settings [36;45-46]. All three studies found that FMT was a cost-effective approach to treating recurrent infection but none explored FMT as a method for inhibiting transmission of the pathogen. Nowak et al. undertook a study on the economic impact of an antimicrobial stewardship programme, but its focus was not solely on Clostridium difficile infection and included other HAIs-vancomycin-resistant enterococci (VRE) and methicillin-resistant Staphylococcus aureus (MRSA)-in the analysis [22]. The results are not specific to C. difficile and are not optimal for decision-making purposes. Nelson et al. conducted an economic evaluation of six different interventions in the US setting and found that implementing infection control bundles is more cost-effective than stand-alone interventions, which supports our findings [36].
Conclusion
Until now, economic evidence relating to CDI has not been available for the Australian setting. The findings of this evaluation should be considered together with other relevant information that is appropriate at a local level, such as clinical outcomes, budget constraints and treatment priorities. The results support further investment in infection control interventions, providing evidence that such investment is an efficient use of scarce resources as cost-effective outcomes are likely to be realised.
Supporting information S1 File. Further input parameters for economic model. (DOCX)
Theoretical Insights into the Catalytic Effect of Transition-Metal Ions on the Aquathermal Degradation of Sulfur-Containing Heavy Oil: A DFT Study of Cyclohexyl Phenyl Sulfide Cleavage
Steam injection is the most widely used technique for effectively reducing the viscosity of heavy oil in heavy oil production, in which in situ upgrading of heavy oil by aquathermolysis plays an important role. Earlier, transition-metal catalysts have been used for improving the efficiency of steam injection by catalytic aquathermolysis and achieving a higher degree of in situ oil upgrading. However, the unclear mechanism of aquathermolysis makes it difficult to choose efficient catalysts for different types of heavy oil. This theoretical study is aimed at deeply understanding the mechanism of in situ upgrading of sulfur-containing heavy oil and its catalysis. For this purpose, cyclohexyl phenyl sulfide (CPS) is selected as a model compound of sulfur-containing oil components, and, for the first time, a catalytic effect of transition metals on the thermochemistry and kinetics of its aquathermolysis is investigated by the density functional theory (DFT) methods with the use of the Becke three-parameter Lee–Yang–Parr (B3LYP), ωB97X-D, and M06-2X functionals. Calculation results show that the hydrolysis of CPS is characterized by fairly high energy barriers in comparison with other possible reaction routes leading to the cleavage of C–S bonds, while the heterolysis of C–S bonds in the presence of protons has a substantially lower kinetic barrier. According to the theoretical analysis, transition-metal ions significantly reduce the kinetic barrier of heterolysis. The Cu2+ ion outperforms the other investigated metal ions and the hydrogen ion in the calculated rate constant by 5–6 (depending on the metal) and 7 orders of magnitude, respectively. The catalytic activity of the investigated transition-metal ions is arranged in the following sequence, depending on the used DFT functional: Cu2+ ≫ Co2+ ≈ Ni2+ > Fe2+. It is theoretically confirmed that transition-metal ions, especially Cu2+, can serve as effective catalysts in aquathermolysis reactions. The proposed quantum-chemical approach for studying the catalytic aquathermolysis provides a new supplementary theoretical tool that can be used in the development of catalysts for different chemical transformations of heavy oil components in reservoirs due to hydrothermal treatment.
■ INTRODUCTION
Heavy oil and bitumen account for approximately 60−70% of total proved oil reserves all over the world. 1 The highly efficient development of heavy oils is of great significance for meeting the increasing demand for energy. However, unlike conventional light oil, the development of these heavy reserves is more difficult and requires special recovery methods (mainly thermal methods) due to their high viscosity and low American Petroleum Institute (API) gravity. 1 Steam injection is the most widely used technique for heavy oil production. 2−4 In situ oil upgrading by the aquathermolysis reaction to reduce oil viscosity is among the main mechanisms in a steam injection process. 3,5 It is widely accepted that the presence of heteroatoms (S, N, O) is one of the main reasons leading to the high viscosity of heavy oil because the heteroatomic links between hydrocarbon fragments, such as −O−, −S−, and −NH−, give rise to an increase in the molecular weight of oil components. 6 These heteroatoms usually exist in resins and asphaltene molecules in the form of C−O, C−N, and C−S bonds. 7 It is believed that the heat energy that steam passes to the hydrocarbons causes the cleavage of these heteroatomic bridges, which thus breaks large molecules into smaller ones, consequently reducing the viscosity and improving the flow properties of heavy oils. 3 In the aquathermolysis reaction, the weakest bonds are always cleaved first. The bond dissociation energies of the basic bonds C−H, C−C, C−N, and C−S are 96−99, 83−85, 69−75, and 66 kcal/mol, respectively, which means that the C−S bond is the one that can be cleaved most easily. 6,8 It is believed that even a small fraction of bond breakage can lead to a huge improvement in the flow properties of heavy oils. However, to accelerate C−S bond cleavage reactions at steam injection conditions (500−600 K), some catalysts need to be used.
Many experimental and theoretical studies have been carried out to get a clear picture of the mechanism of decomposition of heavy oil in the aquathermolysis reaction. 9−11 For this purpose, it is necessary to use model compounds for experiments or computer simulations owing to the fact that the complicated composition of heavy oil makes it difficult to understand the cleavage mechanism of the C−heteroatom bonds. Organic species containing aliphatic and aromatic fragments, as well as oxygen, sulfur, and nitrogen functional groups, are suitable model compounds for simulating the chemistry of thermal processes that takes place in natural environments. For example, thermolysis reactions of cyclohexyl phenyl ether, sulfides, and amines have been investigated under different reaction conditions in a series of experimental studies. 10−13 As evidenced in the experiments, an acid-catalyzed carbocation mechanism operates for these systems at high temperatures in aqueous media. 12−15 Gomes et al. used three different composite approaches (CBS-Q, G3, and G3MP2B3) to calculate the thermodynamic parameters of the conversion reactions of sulfur-containing compounds under the action of steam and heat. 16 Vasiliou et al. studied the thermal decomposition mechanism of thiophene both experimentally and theoretically and found that thiophene undergoes unimolecular decomposition by five pathways: C4H4S → (1) SCCH2 + HCCH, (2) CS + HCCCH3, (3) HCS + HCCCH2, (4) H2S + HCC−CCH, and (5) S + HCC−CHCH2. 17 Ur Rahman Memon et al. proposed that the thermal decomposition of thiophene was initiated by the homolytic C−S bond cleavage followed by ring opening and isomerization. 18 Hore et al. conducted a study on thiophene and other five-membered ring compounds by laser pyrolysis. They proposed that 1,2-hydrogen transfer is the most probable initiation stage in the decomposition of thiophene. 19 Although many efforts have been made, there is still a long way to go to clearly understand the decomposition mechanism of heavy oil in the aquathermolysis process, especially catalytic aquathermolysis. With more and more problems being exposed during steam injection applications, such as low efficiency and high energy and freshwater consumption to produce large amounts of steam, the use of catalysts to promote aquathermolysis has been increasing and has recently become a very important area. 6,20 Transition-metal-based catalysts are the most widely used catalysts in the aquathermolysis process, including water-soluble inorganic metal salts; oil-soluble organic metal compounds; nano/microparticles of metal, metal oxide, or sulfide; etc. 3,6 The study is aimed at theoretically calculating the kinetic and thermodynamic parameters of possible steps in the metal-catalyzed aquathermolysis of sulfur-containing heavy oil by density functional theory (DFT) methods and comparing them with the parameters of aquathermal processes that take place in the presence of H+ ions. To achieve these purposes, cyclohexyl phenyl sulfide (CPS) is selected as a model compound of sulfur-containing oil components, and its aquathermolysis mechanism and catalysis were investigated. As for catalysts, Fe, Co, Ni, and Cu were chosen as they are among the most popular and least expensive metal-based catalysts for the aquathermolysis process. Cyclohexyl phenyl sulfide is a typical model compound for simulating high-sulfur heavy oil components that are responsible for the cross-linking and high viscosity.
A comprehensive ab initio study of different reaction pathways of aquathermolysis of cyclohexyl phenyl sulfide (CPS) also showed that the proton-catalyzed carbocation mechanism, which has the lowest energy barrier for the cleavage of the C−S bond, is a major route in the aquathermolysis of CPS in the presence of H+ ions. 21 The major final products of the reaction chain in this case are 1-methylcyclopentene and thiophenol, which agrees well with the experimental data. 13 As for the other aquathermal processes, such as homolytic cleavage with formation of radical products and hydrolysis, their thermodynamic characteristics and energy barriers for C−S bond cleavage are significantly less favorable. 21 However, it should be noted that the delivery of H+ ions into the organic media of heavy oil components, in which the aquathermolysis processes take place, is difficult. Metal-based catalysts seem to be able to catalyze this process and promote its occurrence, since some of them have been shown to be effective in improving the level of in situ upgrading in aquathermolysis processes. In this work, in which the calculation of catalytic aquathermolysis using DFT is attempted for the first time, Cu2+, Co2+, Ni2+, and Fe2+ were used as catalysts in the first step. In future work, more types of their compounds, oxides, sulfides, etc., will be calculated.
It is worth noting that theoretical quantum-chemical studies of the kinetics and thermodynamics of these catalytic reactions are practically absent despite the fact that the current level of the development of quantum-chemical methods and computational power allows one to calculate parameters of complex chemical processes that take place in high-viscosity oil reservoirs under high pressures and temperatures. Therefore, the theoretical calculation method used in this work for studying the catalytic aquathermolysis can be a valuable supplement in the development of catalysts for different chemical transformations occurring in oil reservoirs under high-temperature conditions.
■ THEORETICAL CALCULATIONS
Quantum-chemical calculations were performed using the Gaussian09 22 and PRIRODA 23 software packages.
The potential energy surfaces (PESs) of the reactions of CPS hydrolysis (reaction involving water molecules) were calculated using the DFT method with the Perdew−Burke−Ernzerhof (PBE) functional in the L1 basis set (PRIRODA). The PES values were scanned by varying the distance between the oxygen atom of the water molecule and the carbon atom of the C−S bond involved in the hydrolysis reaction at steps of 0.02 Å and by optimizing the geometry of the reactants at each step (the calculations were started from a distance of 5 Å). The scanning process aimed at finding the PES region in which the energy and geometry of neighboring intermediate states change drastically. In the next step, the Hessian matrices (the matrices of second-order derivatives of the energy with respect to atomic coordinates) of the intermediate structures immediately before and after the jump region were calculated using the PRIRODA software package. After that, the most negative frequency mode of the obtained structure is determined and a stationary point is searched for along this frequency mode until the convergence criteria for the energy between iteration points (1 × 10⁻⁷ a.u.) and for the vector of the Hessian-matrix partial derivatives (5 × 10⁻⁶ a.u.) are met. This algorithm makes it possible to find the transition-state geometry with a single imaginary frequency mode that determines the reaction path in intrinsic reaction coordinates (IRC).
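The scan-and-refine strategy described above can be summarised in a short driver routine; the sketch below is purely schematic, with `optimize_at` and `find_saddle` standing in for the constrained optimisation and Hessian/mode-following steps performed with PRIRODA (they are assumed callables supplied by the user, not real package calls).

```python
def scan_and_locate_ts(optimize_at, find_saddle,
                       r_start=5.0, r_step=0.02, r_stop=1.4, jump=0.01):
    """Sketch of the relaxed-scan / mode-following strategy described above.
    optimize_at(r): constrained optimisation with the O...C distance fixed at r,
                    returning a dict with at least an 'energy' entry (hartree).
    find_saddle(s): Hessian calculation and most-negative-mode following from
                    structure s, returning a refined transition state or None.
    The jump threshold is a hypothetical parameter marking an abrupt change."""
    previous = None
    r = r_start
    while r > r_stop:
        current = optimize_at(r)          # relaxed geometry at this scan point
        if previous is not None and abs(current["energy"] - previous["energy"]) > jump:
            # Abrupt change located: refine the structures just before/after it
            return find_saddle(previous) or find_saddle(current)
        previous = current
        r -= r_step                        # shorten the O...C distance by 0.02 A
    return None
```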
The energies of starting reagents and reaction products of the elementary stages of aquathermolysis reaction sequences involving the heterolytic cleavage with formation of the cyclohexyl carbocation, as well as the chemical structure parameters, were calculated using the DFT method with the Becke three-parameter Lee−Yang−Parr (B3LYP) hybrid functional in the triple zeta valence polarization (TZVP) basis set within the integral equation formalism-polarizable continuum model (IEF-PCM) solvent reaction field model (embedded in Gaussian09 software).
The thermodynamic functions and parameters (electronic energy E and Gibbs free energy G) of reactants were calculated using Gaussian09 software by the DFT method (the B3LYP hybrid functional and the TZVP basis set). The energy changes of reactions were calculated by taking into account the calculated energies of reactants. The thermochemical characteristics of reactions at different temperatures were calculated at steps of 100°C using different hybrid functionals (B3LYP, ωB97X-D, and M06-2X).
■ RESULTS AND DISCUSSION
Several reaction mechanisms were considered in previous ab initio theoretical studies of the reaction of aquathermolysis of cyclohexyl phenyl sulfide, namely, homolytic cleavage with formation of radical species, proton-catalyzed heterolytic cleavage, and hydrolysis. 24 On the basis of these results and the results of experimental studies of the reaction products in the laboratory aquathermolysis process, 13 we suggest that heterolytic cleavage via ionic routes is also the most preferable mechanism involved in the transition-metal-catalyzed aquathermolysis processes of heavy oils, which we have studied recently under laboratory conditions. 25−27 Even if the hydrolysis routes also involve ionic reaction schemes, they are characterized by rather high energy barriers according to the published results of ab initio calculations. 24 To compare these two possible mechanisms, i.e., heterolytic cleavage and hydrolysis, which are most likely involved in the processes of aquathermolysis of CPS under conditions of catalysis by transition-metal ions, we have calculated the thermodynamic characteristics of these processes using rigorous quantum-chemical approaches. Homolytic cleavage was not considered in this study since transition-metal ions hardly have catalytic effects on this process, although the calculated energy barriers of the homolytic cleavage of C−S bonds in CPS at ambient temperatures (about 190 kJ/mol) were shown to be somewhat lower than those of the heterolytic cleavage (about 290 kJ/mol) and can be further lowered under conditions of aquathermal treatment. 24 First, we have studied the hydrolysis of cyclohexyl phenyl sulfide in the uncatalyzed process and under conditions of catalysis with Fe2+ ions using the DFT method within the PRIRODA software package. 23 The relevant reaction routes are given in Scheme 1.
Two reaction paths for the hydrolysis process are considered: one path involves the formation of thiophenol and cyclohexanol as reaction products, and the second path leads to the formation of phenol and cyclohexanethiol (Scheme 1a). In the theoretical calculations of noncatalytic hydrolysis of CPS at 0 K, which were performed using the PRIRODA software package with the PBE functional in the L1 basis set, we have obtained transition states with a single imaginary frequency mode and determined both reaction paths in the IRC starting from a molecular pair comprised of CPS and one water molecule. The corresponding energy diagram is shown in Figure 1. As can be seen, the energy barriers ΔE‡ calculated with respect to the energy of the reagents are close to the published results obtained using the PRIRODA software package. 23 In the process of catalysis by Fe2+ ions, the energy barriers of the hydrolysis process are noticeably lowered (Figure 2). However, we were not able to locate the first-order saddle point with a single imaginary frequency mode in the configuration space because of overcrowding of the reaction site and sudden rotation of the cyclohexyl moiety to avoid stereochemical tension. The latter fact leads to an abrupt change in the structural configuration of the transition state while scanning along the vector of the Hessian-matrix second derivatives and to the impossibility of finding a configuration with a single imaginary frequency mode. Consequently, the hydrolysis route with the formation of cyclohexanol cannot be considered as a preferable reaction route despite the fact that it was preferable under conditions of noncatalytic hydrolysis.
Thus, the calculation results demonstrate the catalytic effect of the transition-metal ion on the hydrolysis of cyclohexyl phenyl sulfide. Nevertheless, despite a substantial theoretical catalytic effect of the iron(II) ions on the hydrolysis of cyclohexyl phenyl sulfide, the calculated energy barrier lowering is still not enough to surpass the energy characteristics of the direct heterolytic cleavage of the C−S bond with the formation of a cyclohexyl cation and phenyl sulfide.
Considering these findings, we decided to study the latter process in more detail. Previously, both the experimental 13 and theoretical 21 studies confirmed that the acid-catalyzed mechanism of heterolytic cleavage of CPS with the formation of methylcyclopentene and thiophenol as main products should be a major route in the aquathermolysis process. In addition, the mass spectral analysis of the reaction products of aquathermolysis of cyclohexyl phenyl sulfide shows that thiophenol is converted in significant amounts to secondary reaction products, such as the products of dimerization and reactions with intermediate cyclohexyl and methylcyclopentyl carbocations. 13 A general mechanism that explains the formation of these intermediates and reaction products in the proton-catalyzed aquathermolysis process of cross-linked cyclohexyl phenyl compounds has been proposed in the same study. 13 On the basis of this mechanism, we propose the sequence of reactions (shown in Scheme 2) to study the catalytic action of H + and iron(II), cobalt(II), nickel(II), and copper(II) ions in the heterolytic cleavage of C−S bonds in CPS. Table 1 shows the calculated values of the electronic energy and Gibbs free energy changes for each step of the reaction sequence and for the total aquathermolysis reaction under conditions when an acid and different transition-metal ions are used as the catalyst. As can be seen from Table 1, the second step involving the heterolytic breakage of C−S bonds is the most energy-consuming step in the reaction mechanism. Hence, it can be considered as a rate-limiting step in the entire sequence of reactions. The obtained energy barrier values of the heterolytic cleavage of a prereaction complex of CPS with H 3 O + are close to the published values. 24 At the same time, this step is a barrierless reaction followed by subsequent rearrangement of the generated cyclohexyl cation into the more stable methylcyclopentyl cation in the third exothermic reaction step.
Such an energy profile of the total reaction of CPS aquathermolysis allows one to consider the energy change in the second step as a kinetic barrier of the total reaction sequence. Hereinafter, the change in the electronic energy in the second step of the heterolytic aquathermolysis process is called "electronic energy of activation" and, accordingly, the change in the Gibbs energy is called "Gibbs energy of activation". Consequently, it is the lowering of these activation parameters that determines the catalytic action of the proton and various transition-metal ions on the aquathermolysis reaction. As shown in Table 1, the calculated energy barriers of the heterolytic cleavage of the C−S bond under catalysis with transition metals at room temperature in most cases are somewhat lower than the energy barrier of the acid-catalyzed cleavage. Remarkably, the calculation results show that the Cu 2+ ion may be substantially superior in catalytic activity not only to acids but also to the other transition-metal counterparts.
There is a well-known relationship between the standard rate constant (k0) and the standard Gibbs free energy of activation (Δ#G0) in the transition state theory of monomolecular reactions 28

k0 = (kB T/h) exp(−Δ#G0/(RT))  (1)

where kB is the Boltzmann constant, T is the temperature, h is the Planck constant, and R is the gas constant. In this study, the reacting mixture was considered as a quasi-homogeneous mixture with an even distribution of reactants, in which intermolecular interactions are intentionally neglected so that formula 1 for ideal gases can be applied for calculating the standard rate constant of the heterolytic cleavage of prereaction complexes (second step in Scheme 2). This simplified model was used in a comparative analysis of the relative reaction activities of the catalytic systems. For the analysis, the k0 values are calculated using formula 1. In the calculations, Δ#G0 was considered to be equal to the ΔG2 values (Table 2) for the second rate-limiting step in the sequence of reactions shown in Scheme 2.
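As a minimal numerical sketch of eq 1, the snippet below computes k0 for assumed Gibbs energies of activation; the barrier values are placeholders, not the ΔG2 values of Table 2.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_k0(delta_g_kj_per_mol: float, temperature_k: float) -> float:
    """Standard rate constant from eq 1: k0 = (kB*T/h) * exp(-dG/(R*T))."""
    dg_j_per_mol = delta_g_kj_per_mol * 1e3
    return (K_B * temperature_k / H) * math.exp(-dg_j_per_mol / (R * temperature_k))

# Placeholder barriers (kJ/mol) compared at 298.15 K.
for dg in (90.0, 120.0):
    print(f"dG# = {dg:5.1f} kJ/mol -> k0 = {eyring_k0(dg, 298.15):.3e} s^-1")
```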
For preliminary estimates, we analyzed the relative catalytic activities of different transition metals at room temperature (298.15 K). A comparison of the results given in Table 2 shows that these transition-metal ions (Fe 2+ , Ni 2+ , Co 2+ , and Cu 2+ ) are able to significantly increase the rate of the aquathermal cleavage of organic sulfides. With the Cu 2+ catalyst, the standard rate constant of the heterolytic cleavage step at 298.15 K is approximately 5 orders of magnitude higher than the rate constants of other studied catalytic systems.
To evaluate the catalytic activity of different transition-metal ions in the processes of heterolytic cleavage of C−S bonds under conditions of aquathermal treatment, standard Gibbs free energies of activation of the second step of CPS heterolysis shown in Scheme 2 were calculated at different temperatures using the DFT method with different functionals (Table 3). For Fe 2+ and Co 2+ , their high-spin states were determined to be the ground states at 0 K in the DFT calculations with the PBE functional and L1 basis set in the PRIRODA software package. 23 The difference between the energies of the high- and low-spin states in the intermediate reaction complexes was 470 kJ/mol for Fe 2+ and 26 kJ/mol for Co 2+ . As can be seen from Table 3, the B3LYP functional probably gives underestimated values of the free energy changes in comparison with the ωB97X-D and M06-2X functionals. The ωB97X-D long-range-corrected hybrid functional with empirical dispersion corrections has been reported to provide satisfactory accuracy for thermochemistry, kinetics, and noncovalent interactions. 29 The same is true with regard to the M06-2X functional of the Minnesota family. 30 Despite the popularity of the B3LYP functional in quantum-chemical calculations, it may give less reliable results for the thermochemistry of transition-metal complexes. The Gibbs energy changes calculated in this study with the use of the ωB97X-D and M06-2X functionals are close to each other in most cases (see Table 3). Therefore, the subsequent discussion is mainly based on the data obtained with the ωB97X-D and M06-2X functionals.
According to the DFT/ωB97X-D and DFT/M06-2X methods, the calculated values of the Gibbs energy of activation decrease, with an increase in the temperature from 300 K (ambient temperature) to 700 K (close to temperatures achieved under conditions of aquathermal treatment of heavy oil reservoirs), by a factor of 3−4 in the case of a catalytic system based on Fe 2+ , by a factor of about 3 for Co 2+ , by a factor of 4 to 38 for Cu 2+ , and by a factor of 2−2.5 for Ni 2+ . Cu 2+ significantly outperforms its counterparts in catalytic activity, as can be seen from the calculated Gibbs energies of activation. The investigated transition-metal ions can be arranged in the following descending order by their catalytic action: Cu 2+ > Co 2+ ≈ Ni 2+ > Fe 2+ .
The kinetic patterns of the catalytic cleavage of CPS are more clearly illustrated by the corresponding kinetic constants calculated using eq 1. Based on the calculated standard rate constants (Table 4), all of the investigated transition-metal catalytic systems significantly accelerate the reaction rate at 500−600 K. This temperature range is usually achieved in reservoirs during the steam injection process for heavy oil recovery, which means that these metals have potential in catalyzing aquathermolysis processes in a real steam injection project in the oilfield. In the case of Cu 2+ , a relatively high kinetic constant is obtained even at T = 400 K (see Table 4). Figure 3 shows the logarithmic temperature dependences of the half-life of the reaction of heterolytic cleavage of C−S bonds in CPS for different catalysts, which are determined under the assumption of first-order decay by the following expression: t 1/2 = ln 2/k 0 . As noted above, the activation parameters Δ#G0 calculated by the DFT/B3LYP method are most likely significantly underestimated. For this reason, the half-life of the reaction corresponding to t 1/2 = 1 h achieved with this functional seems improbable in the temperature range of about 225−300 K for the different catalytic systems, let alone a half-life of t 1/2 = 1 s for different transition metals in the temperature range of 260−340 K (see Figure 3a). A half-life of 1 h and 1 s with the ωB97X-D functional can be achieved for the different catalytic systems in the temperature ranges of 310−375 and 350−430 K, respectively (Figure 3b). These simulation results obtained with the ωB97X-D functional seem to be more reliable. In comparison to the ωB97X-D functional, the M06-2X functional gives somewhat higher values (t 1/2 = 1 h and t 1/2 = 1 s in the temperature ranges of 340−410 and 400−470 K, respectively) (see Figure 3c).
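The half-life estimate t1/2 = ln 2/k0 used for Figure 3 is straightforward to reproduce; the sketch below combines it with eq 1 for an assumed, placeholder activation barrier to illustrate the temperature trend (the barrier is not taken from Table 3).

```python
import math

K_B, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618  # SI units

def k0(dg_kj_per_mol: float, temperature_k: float) -> float:
    """Transition-state-theory rate constant (eq 1)."""
    return (K_B * temperature_k / H) * math.exp(-dg_kj_per_mol * 1e3 / (R * temperature_k))

def half_life_s(dg_kj_per_mol: float, temperature_k: float) -> float:
    """First-order half-life t1/2 = ln 2 / k0, as used for Figure 3."""
    return math.log(2) / k0(dg_kj_per_mol, temperature_k)

# Placeholder barrier of 110 kJ/mol: prints how t1/2 shortens with temperature.
for T in (300.0, 400.0, 500.0, 600.0, 700.0):
    print(f"T = {T:5.0f} K -> t1/2 = {half_life_s(110.0, T):.3e} s")
```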
To understand the reasons for the substantial difference in the catalytic activity of Cu 2+ , Mulliken population analysis of the charge densities on relevant atoms of the prereaction complexes is performed, and the bond lengths and highest occupied molecular orbital (HOMO)−lowest unoccupied molecular orbital (LUMO) gaps of these complexes are compared (Table 5). The analysis shows that Cu 2+ has significant differences in the charge distribution, M−S bond length, and HOMO−LUMO gap compared to those for the other ions. At the same time, there is no direct relationship between the reference data on the ionic radii 31 and the considered parameters for the transition metals under study. As can be seen from Table 5, the effect of increased catalytic activity of Cu 2+ in comparison with the other metals is determined by a shorter M−S bond length, a substantial distribution of the positive charge to S, and a larger HOMO−LUMO gap in the prereaction complex. The Mulliken population analysis shows a significant positive charge on the S atom and a less negative charge on the C atom of the cyclohexyl ring in the prereaction complex with Cu 2+ compared to the other metals, which should facilitate the elimination of the cyclohexyl cation in the cleavage reaction. There are no significant differences in the C−S bond lengths of the prereaction complexes with different metals; however, this bond in the starting CPS was noticeably shorter in length (1.845 Å). The HOMO−LUMO gaps found for the prereaction complexes require further analysis beyond the scope of this study but allow us to make a preliminary conclusion that the lower gaps for the complexes with Ni 2+ , Co 2+ , and Fe 2+ give rise to their greater stability through delocalization of the energy when compared to the complex with Cu 2+ . The lower stability of the prereaction complex with Cu 2+ explains its higher reactivity and the enhanced catalytic activity of Cu 2+ in the cleavage of C−S bonds.
■ CONCLUSIONS
In summary, using the DFT methods, it is confirmed that hydrolysis is not a kinetically preferred process among the possible reaction routes in the aquathermolysis process of the model reagent (CPS), while the reaction sequence involving the heterolytic cleavage of C−S bonds with formation of the cyclohexyl carbocation has a substantially lower kinetic barrier in the rate-limiting step. The results of the simulation of the thermochemistry and kinetics show that transition-metal ions, such as Cu 2+ , Fe 2+ , Co 2+ , and Ni 2+ , noticeably reduce the kinetic barrier of the hydrolysis of CPS. Moreover, transition-metal ions reduce the kinetic barrier of the heterolytic thermal cleavage of CPS more substantially than the acid-catalyzed process does, as verified by the DFT calculation results obtained with the B3LYP, ωB97X-D, and M06-2X functionals. Also, the estimates of the half-lives of the reaction for ideal gas conditions show that relatively short half-lives in the heterolytic cleavage of CPS can be achieved with transition metals in the temperature range of 400−500 K. All of these results confirm theoretically that transition-metal ions can serve as effective catalysts in the downhole upgrading of high-sulfur heavy oil under conditions of superheated steam injection. In general, the catalytic activity of the investigated transition-metal ions can be arranged in the order Cu 2+ ≫ Co 2+ ≈ Ni 2+ > Fe 2+ . In terms of the rate constant, the Cu 2+ -based catalytic system outperforms the other metals by more than 5 orders of magnitude.
This first application of DFT calculations to catalytic aquathermolysis not only gives important information about how to promote the aquathermolysis reaction in the steam injection process but also provides a valuable theoretical calculation method to supplement the development of catalysts for different chemical transformations occurring in oil reservoirs under high-temperature conditions, as well as in other industrial areas.
Seafood Waste as Attractive Source of Chitin and Chitosan Production and Their Applications
Chitosan is a cationic polymer obtained by deacetylation of chitin, found abundantly in crustacean, insect, arthropod exoskeletons, and molluscs. The process of obtaining chitin by the chemical extraction method comprises the steps of deproteinization, demineralization, and discoloration. To obtain chitosan, the deacetylation of chitin is necessary. These polymers can also be extracted through the biological extraction method involving the use of microorganisms. Chitosan has biodegradable and biocompatible properties, being applied in the pharmaceutical, cosmetic, food, biomedical, chemical, and textile industries. Chitosan and its derivatives may be used in the form of gels, beads, membranes, films, and sponges, depending on their application. Polymer blending can also be performed to improve the mechanical properties of the bioproduct. This review aims to provide the latest information on existing methods for chitin and chitosan recovery from marine waste as well as their applications.
Introduction
The reuse of waste from the fishing industry is not a common practice, and a large percentage of this waste biomass is discarded directly into the environment without previous treatment [1]. Any material that is not used during its production or consumption process due to technology or market limitations, and which can cause damage to the environment when not properly managed, is considered waste [2,3]. However, this type of waste is a source of raw material with high benefits and can be used to produce biocompounds [4].

New processing and management techniques for these wastes are needed in order to generate quality co-products and reduce environmental impacts, which would in turn foster job creation and the sustainable development of the fishing industry [5,6].
The seafood industry annually generates about 10^6 tons of waste, most of which is destined for composting or to be converted into low value-added products such as animal feed and fertilizers [7]. In this context, approximately 2000 tons of chitosan is produced annually, whose main source of extraction is shrimp and crab shell residues [8]. Chitin is widely distributed in nature and is the main element of the marine invertebrate exoskeleton, and it can be found in the structure of insects, arthropods, and molluscs [24][25][26]. Figure 2 presents the main sources of chitin production and its extraction. Chitin polymorphism can be visualized using X-ray diffraction, where three crystalline structures are observed, α, β, and γ, which differ by the number of chains per cell, degree of hydration, and unit size. The α-chitin is the most abundant form, being found in arthropod exoskeletons, where the dispositions of the polymeric chains are antiparallel, which favors the existence of numerous inter- and intra-chain hydrogen bonds that result in a densely packed material (Figure 3). In β-chitin, the disposition is parallel, and these chains are found in animals that show flexibility and resistance, such as squids. The γ-chitin displays a mixture of both positions [27,28].
Chitin derivatives have high economic value due to their biological activities and applications, being biodegradable and biocompatible polymers as well as produced by renewable natural sources. Harnessing the by-products of crustacean processing is a profitable activity because of their richness in high value-added compounds [29,30].
Biosynthesis of Chitin
Chitin occurs in complexes strongly linked with other substances in the cuticles of crustaceans, and some portions of polypeptides are linked to a small number of amino groups [31].
Generally, the carbon source used for chitin synthesis is glucose, starting the process of glycogen catalysis by the enzyme phosphorylase and being converted into glucose-1-P. In the presence of phosphomutase, glucose-6-P is formed and further converted to fructose-6-P by hexokinase. Intracellular fructose-6-P is converted to glucosamine-6-P via aminotransferase using L-glutamine. Then, glucosamine-6-P is converted to N-acetylglucosamine-6-P via N-acetyltransferase using acetyl co-A as a substrate. The phosphate group on it is changed from the 6-P to the 1-P position by phosphoacetylglucosamine mutase. Subsequently, N-acetylglucosamine-1-P is converted to UDP-Nacetylglucosamine via pyrophosphorylase using triphosphate as the cosubstrate. Chitin is formed from UDP-N-acetylglucosamine in the presence of the enzyme chitin synthase ( Figure 4) [32][33][34]. The chitin deacetylation reaction results in chitosan [35,36].
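Purely as a bookkeeping aid, the sequence described above can be written down as an ordered list of (substrate, enzyme, product) steps and checked for connectivity; the sketch below restates the text and follows its enzyme names rather than an independent biochemical source.

```python
# Bookkeeping sketch only: the chitin biosynthesis sequence exactly as described
# above, encoded as (substrate, enzyme/cofactor, product) steps with a simple
# connectivity check. Enzyme names follow the text, not an independent source.

steps = [
    ("glycogen", "phosphorylase", "glucose-1-P"),
    ("glucose-1-P", "phosphomutase", "glucose-6-P"),
    ("glucose-6-P", "hexokinase", "fructose-6-P"),
    ("fructose-6-P", "aminotransferase (L-glutamine)", "glucosamine-6-P"),
    ("glucosamine-6-P", "N-acetyltransferase (acetyl-CoA)", "N-acetylglucosamine-6-P"),
    ("N-acetylglucosamine-6-P", "phosphoacetylglucosamine mutase", "N-acetylglucosamine-1-P"),
    ("N-acetylglucosamine-1-P", "pyrophosphorylase (triphosphate)", "UDP-N-acetylglucosamine"),
    ("UDP-N-acetylglucosamine", "chitin synthase", "chitin"),
]

# Verify that each step starts from the product of the previous one.
for (_, _, prev_product), (substrate, enzyme, product) in zip(steps, steps[1:]):
    assert substrate == prev_product, f"broken chain before {product}"

for substrate, enzyme, product in steps:
    print(f"{substrate:28s} --[{enzyme}]--> {product}")
```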
Chitin Isolation from Natural Resources
Seafood is a major source of animal protein in many countries; however, besides the edible part, these raw materials also have an inedible one [37]. A significant part of the environmental contamination is caused by the wastes from fishing industries, which triggers an environmental problem due to their unpleasant odor, attracting and stimulating the proliferation of insects. They can also be harmful to human health when disposed of without any kind of previous treatment [38,39].
The effluents resulting from the fishing industry, if released without previous treatment in the environment, cause physical and chemical changes in water bodies, which may cause mortality of aquatic animals and impact the local microfauna and microflora, given that this residue is characterized by its high concentrations of nitrogen, phosphorus, organic carbon, suspended solids, and oxygen [1].
Seafood waste is a potential source of raw material for chitin extraction [40]. However, due to their origin from natural resources and their chemical and physical variability, the properties of chitin and chitosan can have a direct impact on their applications [41]. Characteristics that may be related to the extraction process are molecular weight, degree of deacetylation, degree of purity, viscosity, and crystallinity [42].
Chemical Extraction
This type of extraction consists of the use of a strong alkaline solution, such as hydrolysis with sodium hydroxide at high temperatures and concentrations, causing the breakdown of polymeric chains, and establishing a high degree of chitosan deacetylation [43].
The chemical extraction method is composed of three basic steps (Figure 5): an alkaline solution deproteinization, an acid solution demineralization, and a discoloration. It is noteworthy that all these steps are directly related to the physicochemical properties of the chitin obtained [44,45]. The source for chitin extraction is subjected to washing, drying, and grinding of powder particles [46]. This type of conventional extraction can cause problems in the disposal of the waste generated, which requires neutralization and detoxification of the wastewater [47].
Chemical Deproteinization
The deproteinization involves the disruption of chemical bonds between proteins and chitin, requiring chemicals to depolymerize the biopolymer [48], whose removal of the associated proteins is an essential step in the polysaccharide purification process [49,50].
Conventional extraction of chitin from marine waste through the deproteinization involves the use of bases and strong acids at high temperatures, which demands high energy consumption and generates effluents with high chemical concentrations, requiring appropriate treatment for their neutralization [51].
The use of strong acids and bases during the chitin extraction process leads to an increase in the cost of materials involved in the process, as well as a low-purity end product [52].
Chemical Demineralization
Demineralization is a process for the removal of minerals, especially calcium carbonate, using strong acids [53]. The most commonly used acids in this treatment process are sulfuric acid, hydrochloric acid, acetic acid, nitric acid, and formic acid [54][55][56].
Demineralization occurs through the decomposition of calcium carbonate into calcium chloride, with the release of carbon dioxide [57], as shown in the reaction:

CaCO3 + 2HCl → CaCl2 + H2O + CO2
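For a rough sense of scale, the stoichiometry of this reaction can be used to estimate the acid demand and the CO2 evolved during demineralization; the sketch below assumes hydrochloric acid and an illustrative shell CaCO3 content of about 50% (a mid-range value within the 40% to 55% cited later in this review), so the numbers are indicative only.

```python
# Minimal stoichiometry sketch for CaCO3 + 2 HCl -> CaCl2 + H2O + CO2
# (values illustrative, not taken from the review).

M_CACO3, M_HCL, M_CO2 = 100.09, 36.46, 44.01  # molar masses, g/mol

def hcl_demand_g(caco3_g: float) -> float:
    """Stoichiometric mass of HCl needed to dissolve a given mass of CaCO3."""
    return 2 * (caco3_g / M_CACO3) * M_HCL

def co2_released_g(caco3_g: float) -> float:
    """Mass of CO2 evolved when a given mass of CaCO3 is decomposed."""
    return (caco3_g / M_CACO3) * M_CO2

# Example: 100 g of shell waste assumed to contain ~50% CaCO3 (assumption).
caco3 = 100.0 * 0.50
print(f"HCl required: {hcl_demand_g(caco3):.1f} g, CO2 released: {co2_released_g(caco3):.1f} g")
```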
Discoloration
This is an additional step during the extraction process and is performed if a colorless product is desired, as it aims to eliminate astaxanthin and β-carotene pigments when they are present in the extraction source. It uses organic or inorganic solvents such as acetone, sodium hypochlorite, and hydrogen peroxide [42].
Biological Extraction
The biological extraction method involves the use of microorganisms that produce enzymes and organic acids at a relatively low cost, with a cleaner and greener process, favoring the production of quality chitin [58,59].
The biological extraction process has become more attractive because it yields high-quality products at an affordable production cost and does not generate the highly concentrated chemical effluents associated with the chemical process [60].
Biological methods often used for chitin extraction are enzymatic deproteinization and fermentation using microorganisms [61,62].
Enzymatic Deproteinization
Enzymatic deproteinization of the waste from the fishing industry to obtain hydrolyzed protein is a method based on the addition of enzymes for protein fragmentation, having the advantage of not producing environmental degradation products [51,62].
Proteases are of utmost importance for protein removal during chitin extraction from fishing industry waste [63]. The proteases involved in the protein removal process from seafood residues are papain, trypsin, pepsin, alkalase, and pancreatin [64,65].
Fermentation
Hydrolyzed proteins can be obtained by proteolytic enzymes produced by the lactic acid bacteria activated due to a low pH in the medium. The advantage of this process is that it allows the recovery of value-added by-products such as proteins, enzymes, and pigments that can be applied, for example, in the food industry [66].
The efficiency of fermentation through microorganisms depends directly on the amount of inoculum, the glucose concentration in the medium, the pH during the culture, and the fermentation time. This type of extraction using microorganisms is a growing trend in biotechnology and bioremediation research [67].
Chitin to Chitosan Conversion
Chitosan is a polysaccharide obtained by the chitin deacetylation reaction through alkaline hydrolysis and subsequent treatment with acid solutions, consisting of 2-amino-2-deoxy-D-glycopyranose units joined by β (1 → 4) glycosidic bonds (Figure 5). However, the polymers differ in the relative proportion and solubility of these units. Chitosan can function as an ion exchange resin, as it is soluble in organic acids and dilute mineral acids; nevertheless, its precipitation occurs at pH values above 6.0 [71][72][73].

Chitosan encompasses all chitin derivatives having a degree of deacetylation of 50% or more. The relative proportions of these units generate distinct structural characteristics, such as the degree of deacetylation and molecular weight, and these structural characteristics are related to the physicochemical and biological properties of the polymer [30,74,75].

Chitosan is soluble in acidic media because its free amino groups become protonated (NH3+); the tendency to precipitate increases as the pH approaches 6.0, owing to the growing proportion of unprotonated -NH2 groups in the molecular structure. Thus, the amino groups make it possible for chitosan to bind to negatively charged materials, such as other polysaccharides, enzymes, and cells, while the polymer remains insoluble in water, concentrated acids, acetone, and alcohol [76][77][78].
In recent years, many studies have been performed on chitosan and have shown a close dependence relationship between the structural and morphological characteristics of chitin, chitosan and their derivatives, their properties, and potential applications [79].
This polymer can be found in nature in small quantities in the cell walls of some fungi (Zygomycetes), or it can be obtained by alkaline hydrolysis of chitin from crustacean and exoskeletons of arthropods [80]. Crustacean shells contain 15% to 20% of chitin, 25% to 40% of protein, and 40% to 55% of calcium carbonate; the latter is responsible for crustacean rigidity [81].
Chitin and Chitosan Blend with Other Polymers
Polymer blending is defined as the homogeneous mixture of two or more different polymer species. Many blends are produced for financial reasons to reduce the costs of a technical application. However, the blend may lead to better properties of the product obtained [82].
Chitin and Chitosan Applications
New technological approaches are needed to improve human lives and the environment. What makes chitosan a material of industrial interest is its broad spectrum of properties, such as water insolubility, cationic biopolymer behavior, a positive global charge at biological pH, and the ability to easily form gels. These properties make it interesting for agricultural applications, medicine, the environment, and food, as indicated in Figure 6 [88,89].
Active Ingredient Carrier
Nanotechnology has applications in many areas, such as engineering, medicine, pharmaceuticals, and agriculture, revolutionizing various processes and products [90]. Chitosan is a polymer of great industrial and biotechnological interest because of its abundant extraction source, being biocompatible and positively charged, defining it as a potential material for the active ingredient delivery system [91].
Chitin and chitosan-based nanomaterials can be used as carriers of cosmetic ingredients, such as chitin nanofibrils face masks capable of releasing active ingredients at different doses and time, and can be used as antibacterial, anti-inflammatory, sunscreen, anti-aging cosmetics depending on the active ingredient selected [92].
Nanotechnology can also be applied in agriculture, favoring agroindustry to be greener. Nanoparticles can increase the effectiveness of agrochemicals, resulting in lower doses and fewer applications, as well as reducing the risk of environmental contamination and promoting effective pest control in agriculture [93,94].
Tissue Engineering
The engineering of artificial tissues represents major advances in the biomedical field as it assists in reconstructive processes, favoring improvement in human life quality [95]. It involves the regeneration of lost or damaged tissues using biomaterials associated with cell or growth factors [96].
An important requirement for scaffolding is to have an interconnected structure with high porosity to ensure proper nutrient penetration and diffusion into cells [97].
Characteristics for choosing a biomaterial for tissue engineering are: presence of interconnected pores, controlled biodegradability, modifiable chemical surface, mechanical properties similar to the site of implantation, insignificant toxicity, and ease of obtaining desirable shapes and sizes [98].
Active Pharmaceutical Applications
Several polymers have been used in the production of mucoadhesive delivery systems; nevertheless, chitosan and its derivatives are the most broadly used due to characteristics of atoxicity, biocompatibility, antimicrobial activity, and adequate permeation [99].
Chitosan-based nanoparticles are widely used as devices for drug administration because they have useful features as a drug-loading vehicle. It has a biological property of mucoadhesiveness, implying the transient opening of epithelial junctions for drug entry [100].
Chitosan is a natural polyaminosaccharide with non-toxic, non-allergenic, biocompatible, and biodegradable characteristics, and chitosan derivatives have been reported as anticoagulants. The literature describes that chitosan has a structure closely similar to that of heparin, and based on this feature, many chitosan derivative molecules have been synthesized [101][102][103][104][105].

Anticoagulants are clinically used in different medical conditions such as thrombosis and have the highest annual growth rate among the top ten treatment areas. The developed compounds exhibited a faster onset of action and greater potency than nicoumalone one hour after drug administration. The sulphated N-alkyl derivatives of chitosan were suggested to be more potent anticoagulants than sulfated quaternary derivatives/sulfated chitosan [106,107].
In this sense, the most prominent commercial application of chitosan is its use as a hemostatic functional system. Therefore, chitosan-based wound dressings are available on the market for clinical use as products HemCon®Bandage and ChitoFlex wound dressings (HemCon Medical Technologies, UK), as well as CELOX™ (Medtrade Products, England); all products are FDA approved (http://www.hemcon.com and http://www.celoxmedical.com, respectively) [108].
Antimicrobial Agent
Many investigations have reported the antimicrobial activity of chitosan, but the mechanism has not yet been fully elucidated, and further studies are needed to clarify the potential of chitosan. One of the most studied properties of chitosan is its antimicrobial activity, related to the ability of its positively charged amino groups to bind to the surface of the bacterial wall or the plasma membrane, which carry negative charges. Thus, there is a change in cell permeability, favoring the flow of ions and proteins from the cytoplasm into the extracellular space and causing cell death [109]. At a higher degree of acetylation and a higher molecular weight of chitosan, the antibacterial activity is mediated by changes in cell permeability and by blocking transport into the bacterial cell [110][111][112]. A lower degree of acetylation of chitosan and a lower pH favor antibacterial activity; with decreasing molecular weight, the activity shifts toward Gram-negative bacteria, and both molecular weight and degree of acetylation also influence the antifungal activity against various fungi [113].

Other factors related to the antimicrobial activity of chitosan are its ability to absorb metal ions and its ease of penetrating the cell wall and binding to DNA, thereby inhibiting messenger RNA synthesis; low-molecular-weight chitosan has been shown to inhibit DNA transcription and mRNA synthesis in E. coli [110][111][112][113][114][115].
Therefore, the antimicrobial activity of chitosan and its derivatives is widely explored for the production of self-preserving materials through food protection and packaging. Chitosan films have a large application in food packaging materials, forming a protective antimicrobial barrier and preserving the nutritional quality of foods [116,117].
Water Treatment
Water and the range of services generated by this increasingly depleted natural resource contribute to poverty reduction, economic growth, and social and environmental sustainability, thereby improving social welfare [118].
Wastewater from the food, textile, vegetable oil processing, oil production, and domestic sewage companies is a major source of pollution, given that it contains various organic compounds and is not properly treated before being discharged into the effluents [119].
Adsorption is the adhesion or fixation of molecules from a fluid to a solid surface through electrostatic bonding, enabling the elimination of compounds, metal ions, or other materials using an inactive sorbent of biological origin or natural products, by means of attractive forces between the removed material and the biosorbent [120]. The literature cites some adsorbents that are effective in removing toxic metal ions and are environmentally friendly, such as chitin, chitosan, cellulose, and guarana [121]. Chitosan is used to remove oils, greases, and heavy metals [122,123].

An increase in the degree of chitosan deacetylation is related to a greater number of amino groups, which are the main sorption centers; the degree of deacetylation and the pH are the main factors that affect the adsorption capacity of chitosan [124].
Chitosan Applications in Food Technology
The chitosan biopolymer is a biocompatible, nonantigenic, nontoxic, and biofunctional molecule and has attracted notable attention as a potential food preservative of natural origin [125][126][127]. Chitosan isolated from shrimp has been recognized as GRAS (Generally Recognized As Safe) based on scientific procedures for multiple technical effects in several food categories (GRAS Notice No. GRN 000443) [108,128].

Applications of chitosan for extending the shelf life of bread by retarding starch retrogradation and/or inhibiting microbial growth have been investigated. The authors evaluated the effect of a 493 kDa chitosan coating on the shelf life of baguettes, using 0.5%, 1.0%, or 1.5% chitosan diluted in 1.0% acetic acid. The results indicated barrier properties of the chitosan coating: baguettes coated with 1% chitosan showed less weight loss, hardness, and retrogradation compared with the control during storage for 36 h at 25 °C [129,130].

Chitosan is obtained by extraction from a bio-waste product using many energy-efficient methods and is much cheaper compared to other biopolymers. Moreover, the exceptional properties of chitosan make it a relatively strong candidate for food packaging applications. The most popular and most economical way to produce chitosan is through the deacetylation of chitin. However, it is also possible to obtain chitosan directly from some fungal cell walls [131,132] and other organisms [133,134]. Chitosan film has been widely used to extend the shelf life of food; the addition of Ca2+ ions changes the permeation rate of CO2 and O2 through the chitosan membranes and increases the useful life of fruits [17]. Films prepared from chitosan and glycerol formulations have been shown to preserve strawberries [135]. Coating with chitosan films by immersion in a 1% polysaccharide solution containing 0.1% Ca2+ prevents changes in the sensory properties of vegetables. In addition, chitosan is also useful in the production of chitosan-coated paper for food packaging, which acts as an inhibitor of microbial growth [136]. Films formed by chitosan and polyvinyl alcohol with lignin nanoparticles are characterized by increased strength compared to films formed by the individual components, antibacterial action against Gram-negative microorganisms, and a synergistic antioxidant effect of chitosan and lignin [137].
Conclusions and Future Trends
A massive amount of crustacean shell waste is generated each year as a major by-product of the seafood industry, and it can be used to produce value-added chitin, which can be converted to chitosan using a relatively simple deacetylation process. Because it is derived from a bio-waste product using many energy-efficient methods, chitosan is much cheaper compared to other biopolymers. In the present review, chitin and chitosan have been presented as ideal renewable agents, either in native form or upgraded and incorporated with antimicrobial particles and natural compounds, with multifunctional applications. Demands for alternative materials in various fields of biotechnology and industry are driven by technological advancement, favoring the increasing use of biopolymers, with chitosan being an abundant and renewable polysaccharide that is attracting more attention from researchers.

Chitin can be easily obtained from marine animals, crustacean residues, insects, and microorganisms. Chitosan is obtained by deacetylation of chitin and presents antibacterial and antifungal properties, biocompatibility, mucoadhesivity, and atoxicity, among others. Over the past decades, chitosan has been very important in several industrial applications, including the biomedical, textile, food, pharmaceutical, and cosmetic industries. For future applications, chitosan-based materials can be used as advanced composites or fibers, with promising utility for accelerating tissue repair and wound healing processes in the pharmaceutical and biomedical industries. In addition, chitosan has become attractive for boosting studies involving tissue engineering.

Therefore, this polymer is very attractive for application in several areas due to its characteristics, giving this polysaccharide a very promising future as a biomaterial. Future directions include the development of new smart biomaterials with promising wound-healing effects and therapeutic molecules that are released concomitantly with microbial growth.
Conflicts of Interest:
The authors declare that there are no conflicts of interest regarding the publication of this article.
Controlled Pinning of Conjugated Polymer Spherulites and Its Application in Detectors
The nucleation of single macroscopic spherulites at desired positions, as well as ordered arrays of multiple spherulites, is demonstrated by combining the use of crystallizable solvents with local control of solvent evaporation during solution deposition. Moreover, the temperature assisted localized frustration of molecular orientation is shown, enabling the fabrication of samples containing both isotropic areas and spherulites. These macroscopic circular polycrystalline structures are characterized using a range of polarized spectroscopic techniques that allow quantifying their large degree of chain orientation. In order to show the potential of these large conjugated polymer spherulites centered at desired locations, graded bilayer organic photovoltaic devices were fabricated to be used as polarimeters, solid state light polarization detectors with no moving parts, and position sensitive photodetectors.
DOI: 10.1002/adom.201700276

Introduction

Semiconducting polymers have been studied extensively in the last decades because of their remarkable optoelectronic properties and promising low-cost solution processability. From an engineering point of view, the precise control of the two related properties of crystallinity and orientation of polymers is of great interest because of their impact on mechanical and functional properties, including thermal conductivity, [1] charge carrier mobility, [2][3][4][5][6][7] and electrical conductivity. [8,9] In addition, for suitable macroscopic polymer chain arrangements, polarized absorption and emission can be observed. [10] As a consequence, a variety of methods have been developed to manipulate and control the microstructure and the orientation of conjugated polymers. [11] Some of these methods result in spherulites, densely branched, polycrystalline regions of material with spherically, or in 2 dimensions circularly, radiating fibers. [12,13] Because of their circular shape, which contains all possible fiber orientations in the plane, spherulites are an attractive system to probe orientation-dependent electrical and optical properties. Conventional deposition techniques such as spin- and blade-coating usually result in isotropic layers with a random orientation of the crystalline lamellae down to the microscopic scale. This is caused by the immense density of nucleation sites and the resulting uncontrolled growth. Several methods to control the nucleation density have been demonstrated. For example, a careful control of the solvent partial pressure during swelling of an already deposited layer allows redissolving all but a few crystallites, which can then be regrown under controlled conditions to microscopic spherulites. [3,14] Another method, which is experimentally simpler to realize, uses the epitaxial growth of conjugated polymers or small molecules on crystalline 1,3,5-trichlorobenzene (TCB) from the melt, [15] or directly from solution, [16] resulting in larger macroscopic spherulitic structures. In the former, molten TCB acts as solvent for the conjugated system and upon reducing the temperature below the melting point, TCB solidifies, enforcing the epitaxial crystallization of the conjugated system. For the solution process, on the other hand, a given amount of TCB is mixed together with the conjugated component into a carrier solvent such as chlorobenzene (CB). During deposition at room temperature, the wet film dries, increasing the concentration of the solid content in the solution until the solubility limit is reached and the solutes start solidifying. In order to create oriented structures, TCB should crystallize before the polymer or small molecule, which defines a minimum TCB to polymer content (around 4% in volume for a 20 g L−1 P3HT solution in CB). [16] The growth direction of spherulites can be controlled to some extent by patterning the substrate, [17] or by enforcing a
The basic idea here introduced, is to control the drying locally using a flow of gas, and thus achieve the critical composition for TCB nucleation at the desired location, expediting the growth of a spherulite. Using this simple idea, samples comprising a single spherulitic structure, centered at the desired position and covering an area of about 4 cm 2 , are demonstrated in this paper. The generalization to ordered arrays of spherulites is also shown. The presented method can be used to handily manufacture polarization sensitive photodetectors that allow to determine the intensity of incident light, as well as the angle and the degree of polarization, without requiring any moving parts. This could simplify the manufacturing of polarimeters with a wide range of applications, among them quality control, [24] or remote sensing. [25] 2. Results
Nucleation and Growth of a Single Spherulite
Thin film samples containing spherulites were prepared by deposition of a solution containing poly(3-hexylthiophene-2,5-diyl) (P3HT) and the additive TCB. As we have shown in a previous work, the density of nucleation centers which grow into macroscopic spherulites increases with higher weight ratio of TCB to P3HT and decreases with temperature. [18] As an example, a typical sample prepared with a ratio of 5:1, such as the one pictured in Figure 1a, contains ≈10 nucleation sites, resulting in up to cm 2 -sized spherulites on a 25 by 25 mm 2 glass substrate. To prepare reproducible samples and devices, control over spherulite crystallization and over the precise location of initial nucleation is required, which so far has not been demonstrated.
Simple approaches, like modifying the substrate topography (e.g., by scratching) have no effect, while more complex methods, like engineering the substrate wettability, work somehow too well. They not only restrict the location of initial nucleation but also the further growth of spherulites. [17] It is this independent control of nucleation, without affecting growth, which we demonstrate in this work.
By carefully observing the drying process of a regular sample, one can conceive of a possible way to induce a preferential nucleation site. As can be inferred from Figure 1a, spherulites preferentially nucleate at the sample edges, at the contact line between the solution, the substrate, and the surrounding atmosphere. It is here that the sample starts to dry first, locally increasing the concentration of the solid content, including the concentration of TCB, which is important in this case. This is recorded in Figure 1c for a sample during spin-coating. Noting that P3HT is orange in solution and purple in the dried film, it is clearly visible that the solvent first evaporates at the edges, and then proceeds towards the center of the sample, which remains in a dissolved state for the longest.
From the previous observation, it can be seen that spherulite nucleation sites appear where the film dries first and then grows from there. Therefore, one possible way to control the location of spherulites within a film is to speed up the evaporation of solvent at desired sites. For spin-coated samples, a single spherulite-like structure can be obtained when the sample starts drying at the center and proceeds outward from there. This way, nucleation can be confined to a single area that is much smaller than the final spherulite size, which is a significant improvement in nucleation control over the random nucleation at the contact line extended all along the substrate edge. A strategy to achieve the aforementioned central drying is by injecting a flow of gas through a needle pointing at the center of the substrate during spin-coating. Figure 1d shows consecutive still images of this process, clearly demonstrating that drying now starts at the center of the sample. The resulting sample is presented in Figure 1b, which shows a single spherulite-like structure. A complete video comparing both processes is provided as Video S1 (Supporting Information). For best results, care has to be taken to suppress any other, unwanted airflow in the environment during deposition.

Figure 1. Spherulite samples and the drying process responsible for their formation during spin-coating. Samples can comprise a) several randomly nucleated spherulites, or b) a single centered one. c) The pair of three subsequent still images span 1 s during regular spin-coating, and d) 2 s while supplying a flow of gas through a needle visible above the substrate. The complete video is provided as Video S1 in the Supporting Information. Samples are illuminated from the back by vertically polarized light in panels (a) and (b). The scale bar is 10 mm.
Suppression of Spherulite Growth by Heating
Pure TCB has a melting point of 63 °C and stays liquid if sufficiently heated, acting like a regular high-boiling-point solvent additive that increases crystallinity by allowing for a longer lasting drying process. [26] Because TCB does not crystallize at elevated temperatures, it is not possible to use the local application of heat to control nucleation. Yet interestingly, combining a gas flow and heat makes it possible to prepare samples that contain both isotropic and anisotropic regions, as depicted in Figure 2. The sample shown was fabricated as described in the previous section, the only difference being that one corner of the substrate was heated directly before deposition. As we will show, this capability can be of prime importance for a detector that measures the complete linear polarization of light. Figure S2 (Supporting Information) shows atomic force microscopy (AFM) measurements illustrating the differences in texture across the sample shown in Figure 2. Specifically, spherulitic regions may exhibit considerable surface roughness compared to isotropic regions.
Nucleation and Growth of an Array of Spherulites
To demonstrate the general viability of the presented method, a regular array of spherulites was grown on blade-coated samples. One major difference between the spin- and blade-coating deposition methods is the speed at which the solvent evaporates. For a given temperature, solvent evaporates faster during spin-coating, due to the rotation of the sample. The additional flow of gas that is employed to induce controlled nucleation must be the dominating influence, surpassing any other, undesired airflow that may be present in a laboratory fume hood or glove box. Yet at the same time, the flow cannot be increased arbitrarily, since it will disturb the wet layer, physically displacing the solution on the substrate. In order to satisfy both of these conditions during the lengthy drying process of blade-coated samples, the 3D-printed array of nozzles pictured in Figure 3 was manufactured. The concentric inlets and outlets serve to inject gas at the center of each inlet, while extracting it at the corresponding outlet. By confining the flow, each nozzle can be partially isolated from neighboring ones, providing circularly symmetric conditions at all nozzles. After coating of the sample, the nozzle array is placed above the still wet film, thus sealing any gaps beside the actual nozzles and preventing any unintended flow. Figure 3c shows a photograph of a typical blade-coated sample with nine spherulites that was prepared using this nozzle array. It is apparent that, in addition to the forced drying process of the spherulites, the regular drying process, which starts at the edges of the sample, proceeds in parallel. Because of that, the growth of spherulites at the edge is disrupted by the regular drying process. However, since this process is restricted to the sample edges, it would be negligible if scaled up to larger samples. To further improve the results, an enhanced control over both the rate of injection as well as extraction is needed, ensuring truly identical conditions at all nozzles and completely isolating them from neighboring ones.
Optical Characterization
In order to understand the degree of orientation and crystallinity, as well as the different textures that are visible in the radial direction of the polarized photographs in Figures 1b and 2b, the samples were characterized using polarized photoluminescence (PL) spectroscopy, polarized Raman scattering, and ellipsometry. Complementary AFM data was also measured.
The dichroic ratio (DR) of the PL, the ratio of PL for excitation parallel versus perpendicular to the orientation direction, can be used to quantify the degree of orientation. An average value of DR_PL = PL∥/PL⊥ ≈ 4 at the wavelength of maximum PL around 700 nm was found for a representative sample, as calculated by averaging the polarized PL images shown in Figure 4.
Representative parallel and perpendicular spectra are shown in Figure 5a. These values confirm a large degree of orientation for a sample with the pinned nucleation center, comparable to values of DR = 3.77 observed in transmission, as shown in Figure S3 in the Supporting Information, and in line with earlier reports on samples deposited from solution with no control over nucleation position. [16] Right at the center, crystallites still have random orientation. Although the degree of orientation is large in the whole spherulitic structure, there are small (<20%) oscillating variations in DR_PL along the radius, which might be due to fluctuations in solid content as the drying front progresses along the liquid layer. These oscillations are correlated to the different textures observed radially, which arise from a change in the degree of misalignment of the fiber-like structures created during the epitaxial crystallization, as seen in Figure S2 in the Supporting Information. [18] These fibrillar structures, which extend along the radial direction, are not perfectly straight, but instead snake slightly back and forth. Correspondingly, the dichroic ratio is higher in regions where most of the fibers are parallel, compared to regions where they are less perfectly aligned. Sample thickness, which was determined by ellipsometry, is relatively homogeneous, ranging between 80 and 100 nm. Uniaxially oriented layers, which were obtained by pinning nucleation to the 1D contact line during blade-coating, [18] show a slightly lower dichroic ratio DR ≈ 3, which is in part due to the high surface roughness of the fibrillar layers, which contain a large volume of voids.
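To make the extraction of such an average dichroic ratio concrete, the following minimal Python sketch computes the pixel-wise ratio PL∥/PL⊥ from a pair of polarized PL images and averages it over the bright (spherulite) pixels. The masking threshold, the synthetic test images, and the use of plain NumPy arrays are assumptions made purely for illustration; they are not details of the analysis actually used for Figure 4.

```python
import numpy as np

def average_dichroic_ratio(pl_parallel, pl_perpendicular, threshold=0.05):
    """Average PL dichroic ratio DR = PL_par / PL_perp over bright pixels.

    pl_parallel, pl_perpendicular: 2D arrays of PL intensity recorded with the
    polarization parallel / perpendicular to the orientation direction.
    threshold: fraction of the maximum intensity below which pixels are treated
    as background and excluded (assumed masking strategy, not from the paper).
    """
    par = np.asarray(pl_parallel, dtype=float)
    perp = np.asarray(pl_perpendicular, dtype=float)
    # Evaluate the ratio only on pixels that clearly belong to the spherulite.
    mask = (par > threshold * par.max()) & (perp > threshold * perp.max())
    ratio = par[mask] / perp[mask]
    return ratio.mean(), ratio.std()

# Synthetic stand-in for measured images: a film with DR = 4 plus 5% noise.
rng = np.random.default_rng(0)
perp_img = rng.uniform(0.5, 1.0, size=(256, 256))
par_img = 4.0 * perp_img * rng.normal(1.0, 0.05, size=(256, 256))

dr_mean, dr_std = average_dichroic_ratio(par_img, perp_img)
print(f"DR_PL = {dr_mean:.2f} +/- {dr_std:.2f}")  # close to 4, as reported in the text
```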
Polarized Raman also confirms a high degree of polymer orientation. The polarized Raman measurements in Figure 5b show a Raman dichroic ratio of DR_Raman ≈ 2.5 at 488 nm excitation. At this wavelength, the Raman signal comes from both amorphous and crystalline polymer chain segments. [27] The PL, on the other hand, arises preferentially from the crystalline segments due to energy transfer toward the lowest lying energy level, known as selective PL. This, together with the specific dipole orientation of PL and Raman vibration, partially explains why DR_PL > DR_Raman. Reassuringly, the Raman dichroic ratio increases when exciting at the longer wavelength of 785 nm, which preferentially excites the crystalline chain segments, as seen in Figure 5c.
It has been shown that the Raman spectra of P3HT can provide information regarding the degree of crystallinity. Specifically, the peak centered at around 1450 cm−1 consists of the superposition of an amorphous component (peaking at higher energy) and a crystalline component (peaking at lower energy). [27] Therefore, a simple analysis of the center of mass of the convoluted peak can inform us about the degree of crystallinity. As shown in Figure 4f, the Raman peak for an excitation perpendicular to the orientation direction is shifted about 3 cm−1 toward higher energies with respect to parallel excitation, indicating that a higher fraction of amorphous polymer chains lies in the perpendicular direction. [27] In regions where the formation of spherulites has been suppressed by an increased substrate temperature during deposition, this shift toward higher energy disappears, as shown in Figure S4 in the Supporting Information. This indicates that the frustrated regions, even though not oriented, have a high degree of crystallinity, similar to that of spherulites, as is expected for P3HT films deposited from high-boiling-point solvents. [26]
Additional information regarding orientation and crystallinity can be obtained by looking at the absorption and refractive index, or in other words, the degree of dichroism and birefringence. Spectroscopic ellipsometry is a well-known technique for the determination of the complex refractive index of thin films and has here been used to characterize the real part (solid lines) and imaginary part (dashed lines) of a spherulitic sample, as plotted in Figure 5d. Details on the modeling are given in the Experimental Section. Pinned spherulitic films exhibit a strongly anisotropic complex refractive index. Absorption features are strongly pronounced for an excitation parallel to the orientation direction. Figure 5e,f shows false-color images for the magnitude and the orientation of the linear dichroism of a pinned P3HT spherulite, respectively. Figure 5e,f were calculated from measurements using a Mueller-matrix microscope with an excitation wavelength of 532 nm, which is explained in more detail in the Experimental Section.
The shown orientation directly corresponds to that of the polymer chains, assuming that the transition dipole moment lies along the chain backbone. Figure 5e,f can serve to elucidate the process of spherulite formation. A comparatively isotropic region is located at the center of these pinned spherulites. Here, the abrupt solvent evaporation rapidly forms a dry film, not allowing for large-scale order. Along the contact line, around the circumference of this isotropic region, numerous nucleation sites form, from each of which a spherulite starts to grow in a space-filling manner. Competing with each other, they give the appearance of a single spherulite everywhere but the center. In the ideal case, the nucleation would be confined to a circle with infinitesimal circumference. However, mostly limited by the finite extension of the flow of gas, nucleation instead happens along a contact line that is neither a perfect circle nor infinitesimally short. In both the ideal as well as the real case, however, the nucleation itself still remains a stochastic process. Due to this remaining randomness, boundaries between neighboring spherulites can be visible in cases where the nucleation centers are distributed very irregularly along the contact line.
Application as Polarimeter
The capability to pin the nucleation of a spherulite at a desired location results in samples with predictable local orientation, opening the possibility to fabricate conceptually new types of devices. One example is a polarization detector with no moving parts (cf. a conventional photodetector coupled to a rotating polarizer). Here, we demonstrate such a detector by producing organic photovoltaic (OPV) P3HT:PCBM graded bilayer devices. The P3HT layer contains a single, centered spherulite, which is covered by a [6,6]-phenyl C61 butyric acid methyl ester (PCBM) layer deposited from an orthogonal solvent. Fortunately, the orientation is retained upon deposition of PCBM and subsequent annealing, as evidenced by the barely affected dichroic ratio plotted in Figure S3 in the Supporting Information. Furthermore, Mueller-matrix images do not show any noticeable change upon deposition of PCBM, as shown in Figure S5 in the Supporting Information. Due to the radially distributed polymer chains, the electrical response of each cell depends on the polarization of the incident irradiation as well as, of course, on its intensity.
In order to detail the overall performance, Figure 6a shows the JV-characteristics of a device containing 8 cells under unpolarized illumination of 1 Sun. Average cell characteristics are summarized in Table 1, while Figure 6b shows the external quantum efficiency (EQE) of the highest performing cell.
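As a reminder of how cell characteristics such as those summarized in Table 1 are typically derived from a measured JV curve, the short Python sketch below extracts J_sc, V_oc, fill factor, and power conversion efficiency. The diode-like synthetic curve and its parameter values are placeholders chosen for illustration only and do not reproduce the measured device data.

```python
import numpy as np

def solar_cell_metrics(v, j, p_in=100.0):
    """Extract Jsc, Voc, fill factor, and efficiency from a JV curve.

    v: voltage [V]; j: current density [mA cm^-2], taken positive in the
    power-generating quadrant; p_in: incident power density [mW cm^-2].
    """
    v = np.asarray(v, dtype=float)
    j = np.asarray(j, dtype=float)
    jsc = np.interp(0.0, v, j)             # current density at V = 0
    voc = np.interp(0.0, -j, v)            # voltage at which J crosses zero
    power = v * j                          # output power density [mW cm^-2]
    p_max = power[(v > 0) & (j > 0)].max()
    ff = p_max / (jsc * voc)               # fill factor
    pce = 100.0 * p_max / p_in             # power conversion efficiency [%]
    return jsc, voc, ff, pce

# Placeholder JV curve with a simple exponential diode shape (illustrative values only).
v = np.linspace(-0.1, 0.7, 500)
j = 6.0 - 1e-4 * (np.exp(v / 0.052) - 1.0)   # mA cm^-2

jsc, voc, ff, pce = solar_cell_metrics(v, j)
print(f"Jsc = {jsc:.2f} mA/cm2, Voc = {voc:.2f} V, FF = {ff:.2f}, PCE = {pce:.2f} %")
```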
Next, we evaluate the performance of devices like these as a polarimeter. Figure 7a shows the short circuit current density J_sc versus the angle of polarization of the incident light ϕ. A different device, with an average efficiency of only 0.8%, was chosen due to the particularly instructive character of its cells, as pictured in Figure 7b. Several facts are immediately apparent and serve to demonstrate the utility of the envisaged polarization sensor. Most importantly, the electrical response of each cell does indeed depend on the polarization, following the expected behavior given by Malus' law, proportional to cos²(ϕ). Solid lines in Figure 7a are fits of the form J_n(ϕ) = c_n + a_n cos²(ϕ − b_n), where c_n is a polarization-insensitive offset (due to the non-oriented polymer chains) of cell n, b_n reflects the angle of preferential orientation for each cell, and a_n accounts for the dichroism-related photoresponse. The angle of maximum response b_n is different for each cell, since cells are distributed around the center of the spherulite, as pictured in Figure 7b. One cell exhibits a response that is essentially flat, independent of the polarization. As can be verified in Figure 7b, the reason for this is that most of the cell's active area is composed of isotropic polymer, not containing any spherulite. This design has several advantages for applications as a polarimeter. First, there are no moving parts, in contrast to conventional polarimeters composed of a rotating linear polarizer placed in front of a polarization-insensitive detector. This allows for faster measurements, which are not limited by the frequency of rotation of the polarizer.
Second, the accuracy in determining the polarization angle of incident light depends on the relative angle θ between polarizer and detector. Accuracy is poorest for angles at which cos²(θ) has extreme values, i.e., when polarizer and detector are exactly parallel or perpendicular, since then the derivative is zero. This is normally circumvented by simultaneously measuring the response of several cells. Important in this case is the relative orientation of the individual cells to each other, not their absolute orientation. This is traditionally achieved by using several detectors, for example, four detectors based on wire-grid polarizers with relative orientations of 0°, 45°, −45°, and 90°. In our presented devices, this favorable relative orientation between cells is achieved by design, simply by placing individual cells along the circumference of a single spherulite, allowing for a simple fabrication. Furthermore, the number of cells can be easily changed as needed, potentially allowing for higher accuracy in an overdetermined system. Figure S6 (Supporting Information) shows the parameters obtained from the fits of Figure 7 and their dependence on the bias voltage in more detail. The signal of interest is the ratio of the amplitude a_n divided by the offset c_n, which is used as a proxy for the detectivity of the polarization. This ratio increases only slightly with increasing reverse bias voltage, whereas the angle b_n is not significantly influenced by the bias voltage.
Because of this, the proposed polarimeter could be used at zero bias. Figure S7 (Supporting Information) shows similar plots, this time versus the incident irradiance. Satisfyingly, b_n does not depend on the light intensity, which allows for a simple calibration procedure of the proposed polarimeter.
The calibration procedure for the use of this device as a polarimeter consists in determining the coefficients a_n, b_n, and c_n for each of the 8 cells in one device. After calibration, the accuracy was estimated by predicting the polarization angle and irradiance of a test data set of 10 measurements from the measured short circuit current using least squares fitting. On this data set, when using all 8 cells, the angles could be determined with a standard deviation of 2.7°, with a largest observed error of 6.4°. Similar results are obtained when using fewer cells, as long as cells are chosen that complement each other. Best results are obtained when selecting only a bare minimum of the best performing cells, in this particular case red, green, and gray, with a standard deviation of 0.5° and a largest observed error of 1° for the test data set. This error is of the same order as the estimated error due to the manual adjustment of the linear polarization during calibration, and compares well to other polarization-sensitive organic devices, with errors of the order of 1%. [21,22] A video of the determination of the polarization and irradiance of light in real time is provided as Video S8 in the Supporting Information.
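A minimal numerical sketch of this calibration-and-read-out procedure is given below: each cell's response is fitted to a Malus-type curve J_n(ϕ) = c_n + a_n cos²(ϕ − b_n), and a subsequent measurement is then inverted by least squares to recover the polarization angle and relative irradiance. The number of cells, their coefficients, the noise level, and the use of SciPy's optimizers are assumptions made purely for illustration; this is not the analysis code behind the reported 0.5°-2.7° accuracies.

```python
import numpy as np
from scipy.optimize import curve_fit, least_squares

def cell_response(phi_deg, a, b_deg, c):
    """Malus-type response of a single cell: J = c + a * cos^2(phi - b)."""
    return c + a * np.cos(np.radians(phi_deg - b_deg)) ** 2

rng = np.random.default_rng(1)
true_cells = [(1.0, 0.0, 0.3), (0.9, 45.0, 0.35), (1.1, 90.0, 0.25)]  # invented (a, b, c)

# Calibration: fit (a_n, b_n, c_n) for each cell from a sweep over known angles.
phi_cal = np.arange(0.0, 180.0, 10.0)
coeffs = []
for a, b, c in true_cells:
    j_meas = cell_response(phi_cal, a, b, c) * (1.0 + rng.normal(0.0, 0.01, phi_cal.size))
    popt, _ = curve_fit(cell_response, phi_cal, j_meas, p0=(1.0, 45.0, 0.3))
    coeffs.append(popt)

# Read-out: recover angle and relative irradiance from one set of cell currents.
def residuals(params, currents):
    phi, irr = params
    model = np.array([irr * cell_response(phi, *p) for p in coeffs])
    return model - currents

j_unknown = 0.8 * np.array([cell_response(30.0, a, b, c) for a, b, c in true_cells])
fit = least_squares(residuals, x0=(45.0, 1.0), args=(j_unknown,))
print(f"angle = {fit.x[0]:.1f} deg, irradiance = {fit.x[1]:.2f}")  # close to 30 deg and 0.8
```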
A procedure similar to the one shown in the video has been used to determine the linear polarization of sunlight scattered by the sky. Since the polarization of scattered light depends on the position of the sun, our simple polarimeter can be used to determine the direction of the sun without directly observing the sun, in effect demonstrating a sundial that works in the shade.
Application as a Position Sensor
Another application that makes use of the polarization-dependent response is the determination of position along a single axis within a given cell. Previous position-sensitive photodetectors based on organic materials rely on the resistivity of indium tin oxide (ITO), [28] on position-dependent changes of the transient response due to high device capacitance, [29] or on active layers having a thickness gradient. [30] In our case, we use the continuous change of orientation of a spherulite. Specifically, we take advantage of local changes of the angle of maximum absorption within a single extended cell. A position-sensitive photodetector can be achieved by locally illuminating part of a cell with a small polarized light spot while monitoring the photocurrent. In our experiment, we used a 532 nm polarized laser, which was focused down to a spot size of 50 µm. As is observed in Figure 7, the measured local photocurrent depends on the polarization of the incoming light and is proportional to the local orientation of the spherulite. Because this local orientation changes throughout a cell, so does the photocurrent for a fixed polarization. A 2D photocurrent scan covering the whole purple cell from Figure 7, excited at four different angles of polarization, is shown in Figure 8a-d. By averaging these photocurrent maps along the vertical direction, shown in Figure 8e, the location of the light spot along one dimension can be determined from the photocurrent. This simple approach can be further improved, making it independent of light intensity, by instead considering the ratio of two photocurrent measurements at orthogonal polarizations. Figure 8f shows the inverse of this ratio as well as a quadratic fit used to parametrize the data, which allows the location to be determined with an estimated resolution of 100 µm, which is of the order of the used spot size. This is comparable to other organic detectors, with demonstrated resolutions ranging from about 600, [30] to 20, [28] and 12 µm. [29] Because of the small cell size of 2 by 2 mm² used in this study, an accurate detection of position is only possible along a relatively short distance of 0.6 mm. However, this distance can be increased by using longer electrode contacts and a suitably aligned spherulite.
Figure 7 caption (residual fragment): a) Open symbols represent measurements for angles 180° < ϕ < 540° to demonstrate reproducibility. b) Photograph of the glass-substrate side of the same device. The sample is illuminated by vertically polarized light from the back as well as unpolarized light from the front, to better reveal the spherulite on top of the aluminum cathode. All but the outlined white 1.6 by 1.6 mm² squares were masked during measurement. The arrows represent data extracted from the fits of panel (a) for the corresponding colors. The length of the arrow is proportional to the amplitude divided by the offset a_n/c_n, whereas the orientation corresponds to b_n measured counterclockwise (or b_n + 180° for cells on the right side). The black line in the bottom left indicates ϕ = 0° = 180°. The scale bar is 5 mm.
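The intensity-independent position read-out described above can be mimicked with the following Python sketch, in which the local chain orientation is assumed to vary linearly along the scan direction, the ratio of photocurrents at two orthogonal polarizations is parametrized by a quadratic (as in Figure 8f), and the quadratic is then inverted to estimate the spot position. The orientation gradient, the Malus-type local response, and all numerical values are invented for illustration and do not correspond to the measured maps.

```python
import numpy as np

def local_angle(y_mm):
    """Assumed local chain orientation (deg) along the scan direction y (made-up gradient)."""
    return 60.0 + 50.0 * y_mm

def photocurrent(y_mm, pol_deg, a=1.0, c=0.3):
    """Malus-like local photocurrent for a polarized spot at position y."""
    return c + a * np.cos(np.radians(pol_deg - local_angle(y_mm))) ** 2

# Calibration scan: inverse ratio of photocurrents at orthogonal polarizations vs position.
y_cal = np.linspace(0.0, 0.6, 25)                 # mm, roughly the usable range quoted above
inv_ratio = photocurrent(y_cal, 90.0) / photocurrent(y_cal, 0.0)
quad = np.polyfit(y_cal, inv_ratio, 2)            # quadratic parametrization, cf. Figure 8f

# Read-out: invert the quadratic for a new measurement at an unknown position.
y_true = 0.37
meas = photocurrent(y_true, 90.0) / photocurrent(y_true, 0.0)
roots = np.atleast_1d(np.roots([quad[0], quad[1], quad[2] - meas]))
y_est = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 0.6]
print(f"estimated position: {y_est[0]:.3f} mm (true position: {y_true} mm)")
```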
Conclusions
We presented a facile solution-based method that makes it possible to control the exact number and location of macroscopic P3HT spherulites, using the crystallizable solvent TCB and a spatial control over the evaporation rate obtained by localized gas flows. Furthermore, these spherulites can be deposited together with isotropic regions of material in a single step on the same substrate by controlled local heating. Using this method, we manufactured graded bilayer OPV devices with a polarization-dependent electrical response. These devices allow the determination of 3 of the 4 Stokes parameters, and as such have applications as polarization-sensitive photodetectors (polarimeters) and position-sensitive photodetectors.
Experimental Section
Sample Preparation: P3HT was dissolved in CB at a concentration of 30 mg mL−1. Typically, 150 mg mL−1 of TCB were added to this solution. PCBM was dissolved in DCM at a concentration of 10 mg mL−1. ITO-coated substrates with patterned electrodes and cut glass microscope slides were cleaned by consecutive sonications for 15 min in water with detergent, acetone, isopropanol, and finally in a solution of 10 wt% of NaOH in water, rinsing them with water after each step.
PEDOT:PSS was filtered using a 0.2 µm PVDF syringe filter, spin-coated at 3000 rpm, and annealed at 120 °C for 10 min. P3HT was spin-coated at 2500 rpm in a Laurell WS-400-6NPP-LITE spin-coater, or blade-coated on a Zehntner ZAA 2300 motorized stage at 20 mm s−1 with a ZUA 2000 blade set to a height of 200 µm. For some samples, a flow of gas of 2 mL s−1 was supplied through a flat-tipped needle with 0.8 mm diameter to induce controlled nucleation of a single spherulite.
The array of nozzles was 3D-printed with a Miicraft+ stereolithography printer with 100 µm feature size in z-direction and 7 s exposure per layer, using the resin BV-001 (model cream). Inlets have a diameter of 200 µm and a pitch of 8 mm. Blade-coated samples were prepared in two steps. First, a liquid film was deposited by passing the blade across the stationary substrate. Subsequently, the nozzle array was placed 1 mm above the wet sample by means of a spacer. The nine areas of controlled flow, each centered on an inlet and circumscribed by an outlet, were sunk in by an additional millimeter by design. Gas was injected and extracted at rates of 10 mL min−1 using two NE 1000X syringe pumps from New Era Pump Systems, by applying positive and negative pressure to the inlet and outlet, respectively. The flows were maintained for several minutes, until the samples were dried.
Figure 8 caption (residual fragment): e) Average profiles along the vertical direction, as well as their ratio. f) Quadratic fit to the inverse ratio.
Additionally, to inhibit the nucleation of TCB, some samples were partially placed on a hotplate heated to 70 °C for several seconds. PCBM solution was filtered using a 0.22 µm PTFE syringe filter and spin-coated at 4000 rpm. DCM is a bad solvent for P3HT, allowing for sufficient diffusion of PCBM into the P3HT layer, yet not dissolving it during deposition. [26] Samples were annealed at 120 °C for 10 min in a nitrogen filled glove box, before thermally evaporating 100 nm of aluminum to form the cathode. Afterward, finished samples were annealed again at 140 °C for 10 min in inert atmosphere to further diffuse the PCBM into the P3HT layer, forming a diffusive bilayer with a gradual change in composition. These graded bilayers of P3HT and PCBM had an average thickness of 160 nm, as determined by AFM. Subsequently, samples were encapsulated using UV curable epoxy and a glass cover slide.
For optical characterization, P3HT was deposited directly onto cleaned glass substrates, and stored in a desiccator overnight to sublimate the TCB.
Sample Characterization: Photovoltaic performance was characterized using an AAA Solar Simulator conforming to the AM1.5G spectrum (SAN-EI Electric, XES-100S1). Irradiation was calibrated to 1000 W m−2 using an NREL-certified Si reference cell (Oriel Instruments, 91150V). For polarized measurements, a linear polarizer (Thorlabs, LPVISE200-A) on a rotatable mount was interposed in the light path, lowering the irradiation to ≈575 W m−2. Individual cells had an area of 2 by 2 mm² and were covered by a shadow mask with 1.6 by 1.6 mm² sized holes during measurement. Before the measurement, samples were exposed for 1 h to irradiation of 1000 W m−2, in order to burn in and stabilize subsequent performance.
EQE was measured using a custom-built setup. Light from a xenon lamp was passed through a monochromator (Horiba microHR). Using a beamsplitter (Thorlabs BPD254-G), this light then simultaneously illuminated the sample and a photodiode connected to a power meter (Thorlabs S120VC & PM100A). Current was acquired using a multimeter (Keithley 2400 SourceMeter).
Light-beam-induced current was measured on a custom-made system. The sample was located on a motorized X, Y, and Z stage, and illuminated locally by a 532 nm laser (Thorlabs, CPS532). After passing through a multimode optical fiber and a beamsplitter (Thorlabs, EBS1), the laser was again polarized using a linear polarizer (Thorlabs, LPVISE200-A). Light was focused on the device using a long working distance 20× Mitutoyo objective. The incident light intensity was monitored with a photodiode power sensor (Thorlabs, S120VC) connected to a digital power meter (Thorlabs, PM100D). The short circuit current was monitored with a multimeter (Keithley 2400 SourceMeter). Data were acquired using custom software, which allowed the sample plane to be kept in focus and the diameter of the laser spot below 30 µm.
Photographs were taken using a Canon EOS 1000D camera, by illuminating samples from the back and placing them on a single, or between two crossed polarizer sheets (Thorlabs, LPVISE2 × 2).
Polarized photoluminescence and Raman scattering images were acquired using a WITec alpha300 RA confocal Raman microscope. The samples were placed on a combined step motor/piezo stage and excited through a 10× objective using solid state lasers centered at 488 and 785 nm. Images were analyzed using WITec Project FOUR software.
Space-resolved Mueller-matrix measurements based on the transmission of light through the spherulites were done with a custom-made Mueller-matrix microscope that has been described in detail elsewhere. [31,32] The basic optical design of the instrument is based on a commercial Carl-Zeiss JENAVAL polarization microscope and it allows for measurements at different levels of magnification and wavelength. Briefly, this instrument works by frequency-analyzing the intensity of light captured at every pixel of a camera detector, while two rotating film compensators, one placed before the sample and the other one after it, continuously modulate the polarization of light with two distinct frequencies. The Mueller matrix captured at each pixel of the image can then be processed to calculate, among others, the linear birefringence and linear dichroism at that point of the sample with a simple analytic formalism. [33,34] The spectroscopic values of the linear birefringence and linear dichroism were measured in transmission with a Mueller-matrix polarimeter. [35] The spot size had a diameter of ≈0.8 mm so that different parts of the spherulite could be selected for measurement.
Spectroscopic ellipsometry was measured with a Sopralab GES5E rotating polarizer ellipsometer with a focused spot size of ≈200 µm. Based on the images of linear dichroism and linear birefringence, we measured at several points with the plane of incidence of light oriented parallel or perpendicular to the directions of largest absorption or refraction, which coincide with the directions along or perpendicular to the fibers. Within the experimental error, the anisotropic optical response is uniaxial with a variable in-plane optic axis aligned with the fibers. [36] Thus, the extraordinary components n e and k e are high and the ordinary ones n o and k o are low. Accordingly, the ellipsometric spectra were analyzed by combining pairs of spectra measured in the two described configurations to evaluate the film thickness and the n and k parameters of the uniaxial films. The spectra vary from point to point due to changes in thickness, roughness, and crystallinity. From these, the thickness seems to be rather constant. In order to obtain a reliable description of the spectral optical anisotropy, the ellipsometric and polarimetric results were combined. For further details regarding the ellipsometric characterization, see Note S9 in the Supporting Information.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
M-O-V-E APPROACH TO ACQUIRING PATENT LAW VOCABULARY
The paper addresses the issue of lifelong learning of professional vocabulary through the prism of the M-O-V-E approach to acquiring Patent Law vocabulary during a course of English for specific purposes. The work outlines the frame-based grounding on which the approach is built and exercised. Frame-based analysis supplies the core for selecting the study material and preparing Patent Law class handouts. The shift in the teaching paradigm from Teacher-centred Learning to Student-centred Learning is articulated in terms of its impact on the learner, making the latter a motivated investigator ready for a fruitful interactive study process. Through M-O-V-E, the following aspects of acquiring Patent Law vocabulary are brought to light: 1) motivation of the modern learner; 2) the opportunities for effective facilitation in the classroom; 3) verification of the knowledge gained through the study; and 4) engagement in a professional career that draws on the knowledge gained during university studies and supports further professional advancement. The work also provides practical guidelines for the harmonious implementation of M-O-V-E into the continuous, lifelong educational process of the future legal professional.
Introduction
Effective communication is considered to be a vital tool of a lawyer, since it sustains fruitful interaction between a client and a legal professional. Success in the legal environment depends greatly on the level of communicative skills developed during the educational process, pupillage, and the pursuit of a career in law. No one can be surprised that the capacity to communicate efficiently is designated by a modern employer as a must: according to Forbes' official report for 2019, communication remains among the top 5 critical competences of a modern lawyer 1, and in 2020 Prospects 2 lists communication as # 4 among the skills required for an excellent career in law. The data obtained illustrate that such once fundamental qualities of the legal profession as creative problem solving, information analysis and research, organisation, resilience and self-confidence, initiative, and the ability to work under pressure are left behind by communicative competence. Yet the issue of communication is quite controversial, and the truth once stated by Plato remains relevant to today's education: "Wise men speak because they have something to say; fools − because they have to say something". University education is not about being a fool, and thus the paper addresses a collection of tools for developing professional vocabulary in those learners who refuse to say something merely because they need to produce an utterance or to earn at least some score. Modern university students are ready to develop their communicative skills through the desire to speak.
The study covers the gap between research in the fields of legal terminology, language philosophy, semiotics, and rhetoric by offering a frame-semantic approach to utilizing the methods of teaching English for specific purposes. The language factor in the training of legal professionals is given attention in such interdisciplinary research as Legal Discourse by P. Goodrich (Goodrich, 1987), which focuses on the aspects of rhetoric in relation to jurisprudence and legal analysis, science and legal faith. Another contribution to the study of the language aspect in the field of law is made by P. Tiersma 3, L. M. Solan 4, J. Ainsworth, and R. W. Shuy. Their joint research raises the issue of adhering to different communicative strategies in the English-speaking legal environment, whereas the study of D. Cao 5 uncovers the multilingual impact on the legal profession. There are also studies shaping the understanding of legal discourse in terms of forensic linguistics, outlined by R. W. Shuy 6. Furthermore, investigations covering the language aspect of such topics as rape and bankruptcy trials are produced by a team of scholars: D. de Carvalho Figueiredo, Ch. Heffer, S. Bernstein, M. Coulthard, A. Johnson, G. Stygall, and R. Moekitsi 7, who pay prime attention to the phonetic and pragmatic peculiarities of communication when interviewing a client and receiving testimonies, considering the effects of multicultural courtroom environments. The work of R. C. Wydick, "Plain English for Lawyers" 8, offers practical advice to those studying legal language and deserves specific mention since it uncovers the practical need for, and a way of, simplifying EFL (English as a Foreign Language) classes in graduate education through the reduction of surplus lexis, overly complex and long syntactical constructions, and old-fashioned phrases, the application of which requires solid evidence and legal grounding. The set of works mentioned above leaves the development and application of professional vocabulary for law open for research. Thus the question under investigation is of high importance, as it sheds light on the M-O-V-E technique as one combining the results of cognitive and semantic research with the methodology of language teaching and studying. This provides the paper with the possibility to satisfy the need for practical and applicable novelty. The paper aims at exploring the effectiveness of the M-O-V-E approach as a set of teaching and learning techniques for shaping the vocabulary for Patent Law and crystallizes practical guidelines for its continuous expansion, which is underpinned by frame semantics and elements of onomasiological as well as semasiological analyses. The study sets the following objectives: 1) to figure out the composition of the grounding for M-O-V-E; 2) to elicit the practical application of M-O-V-E to vocabulary for law; 3) to outline the set of modern educational means honing the successful application of frame-based teaching and learning of Patent Law vocabulary through M-O-V-E; 4) to provide a view on incorporating M-O-V-E into the educational process.
The basics of M-O-V-E approach to building the professional vocabulary
"Education is not the learning of facts but training of the mind to think" (Albert Einstein).The thought outlined by Albert Einstein long ago is transparently describing the educational process as one provoking the student to move towards the knowledge instead of remaining inert during perceiving the information provided by an educator. The same principle is exercised within M-O-V-E approach (Dunaievska 2020(a) in acquiring the professional speaking skills in general and in laying the grounding for the skills via upbuilding the professional vocabulary in particular. Utilisation of the four-staged educational scenario M-O-V-E extends from lexis acquiring to targeting the limitless opportunities for its expanding through lifelong application. The approach fits the university environment since it fixes the issues with modelling the real life situations and postulates learning. It replaces the conventional scheme of pure learning of the terminology accompanied by its meaning and its examples in the context. Each level of the approach maintains the links to the real-life situations established primarily through frame-based analysis to make both the preparing for teaching and facilitating in learning to become the interactive processes. The frame-based approach assumes the notions of encountering the situations based either on evoked framing serving the familiar situations or invoked framing provoked by the events previously unknown to the participants (Fillmore 1982: 20; Dunaievska 2020 (c), so the task of the educator is to make as many transitions from invoked framing to the evoked one as possible, due to the fact that a learner may possibly be engaged in these situations in the future career. Also, the frame-based vocabulary assumes the operation with possible legal career events through using the action frame structures outlined by S. A. Zhabotynska (Zhabotynska 2010).
Thus the four-staged construction of the 'M-O-V-E' approach to teaching a foreign language for specific purposes is pillared by frame-based vocabulary formation: M signifies Motivation of the modern learner, O stands for the Opportunities for effective facilitation in the classroom, V denotes Verification of the knowledge gained through the study, and E signifies Engagement, which outlines the fourth stage of acquiring professional vocabulary.
The interaction between the components of M-O-V-E in action is depicted in the following figure, where frame-based vocabulary formation constitutes the basis for all four stages of the approach.
Figure 1. Interaction of M-O-V-E components
The figure introduces the dynamic character of developing the professional skill through the application of M-O-V-E in teaching foreign language vocabulary for specific purposes. Each component of the M-O-V-E approach can be applied to teaching Legal English course vocabulary and covers a stage that activates a certain set of requirements for conducting the educational process.
Development of communicative skills in Legal English as a Foreign Language classes requires approaching the modern learner as an active participant and a motivated explorer of information. The modern learner is not just a receiver of the study material but an active participant in the search for and accumulation of knowledge, where the teacher is a facilitator whose main task is to provoke the strive for advancement in a student. The change owes to the conceptual shift in the educational paradigms from Teacher-centred Learning to Student-centred Learning (SCL) 9. The modern approach to the process of education represents a notion of equal accessibility to the source of knowledge, where both a student and an educator have levelled opportunities in accessing the study material, but the educator, being a facilitator, navigates the learner so that the latter does not lose focus on the prime things and filters out deceiving facts. The following figure depicts the shift in paradigms and explains the basic differences in the perception of the modern educational process.
Figure 2. Shift in Educational Paradigms
The mechanism depicted in Figure 2 addresses the processes occurring in the change affecting access to the information (study material). The part of the figure corresponding to the Teacher-centred paradigm drafts the mechanism as one that follows the route from point # 1, designating a process performed by an educator, through point # 2, involving the medium and the form of transferring the information / study material to the learner, and reaching the final point at spot # 3, encompassing the learner's feedback after reflection on what was transferred from the educator earlier. The scheme illustrates a non-finite process due to the fact that the educator may distinguish gaps in the knowledge gained by the learner and thus activate the necessity to improve the quality / content of the study information or upgrade the media of transfer, launching the process again from point # 1. The key focus of the part corresponding to the Teacher-centred paradigm falls upon the transformation of the learning process of the participants and the type of their interaction in the classroom: information → educator → learner, where the educator becomes an intermediary between the learner and the information / study material. This, partly, involves a certain level of subjectivity due to the fact that the information / study material gets interpreted through the prism of the educator's understanding. On the other hand, the second part of the figure depicts a more advanced interaction in terms of independent perception. In the Student-centred paradigm the learners get equal access (# 1 and # 2) to the authentic information / educational material, and not to material that has already been reflected by the teacher (as in the Teacher-centred paradigm). In this way, the learner has to act towards working out the best method to acquire knowledge and to exhibit it (# 3), and, simultaneously, the learner is directed and controlled by a facilitator throughout this process (# 4). The paradigm is also of an endless nature, as the issues spotted at point # 4 give rise to launching the detection of the failing elements in the chain material → educator + learner → educator + correction or optimisation of knowledge gained → learner.
The teacher is no longer the only trusted source of information, and a student is free to work with educational material from the first minute of engaging with a discipline. When the paradigm shift becomes obvious, the era of the teacher as the only trusted source of knowledge and experience is behind us. The issue is reflected in the guidelines provided by D. Brandes and P. Ginnis, where a student accepts responsibility for planning the curriculum, which results in a full understanding of their place in the studying process of student-centred learning (SCL). 10 The first of the four pillars of the M-O-V-E 11 approach is erected by motivation. The environment created by the European Higher Education Area (EHEA) in 1999 has introduced the principles of transparency and mobility and a great emphasis on learners as proactive ones, opening new horizons for motivated students to become the core of the modern system of education.
The shift in the approach to education opens the classroom to new methods and tendencies in teaching and at the same time enables the university staff to adapt the study process to the changes. Special attention must be paid to those educators who are ready to provoke their students to think rather than to passively perceive the educational material. The educators' kits get extended by opting for novel educational components stimulating the learner to be proactive in mastering their career. Moreover, law schools are among the frontiers in upgrading the educational paradigm. In fact, these are the institutions that expect "effective communicators, critical and dynamic thinkers, problem solvers, and career experts" to graduate from the educational establishments for law (Živković 2016: 102). Concerning the point elicited by S. Živković, it becomes quite obvious that the development of professional vocabulary is listed among the top priorities of the legal education curriculum. Since the latter tends to follow the principle of diversity, its "objectives are not limited to linguistic factors, but also include developing of art and critical thinking" 12, though "actual business in law is transacted and executed in language". 13 In this case, the development of communication skills is mandatorily accompanied by critical thinking, problem solving, and career advancement (Dunaievska 2020 (a): 176). This set of techniques underpins the grounding for the next pillar of M-O-V-E - Opportunities. The pillar encompasses the variety of methods / techniques for acquiring Legal English vocabulary, which creates a range of opportunities for the learners while dealing with studying the vocabulary for law. Among the possible options are Problem-based, Project, and FLIPped Learning, due to their focus on the development of features critical for a modern student of law - proactivity and problem solving - as they are likely to also bring to the classroom the techniques underlying critical thinking and creativity.
Problem-based learning is grounded in a proactive learning strategy: the learners are stimulated to earn their knowledge and competences gradually. Problem-based learning offers a variety of activities nurturing the strive for getting to know more, independence, and self-management, together with working out feedback with orderly executed analysis. Furthermore, the strategy of problem-based learning engages the motivated learner (Gabriel et al., 2015: 1867) and due to this fact it is more than an efficient contribution to M-O-V-E. 10 FLIPped Learning, having started as the "peer instruction" exercised by Harvard Professor E. Mazur together with his "just in time teaching" 14, turns out to be a fruitful challenge in terms of the Learner-centred paradigm. The method is practically embodied in a classroom flip, or a depolarisation of the teacher → learner interaction process towards a cooperation, which results in engaging the student in independent research and in perceiving the teacher not as the only source of verified information but rather as an experienced assistant in the search and a facilitator in the whole study process. Later, the method of a "flip" is taken up by J. W. Baker and gets employed as an extremely productive teaching instrument in active learning: a learner is provided with a mentor rather than a teacher "by his side". 15 Being successfully utilised by J. Bergmann and A. Sams, 16 learning with a flip, or FLIPped learning, grew into an innovative medium for effective facilitation in the academic classroom. Students are instructed to access the study material before the class takes place, and are actually doing their own research in their own comfortable way, using the media to perceive the study material according to their taste (accompanied by music / in a quiet place / listening to podcasts or watching video records). This appears to be particularly useful in a remote mode of education, when a learner earns an opportunity to be an active user of their background knowledge and to acquire skills in this process, which makes them active learners, not bound by bias.
Among the techniques for handling M-O-V-E effectively is the one which makes law students think critically due to its ability to fix the problem of the modelled life situation in a classroom environment, because "by allowing students to think critically about how a concept applies to a real-life situation, it deepens the quality of their understanding and their ability to apply their prior knowledge on new situations" (Živković, 2016: 107). The technique corresponds to the way the basis for M-O-V-E is formed - the utilisation of the frame-based approach. The second technique is the training of a creative thinking pattern, which is put by B. Bloom on the highest level facilitated in a learning environment, with the potential to be reached by acting in the classroom while a student undergoes a chain of stages when dealing with the study material: remember → understand → apply → analyze → evaluate → create. The chain is a vital source to adhere to when verifying the steps of evaluating any type of information and is advised to be followed in building professional vocabulary through M-O-V-E. Thus, the stage "Opportunities" gives the learner a possibility to encounter the new lexis through such steps 17 as:
Step 1: Appellation to the facts / basic concepts through memorizing the repeated number of the analysed situations (in terms of the frame-based approach - frames);
Step 2: Figuring out the ideas and concepts via applying the classification, translation, and selection of the corpus of lexis based on the situations they appear in;
Step 3: Employing the learnt data in unknown situations through sketching and scheduling;
Step 4: Establishing the ties with new situations through contrasting and comparing the already familiar lexis;
Step 5: Proving and supporting the need for the application of the known lexis while justifying a stand or decision by appraising, arguing, and supporting (for example: mooting);
Step 6: Creating an original piece by designing, assembling, constructing, and investigating the new vocabulary items through engagement in legal assistance as a paralegal in a law clinic.
To support the lifelong priority of applying the vocabulary acquired through M-O-V-E, the stages of V - Verification and E - Engagement are supplied. The basis for verification of the frame-based vocabulary may vary depending on the route into practice the learner decides on. The optimal ways to verify the accumulated vocabulary include participation in internship programmes, volunteering in the local law clinic while advising foreign clients, and participating in international conferences, workshops, and meetings. Concerning the stage of engagement, it must be admitted that a motivated learner will be ready to get engaged in different cases presupposing a proficient command of the foreign language for law.
Framing of the Patent Law vocabulary corpus
Since the M-O-V-E approach to developing the professional vocabulary for law has frame-based analysis as its prerequisite, it is necessary to follow the mandatory steps 18 bearing a joint character of onomasiological and semasiological 19 features, which shape the following procedures: the semasiological aspect of framing Patent Law vocabulary: integrative defining; composite defining; spotting the basic constituents; corpus fracturing; the onomasiological aspect of framing Patent Law vocabulary: situation-based sorting; identifying the slots; outlining the frame pattern / scheme.
The frame-based vocabulary analysis is ordered according to the following chain of procedures: 1) integrative defining → 2) composite defining + situation-based sorting → 3) spotting the basic constituents + identifying the slots → 4) corpus fracturing + outlining the frame pattern / scheme.
The application of the integrative defining aims at determining the number of dictionary articles explaining the analysed notion which at the same time function as "containers for universal knowledge" on the issue. Finally, the lexis is sorted with its class reference and the scheme of the frame containing the universal understanding of the notion and the situation connected with it is outlined.
On the first stage - integrative defining - the analysis of patent dives into multiple definitions of the notion under investigation, aiming at providing a definition composed from 5-6 or more available ones. Integrative defining of patent is assisted by a OneLook dictionary 20 search, since the latter provides multiple definitions sorted from dictionary articles explaining the lexeme "patent" as: 1) Merriam-Webster 21 : an official document conferring a right or privilege; a government authority or license conferring a right or title for a set period, especially the sole right to exclude others from making, using, or selling an invention; 2) Oxford dictionaries 22 : a government authority or license conferring a right or title for a set period, especially the sole right to exclude others from making, using, or selling an invention; 3) Collins Dictionary 23 : an official right to be the only person or company allowed to make or sell a new product for a certain period of time; 4) Macmillan Dictionary 24 : an official document that gives someone who has invented something the legal right to make or sell that invention for a particular period of time, and prevents anyone else from doing so; 5) Cambridge Advanced Learners' Dictionary 25 : the official legal right to make or sell an invention for a particular number of years; 6) Legal Dictionary Law Com. 26 : an exclusive right to the benefits of an invention or improvement granted by the U.S. Patent Office, for a specific period of time, on the basis that it is novel (not previously known or described in a publication).
On the second stage, the content of the definitions is sorted out based on their reference to the situation, so composite defining focuses on the evaluation of the basic components crystallized from the definitions retrieved from the 6 dictionary articles. The following constituents of patent have been distinguished: things and their characteristics related to a patent: an official / legal / exclusive document, right to, the benefits, company, a government authority, a license; the US Patent Office, a new product, an improvement, a publication; people related to a patent: an inventor, the only person, someone who has invented something, anyone else; actions related to a patent: to grant, to make, to be allowed, to exclude, to sell, to use, to give, to prevent, to be known, to be novel, to be described; markers of time, place, and manner related to a patent: for a specific period of time, for a certain period of time, on the basis that it is novel.
Utilising the results of the outlined constituents' analysis, the situation-based sorting allows the learner, as a law student, to familiarise themselves with the potential events connected with patents and Patent Law which they may encounter in the future. In fact, the effectiveness of this stage will influence the number of frames the learner will be ready to evoke while facing issues regarding patents and Patent Law. Driven by the results of composite defining, one can outline the following events: Inventing Something Novel; Meeting Legal Requirements; Interaction with the US Patent Office; Describing a Patent; Obtaining a Right; Authorising a Patent; Earning the Benefits; To Be Allowed to Use Something Novel; Selling a Patent; Obtaining a License; Granting a Patent; Using a Patent for a Certain Period of Time. This is only a short and preliminary set of the potential situations with reference to patent and Patent Law, which sets the guidelines for constructing the core categories for the rest of the corpus content of patent- and Patent Law-related lexis.
Furthermore, these situations are the determiners launching the search for the lexis regarding each situation or an event pertaining to it.
Also, it is important to note that, while facilitating the acquisition of vocabulary regardless of the field of law or specialisation, a learner (whether a junior / senior / undergraduate student or a paralegal) must bear in mind the function they perform in all the processes pertaining to the domain of law. The function of a lawyer in the overwhelming majority of cases / situations or events, according to research conducted in 2020, is an implied one and not evident or explicitly observed from the description of the event. A lawyer can be pointed out as an independent intermediary during disputes, proceedings, and prosecutions, doing advocacy or providing advisory services (Dunaievska 2020 (b): 37). Grounding on the findings of the latter, it becomes evident that each situation connected with legal advice or another legal issue implicitly includes the job a learner is supposed to do upon graduation, exclusively on behalf of one of the sides: either a claimant / a defendant in civil cases or a prosecutor / a defence attorney in a criminal case.
Further work over the corpus content of the situations can be illustrated by some cases of utilising the previously outlined events. For example, the situation of inventing something novel can be approximated via the action frame structure: Somebody (a person) - Acts (invents) - Something. The action frame structure can further be split into slots filled with the lexis used to mark the participants of the event. Slots function as containers for verbalising each of the participants, their actions, and their characteristic features. The lexical material for filling in the slots is collected from the major and additional sources offered by an educator; with reference to a patent and Patent Law, a number of sources draw particular attention. All the above-mentioned educational material handles the lexis corpus of patent and Patent Law to be applied in the educational process and, as a result, is attributed to categorising during frame-based analysis, being aided by placing items into the proper slots of the particular frames (situations / events potentially touching upon patent or Patent Law).
Slot Somebody of the event of inventing something novel encloses such lexemes as: a candidate, an inventor, an owner, a researcher, an entrepreneur.
Slot Acts (invents) contains: to create, to invent, to modernise, to revolutionise, to improve, to upgrade, to advance etc.
Slot Something includes: model, appliance, improvement, method, mechanism, process, machine, manufacture, composition of matter etc.
The next situation in the matter of a patent is described via frame "Meeting Legal Requirements" through action frame structure: Something (an invention) -Acts (meets) -Somethings (requirements). The slot Something is to be filled with previously presented lexis from the frame fixing the situation of inventing something novel. The slot Acts (meets) is objectivised through lexis: to be patentable; to be novel; to be utilised; to be non-obvious. Each constituent of the slot gets further extended due to the requirements established by the particular state or the union of states. The slot Requirements contains: patentability, novelty, non-obviousness, uniqueness, fair representation etc.
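For educators who prefer to keep such frame-slot structures in a machine-readable form (for example, to generate the class handouts automatically), the two frames described above could be stored as a simple nested dictionary, as in the Python sketch below. The data layout and the helper function are illustrative assumptions only and are not part of the M-O-V-E methodology itself.

```python
# Illustrative storage of the action-frame structures and their slots described above;
# the layout is an assumption, not a prescribed part of the M-O-V-E approach.
PATENT_FRAMES = {
    "Inventing Something Novel": {            # Somebody - Acts - Something
        "Somebody": ["a candidate", "an inventor", "an owner",
                     "a researcher", "an entrepreneur"],
        "Acts (invents)": ["to create", "to invent", "to modernise", "to revolutionise",
                           "to improve", "to upgrade", "to advance"],
        "Something": ["model", "appliance", "improvement", "method", "mechanism",
                      "process", "machine", "manufacture", "composition of matter"],
    },
    "Meeting Legal Requirements": {            # Something - Acts - Somethings
        "Something (an invention)": [],        # reuses the lexis of the previous frame
        "Acts (meets)": ["to be patentable", "to be novel", "to be utilised",
                         "to be non-obvious"],
        "Requirements": ["patentability", "novelty", "non-obviousness",
                         "uniqueness", "fair representation"],
    },
}

def print_toolkit(frames):
    """Render the frame-slot vocabulary as a plain-text handout section."""
    for frame, slots in frames.items():
        print(f"Frame: {frame}")
        for slot, lexemes in slots.items():
            filling = ", ".join(lexemes) if lexemes else "(see previous frame)"
            print(f"  Slot {slot}: {filling}")

print_toolkit(PATENT_FRAMES)
```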
Corpus structuring presupposes having the exact number of situations crystallised during integrative defining: the educator sorts the corpus (the body of lexis pertaining to patents and Patent Law), created by collecting terms from sources on the topic, according to the situations in which they might be used and the slots to which they might further belong. It is highly recommended to perform the frame-based analysis of patent and Patent Law vocabulary before starting the topic of Patent Law with the students of a Law school, since its results are to be transparently listed in handouts for students with the aim of making their application easier.
Implementing M-O-V-E into the classroom activities
The results of a carefully executed frame-based vocabulary analysis can be listed in a table containing the probable patent and Patent Law situations, which may take the form of the Vocabulary toolkit "Patent / Patent Law". The form is inserted into a Google document so that it is easily accessible and transparent for a highly motivated learner. In line with the requirements of FLIPped Learning, the learners are provided with both the link for accessing the file and editor rights for copying, editing or commenting on the document, if necessary, a week before the classes on patent and Patent Law start, so that they have enough time to reflect on the offered technique. Each learner in the course is required to copy the Vocabulary toolkit form, to enable the educator to access the document via the link, and to grant the educator editor rights for adding comments or offering corrections to their personal toolkit. A learner is required to extend their Vocabulary toolkit on patent and Patent Law during the whole period of working on the topic. While undergoing the course on patent and Patent Law, the learners are offered various opportunities for acquiring the relevant vocabulary in the following mode: 1) they are supplied with study material before the classes; 2) the learner attends the class fully equipped with the study material and well prepared for active participation in discussions and other engaging activities; 3) the learner is required to join a study group carrying out a project-research dedicated to solving a distinguished problem or challenge in the field of Patent Law, working in a team as well as on an individual basis to create unique and independent work. While completing the project-based task and solving the issues of Patent Law, the learner develops their command of critical thinking and creativity. Upon completion of the Patent Law course, the Vocabulary toolkit developed throughout the course remains in intensive use and is continuously extended while dealing with other issues in Legal English classes. The educator offers the learners further options, for example volunteering at the university law clinic by assisting legal professionals with their foreign visitors, or preparing conference proceedings in Patent Law for international participation, which makes the learner exercise and verify their foreign-language speaking skills and upgrade the level of their vocabulary. Furthermore, the learners have the opportunity to extend their vocabulary on the spot by adding newly encountered and previously unknown lexis to their Vocabulary toolkits, since these are kept in a Google Document with 24/7 access. Finally, the learners are highly recommended to join professional communities for Patent Law professionals to enable the continuous enrichment of their professional vocabulary through engagement in the activities and projects of the community.
Conclusions
The application of the M-O-V-E approach to acquiring professional vocabulary is an effective addition to the development of a legal professional as a skilful communicator who speaks because they have something to say. Moreover, M-O-V-E can be regarded as a valuable addition for teaching the vocabulary of Patent Law, since it addresses the policy postulated by The European Framework of Key Competences for Lifelong Learning and thus supplies a prerequisite for a lifelong learning inventory, given that it presupposes the limitless development of the personal professional vocabulary set. The whole idea of M-O-V-E is to nurture the acquisition of professional vocabulary for Patent Law, or any other domain, on the basis of transforming invoked framing (unfamiliar situations), through exercising professional lexis, into evoked framing (situations familiar at least to some extent), which aims at aiding the future legal professional in applying the gained communicative skills and learnt vocabulary without any barriers.
Wind Energy Potential Assessment by Weibull Parameter Estimation Using Multiverse Optimization Method: A Case Study of Tirumala Region in India
In this paper the multiverse optimization (MVO) was used for estimating Weibull parameters. These parameters were further used to analyze the wind data available at a particular location in the Tirumala region in India. An effort had been made to study the wind potential in this region (13°41′30.4″ N, 79°21′34.4″ E) using the Weibull parameters. The wind data had been measured at this site for a period of six years from January 2012 to December 2017. The analysis was performed at two different hub heights of 10 m and 65 m. The frequency distribution of wind speed, wind direction and mean wind speeds were calculated for this region. To compare the performance of the MVO, gray wolf optimizer (GWO), moth flame optimization (MFO), particle swarm optimization (PSO) and other numerical methods were considered. From this study, the performance had been analyzed and the best results were obtained by using the MVO with an error less than one. Along with the Weibull frequency distribution for the selected region, wind direction and wind speed were also provided. From the analysis, wind speeds from 2 m/s to 10 m/s were present in the sector 260-280° and winds from 0-4 m/s were present in the sector 170-180° of the Tirumala region in India.
Introduction
The increasing energy demand from all sectors is placing stress on fossil fuels, even though it is known that excessive use of these fuels substantially increases environmental pollution. A developing country like India, the second most populous country in the world, requires more energy sources than almost any other country. As fossil fuels play a major part in pollution and global warming [1,2], there is a need to focus on developing alternative energy sources to meet the energy demand, and it is now time to increase renewable energy capacity because of its inexhaustibility and eco-friendliness. Wind is stochastic in nature, i.e., its speed and direction vary with time. With clear knowledge of the statistical properties of the wind, it becomes easier to predict the energy available at a particular location. Among the statistical analyses [3], the Weibull distribution plays an important role in estimating the wind potential of any region.
As per the Global Wind Energy Council (GWEC) report of April 2018, the total installed capacity of wind turbines is 539,123 MW. Of this, India's contribution was around 32,848 MW by the end of 2017, up from 28,700 MW in 2016. This shows the wind potential of India in the Asian subcontinent, which needs to be explored further in order to meet the energy demand. In 2017, offshore wind turbine installations accounted for 4334 MW, an increase of 87% compared with 2016; with this, the total worldwide offshore installation reached 18,814 MW and is growing faster than expected.
As stated earlier, compared with 2016, India added 4148 MW to the grid, crossing 4 GW in a single year for the first time. The total electricity produced from wind in India in 2017 was 53,726 GWh, a 4.35% share of the total consumption. India now ranks second in the Asian wind energy market and fourth globally. Knowing the wind potential [4] at different locations is required for new installations, for improving grid integration and also for reducing dependency on fossil fuels. Characteristics such as wind speed and wind direction vary with geographical location, even between locations close to each other. India has an area of 3.287 million km² and a population of about 1,342,512,706 according to data provided by the Government of India. As per forecast reports, energy consumption will increase by up to 60% by 2040, i.e., 4.2% annually; if this is the case, the country will not be able to meet the electricity demand with the present fuels.
The total number of wind turbines installed in India up to 2017 was 32,136, and this number is still growing. Recently, many studies have been conducted to estimate wind potential using probability distribution functions, and some of these studies for different regions are as follows. In Kutahya, Turkey [5], the wind potential was studied for two years at hub heights of 10 m and 30 m, and in [6] the region of Tehran, Iran, was studied. In [7], wind speeds were collected at three hub heights of 10 m, 20 m and 30 m using two meteorological stations in Borjcedria from 2008 to 2009, and the results were used for selecting the best wind turbine for the chosen site. In [8], wind data from 2001 to 2006 were used for calculating the Weibull parameters [9]; the results showed that the site was well suited to grid power generation as well as water pumping and battery charging applications.
In the literature, many researchers have studied the characteristics of the Indian wind scenario with a view to selecting suitable locations for different applications. Kumaraswamy et al. [10] studied wind farms located at three different sites over a period of one year and estimated the wind potential over the west-central region of Karnataka; the Weibull distribution was used for this study, and EP-02 was found to be the best site for optimum wind energy production. Murthy and Rahi [11] carried out a study to assess the possibility of installing a wind farm in Himachal Pradesh using micro turbines. They reported that Hamirpur, a hilly site in Himachal Pradesh, has wind potential of a large magnitude from March to June for low-rated wind turbines. The state of Gujarat has the highest wind potential, with 35,071 MW, followed by Andhra Pradesh with 14,497 MW. Tamilnadu also has an abundant wind resource across the Kanyakumari coastal area, amounting to 14,152 MW, followed by Karnataka (13,593 MW) and Maharashtra (5961 MW); the remaining states all have small installed wind energy capacities. In 2010 the energy market regulatory agency (EMRA) brought in a new procedure for meteorological wind measurements; this regulatory agency states that a site must have at least one year of measured data before a wind farm is installed.
To know the wind potential at any location, modeling the wind behavior is an important factor: identifying the pattern and predicting the wind behavior play an important role in determining the wind potential. Several probabilistic frequency distributions have been presented in the literature; the gamma, log-normal, Weibull and normal distributions are some of the popular ones used for modeling wind speed. However, the conventional approaches to determining the Weibull parameters are often unable to adjust the fit to the wind distribution histogram, so alternative methods for finding the Weibull parameters have to be investigated.
At present, intelligence techniques are applied to optimize the Weibull parameters, which has reduced the errors in estimating electricity production. Generally, optimization techniques are divided into single-solution-based and population-based methods. Single-solution-based optimization starts with a single random solution and improves it over a pre-defined number of generations; here there is no information sharing, and issues such as local optima, deceptiveness and premature convergence arise with a single solution. In population-based algorithms, the optimization starts with a set of random solutions that improve as the iterations increase. The main advantage of population-based optimization is that information is exchanged among the candidate solutions, so local optima, search-space bias and premature convergence can be handled more easily and faster. Meta-heuristic optimizers such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and the differential evolution algorithm (DEA) have overcome the limitations of single-solution optimization methods. Therefore, in this paper a multiverse optimization (MVO) technique is implemented to estimate the parameters so as to adjust and fit the actual wind profile. This methodology serves as an innovative solution for any particular wind conditions, provided a well-defined pattern is available. The MVO outperforms other optimization techniques and is applied here to wind energy applications to achieve rapid convergence in estimating the Weibull parameters. The MVO algorithm is compared with PSO, one of the best swarm-intelligence-based techniques, and with GWO and MFO as two of the most recent algorithms. The proposed algorithm has a high exploitation ability due to the combination of the WEP/TDR coefficients and the wormholes, while its superior exploration is due to the white and black holes exchanging objects between universes.
Here, the wind data for a period of six years, from January 2012 to December 2017, in the Tirumala region (13°41′30.4″ N, 79°21′34.4″ E), located in the southern part of India, are studied. In order to analyze the wind distribution, the scale and shape parameters are determined using MVO. This gives two values, which are further used to determine the probability density functions of the Weibull and Rayleigh distributions. From this study it is estimated that there is sufficient wind potential in this region for generating additional wind power to drive wind turbines. ArcGIS (10.6, Esri, Redlands, CL, USA) is used for determining the elevation of the site, and a wind rose is used for studying the wind direction [12].
The rest of the paper is organized as follows. The wind data measured at the selected location and their characteristics are described in Section 2. The proposed MVO is explained in Section 3, where the data are also analyzed using different distributions such as the Weibull and Rayleigh distributions. Section 4 deals with the analysis of the wind data using statistical parameters such as the mean, variance, standard deviation and root mean square error (RMSE), and Section 5 covers the conclusions.
Location of Site and Collection of Wind Data
An Indian site, Tirumala (13°41′30.4″ N, 79°21′34.4″ E), was selected for performing a statistical analysis of wind speed and direction. Hourly wind data were collected for a period of six years, from 2012 to 2017. The site is a relatively cool and humid place with good wind density. Tirumala is located at a distance of 28 km from Tirupati city in Andhra Pradesh. As per the 2011 census, the population of Tirupati is around 3.74 lakhs, and because of this large population the location was chosen for study. The wind speeds and directions were measured using anemometers at two different hub heights of 10 m and 65 m. The measuring instruments were installed by the National Atmospheric Research Laboratory (NARL), located in Gadanki near Tirupati.
Two important observation devices were used, namely a Doppler Sodar (MFAS64, SAMEER, Mumbai, Maharashtra, India), which operates over a multi-frequency range of about 1600-2500 Hz, and a UHF (ultra-high frequency) Wind Profiler (1280 MHz, PEC India, New Delhi, India). The measured hourly data were averaged monthly for the 10 m and 65 m heights and are shown in Figure 1. From Figure 1 it can be clearly observed that the maximum average wind speed, in the month of January, was 4 m/s at 10 m and 5.2 m/s at 65 m. Similarly, the lowest average wind speed occurred in the month of August and was 2.5 m/s at 10 m and 3 m/s at 65 m. The wind speed slowly decreased from January onwards and reached a maximum again in the month of December.
The average wind speed (v_m) and the standard deviation (σ) were calculated using the following expressions:

v_m = (1/n) Σ_{i=1}^{n} v_i, (1)

σ = [ (1/(n-1)) Σ_{i=1}^{n} (v_i - v_m)^2 ]^(1/2), (2)

where v_i is the wind speed at the ith hour and n is the number of hours. From the average value (v_m), the standard deviation σ can be calculated from Equation (2). For higher usable wind speeds, the average value must be high and the standard deviation should be low; a low standard deviation shows that the dataset is more uniform. The total number of data samples collected was 52,609, of which 310 samples were missing; the remaining data were taken into consideration for the analysis. The surface plot for the selected site is shown in Figure 2 and was obtained using the ArcGIS software. From Figure 2 it can be stated that the highest altitude is nearly 960 m above sea level, indicated by the violet colour.
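As a small illustration, a Python sketch of this preprocessing step is given below. It assumes the hourly records are held in a numeric array with missing samples stored as NaN, which is an assumption about the data format rather than something stated in the paper; the synthetic example data are not the measured Tirumala record.

```python
import numpy as np

def hourly_stats(wind_speeds):
    """Mean wind speed v_m and standard deviation sigma (Eqs. (1)-(2)),
    ignoring missing samples encoded as NaN."""
    v = np.asarray(wind_speeds, float)
    v = v[~np.isnan(v)]                # drop the missing records
    v_m = v.mean()
    sigma = v.std(ddof=1)              # sample standard deviation
    return v_m, sigma

# Illustrative call on synthetic hourly data
rng = np.random.default_rng(0)
speeds = rng.weibull(1.6, 24 * 30) * 5.0
print(hourly_stats(speeds))
```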
Multiverse Optimization Method for Parameter Estimation
The solution of an optimization problem often involves artificial intelligence, which is used in various applications. Such optimization relies on probabilistic models of naturally occurring phenomena to determine an optimal solution for a function; although it does not guarantee the best possible solution, it converges quickly towards a good one.
Multiverse optimization (MVO) is an evolutionary technique developed by Mirjalili, inspired by concepts from cosmology. The theory behind it involves white holes, wormholes and black holes, which are mathematically modelled to construct the MVO. In this algorithm the search is divided into two phases, i.e., exploration and exploitation. The white hole and black hole concepts are used to explore the search space, while the wormholes guide the MVO in exploiting it.
Here each solution resembles a universe, and each solution is assigned an inflation rate that correlates with its fitness value. The following rules are used in MVO: when the inflation rate is high, the probability of having a white hole is high; when the inflation rate is high, the probability of having a black hole is low; a universe with a higher inflation rate sends objects through white holes; a universe with a lower inflation rate receives objects through black holes; and objects may move randomly towards the best universe through wormholes, irrespective of the inflation rate.

For exchanging objects between the universes, a roulette wheel mechanism was used. At every iteration the universes are sorted and their inflation rates are calculated in order to select the white hole among them. The set of candidate solutions is represented as

U = [x_i^j], i = 1, 2, ..., n; j = 1, 2, ..., d, (3)

where d represents the number of variables and n is the number of universes (solutions). The white hole/black hole exchange of the jth parameter of the ith universe is modelled as

x_i^j = x_k^j if r1 < NI(U_i), and x_i^j remains unchanged if r1 >= NI(U_i), (4)

where x_i^j is the jth parameter of the ith universe, U_i is the ith universe, NI(U_i) is the normalized inflation rate of the ith universe, r1 is a random number between 0 and 1, and x_k^j is the jth parameter of the kth universe chosen by the roulette wheel selection.

To allow changes in each universe and to improve the inflation rates through wormholes, wormholes are assumed to exist between each universe and the best universe formed so far:

x_i^j = X_j + TDR x ((ub_j - lb_j) x r4 + lb_j), if r2 < WEP and r3 < 0.5,
x_i^j = X_j - TDR x ((ub_j - lb_j) x r4 + lb_j), if r2 < WEP and r3 >= 0.5,
x_i^j unchanged, if r2 >= WEP, (5)

where X_j is the jth parameter of the best universe formed so far, the travelling distance rate (TDR) and the wormhole existence probability (WEP) are coefficients, lb_j and ub_j are the lower and upper bounds of the jth variable, and r2, r3 and r4 are random numbers in [0, 1].

The two main coefficients, WEP and TDR, define the probability that wormholes exist in the universes and the travelling speed; the travelling distance rate is the factor by which an object can be moved around the best universe. The expression for WEP is

WEP = min + l x (max - min) / L, (6)

where the min value is taken as 0.2 and the max value as 1, l indicates the current iteration number and L is the maximum number of iterations. For TDR a similar mathematical expression is used:

TDR = 1 - l^(1/p) / L^(1/p), (7)

where p is taken as 6 and defines the exploitation accuracy over the iterations; the higher this value, the faster and more tightly the search concentrates around the best universe. For improving the fitting curve, the variation between the Weibull distribution and the actual wind speed distribution should be minimized. The objective function considered for minimization is

ε = [ (1/n) Σ_{i=1}^{n} ( f_real(v_i) - f_weibull(v_i) )^2 ]^(1/2), (8)

where f_real(v_i) gives the observed frequency of the ith wind speed class, f_weibull(v_i) gives the Weibull probability density function evaluated for that class, and the total number of wind speed classes is given by n.
In the present work, multiverse optimization was implemented with 50 universes. In order to avoid premature convergence at the initial stage, the travelling distance rate and the wormhole existence probability were varied over the iterations. The lower and upper bound values were taken to be 0.4 and 0.9. The main reason for choosing MVO is that it is a recent meta-heuristic optimization technique inspired by concepts from cosmology and has not yet been tested for determining the Weibull parameters for wind energy applications.
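A minimal Python sketch of this procedure is shown below. It assumes the hourly wind speeds have already been binned into a relative-frequency histogram, uses the RMSE objective of Equation (8), and simplifies the white-hole/black-hole bookkeeping of the original algorithm; the search bounds, bin width and the synthetic data at the end are illustrative assumptions rather than values taken from this study.

```python
import numpy as np

def weibull_pdf(v, k, c):
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def fitness(params, v_centers, f_real):
    """RMSE between the observed frequency histogram and the Weibull PDF (Eq. (8))."""
    k, c = params
    return np.sqrt(np.mean((f_real - weibull_pdf(v_centers, k, c)) ** 2))

def mvo_weibull(v_centers, f_real, n_universes=50, n_iter=200,
                lb=(0.4, 0.4), ub=(10.0, 15.0), wep_min=0.2, wep_max=1.0, p=6, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    U = rng.uniform(lb, ub, size=(n_universes, dim))   # each universe is a candidate (k, c)
    best_u, best_f = None, np.inf

    for l in range(1, n_iter + 1):
        fit = np.array([fitness(u, v_centers, f_real) for u in U])
        order = np.argsort(fit)
        U, fit = U[order], fit[order]
        if fit[0] < best_f:
            best_f, best_u = fit[0], U[0].copy()

        # normalized inflation rates (lower fitness -> higher inflation, minimization)
        ni = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        probs = (ni + 1e-12) / (ni + 1e-12).sum()
        wep = wep_min + l * (wep_max - wep_min) / n_iter        # Eq. (6)
        tdr = 1.0 - (l ** (1.0 / p)) / (n_iter ** (1.0 / p))    # Eq. (7)

        for i in range(1, n_universes):                 # keep the elite universe intact
            for j in range(dim):
                # white/black-hole exchange: copy a coordinate from a roulette-selected universe
                if rng.random() < ni[i]:
                    donor = rng.choice(n_universes, p=probs)
                    U[i, j] = U[donor, j]
                # wormhole movement around the best universe found so far
                if rng.random() < wep:
                    step = tdr * ((ub[j] - lb[j]) * rng.random() + lb[j])
                    U[i, j] = best_u[j] + step if rng.random() < 0.5 else best_u[j] - step
            U[i] = np.clip(U[i], lb, ub)
    return best_u, best_f

# Example with synthetic hourly wind speeds (assumed data, not the Tirumala record)
v = np.random.default_rng(1).weibull(1.6, 50000) * 5.0
counts, edges = np.histogram(v, bins=np.arange(0, 18, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(k_hat, c_hat), err = mvo_weibull(centers, counts)
print(f"k = {k_hat:.3f}, c = {c_hat:.3f} m/s, RMSE = {err:.4f}")
```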
Distribution Characteristics for Wind Data
The dynamic nature of wind can be studied with the application of the probability density function.
The probability density function f(v) gives an idea of how often a given wind velocity occurs, and the cumulative distribution F(v) gives the probability that the wind velocity is less than or equal to a given value. Several methods can be applied to analyze the wind data for estimating the wind potential [13,14] of a particular region. It was found from the literature that the Weibull and Rayleigh distributions [15] are the most preferred for determining the wind energy potential. In order to apply the Weibull and Rayleigh distributions, the shape and scale parameters need to be estimated. Many researchers have carried out studies evaluating the wind potential using different probability density functions, and the results have shown that the Weibull and Rayleigh distributions represent the wind distribution better [16][17][18][19][20][21][22][23][24].
Weibull Distribution
This distribution has been used for many years for fitting source data, i.e., actual wind data. The wind data characteristics in any region can be analyzed by using the probability distribution function [25,26].
Another approach is to use the Rayleigh distribution, which has also been employed as a statistical tool for analyzing wind data. To apply both the Weibull and Rayleigh distributions, the shape and scale parameters (k and c) are required [27]. The value of the shape parameter decides the type of distribution, i.e., whether it should be treated as Weibull or Rayleigh: when the shape parameter is less than 2 the general Weibull form is used, when it is exactly 2 the distribution is known as the Rayleigh distribution, and when it exceeds 3 the distribution approaches the Gaussian distribution [28].
The Weibull distribution function, or Weibull probability density function, is calculated as [29]

f(v) = (k/c) (v/c)^(k-1) exp[ -(v/c)^k ], (9)

where f(v) is the probability of the wind speed v, and k and c are the shape and scale parameters. k is dimensionless, whereas c has the same unit as the wind speed (m/s). On integrating the Weibull probability density, we obtain the Weibull cumulative distribution function [30], expressed as

F(v) = ∫_0^α f(v) dv = 1 - exp[ -(α/c)^k ], (10)

where α represents the highest wind speed under consideration and changes according to the site.
The expected value of the wind speed, otherwise known as the average wind velocity, is obtained from the Weibull parameters k and c as

v_m = c Γ(1 + 1/k), (11)

where Γ is the gamma function, defined as

Γ(x) = ∫_0^∞ t^(x-1) e^(-t) dt. (12)

The standard deviation for the Weibull distribution is given by

σ = c [ Γ(1 + 2/k) - Γ^2(1 + 1/k) ]^(1/2). (13)

After calculating the values of σ and v_m, the shape and scale parameters can be evaluated as follows:

k = (σ / v_m)^(-1.086), (14)

c = v_m / Γ(1 + 1/k). (15)

From expression (14) k can be found and, once k is calculated, c is determined from expression (15). The Weibull probability density function is then evaluated using Equation (9).
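The short Python sketch below implements Equations (9), (10), (14) and (15); the synthetic sample at the end is only an illustration and is not the measured Tirumala data.

```python
import numpy as np
from math import gamma

def weibull_pdf(v, k, c):
    """Weibull probability density, Eq. (9)."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def weibull_cdf(v, k, c):
    """Cumulative Weibull distribution, Eq. (10)."""
    return 1.0 - np.exp(-(v / c) ** k)

def empirical_weibull_params(v):
    """Estimate k and c from the sample mean and standard deviation (Eqs. (14)-(15))."""
    v = np.asarray(v, float)
    v_m, sigma = v.mean(), v.std(ddof=1)
    k = (sigma / v_m) ** (-1.086)
    c = v_m / gamma(1.0 + 1.0 / k)
    return k, c

# Illustrative use on synthetic wind speeds
v = np.random.default_rng(2).weibull(1.5, 10000) * 5.5
k, c = empirical_weibull_params(v)
print(f"k = {k:.2f}, c = {c:.2f} m/s, P(v <= 10 m/s) = {weibull_cdf(10, k, c):.3f}")
```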
Rayleigh Distribution
It has already been mentioned that when the shape factor k is 2, the distribution is known as the Rayleigh distribution [30]. After setting k = 2 in Equation (9), the Rayleigh probability density function is given by

f(v) = (2v/α^2) exp[ -(v/α)^2 ], (16)

where α is the scale parameter, with units of m/s, and its value follows from the mean wind speed as

α = 2 v_m / sqrt(π). (17)

The Rayleigh cumulative distribution F(v) is calculated from the expression

F(v) = 1 - exp[ -(v/α)^2 ]. (18)

The most probable wind speed (V_MP) and the wind speed carrying the maximum energy (V_MaxE) can be estimated from the equations

V_MP = c ((k - 1)/k)^(1/k), (19)

V_MaxE = c ((k + 2)/k)^(1/k). (20)

There are many methods to determine the shape and scale parameters; a few of them are the maximum likelihood method, the method of moments, least squares and the graphical method. All the software and tools used for the statistical analysis are provided in Appendix A.
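A minimal sketch of Equations (19) and (20) follows; the k and c values in the example call are assumed purely for illustration.

```python
def most_probable_speed(k, c):
    """Most probable wind speed V_MP, Eq. (19)."""
    return c * ((k - 1.0) / k) ** (1.0 / k)

def max_energy_speed(k, c):
    """Wind speed carrying the maximum energy V_MaxE, Eq. (20)."""
    return c * ((k + 2.0) / k) ** (1.0 / k)

# Example with assumed parameters (k = 1.6, c = 5.8 m/s)
print(most_probable_speed(1.6, 5.8), max_energy_speed(1.6, 5.8))
```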
Results and Discussion
The study was carried out on the data collected from the Tirumala region over about six years, from 2012 to 2017. The data consisted of wind speed and wind direction collected at two hub heights of 10 m and 65 m. The monthly mean wind speeds at 10 m and 65 m are shown in Figure 1. The shape and scale parameters for all the months have been calculated using MVO. Table 1 presents the comparison of different numerical methods for estimating the Weibull parameters. For verification of the results, the MVO algorithm was compared with the moment method [14], the empirical method [14], the maximum likelihood method [20], the equivalent energy method [14] and the energy pattern factor method [20]. Some meta-heuristic algorithms such as PSO [20], MFO and GWO were also applied and are shown in Table 1. The k and c parameters for the Weibull distribution were found using the moment method, which solves Equations (21) and (22) simultaneously for k and c:

v_m = c Γ(1 + 1/k), (21)

σ = c [ Γ(1 + 2/k) - Γ^2(1 + 1/k) ]^(1/2). (22)

A special case of the moment method is the empirical method, determined using Equations (23) and (24) [31]:

k = (σ / v_m)^(-1.086), (23)

c = v_m / Γ(1 + 1/k). (24)

The maximum likelihood method needs extensive numerical iterations to find the k and c values, as shown in Equations (25) and (26) [31]:

k = [ ( Σ_{i=1}^{n} v_i^k ln(v_i) ) / ( Σ_{i=1}^{n} v_i^k ) - (1/n) Σ_{i=1}^{n} ln(v_i) ]^(-1), (25)

c = [ (1/n) Σ_{i=1}^{n} v_i^k ]^(1/k). (26)

The equivalent energy method [20] aims to determine the Weibull parameters by adjusting the fitted distribution to the wind data; its name is derived from the equivalence between the Weibull energy density curve and the energy density of the observed data. From Equations (27) and (28) of that method, the k and c values were estimated.
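As a rough illustration of the iterative maximum-likelihood estimation in Equations (25) and (26) above, the sketch below uses an assumed initial guess, tolerance and iteration cap that are not taken from the paper.

```python
import numpy as np

def mle_weibull_params(v, tol=1e-6, max_iter=200):
    """Iterative maximum-likelihood estimation of the Weibull k and c (Eqs. (25)-(26))."""
    v = np.asarray(v, float)
    v = v[v > 0]                      # the logarithm is undefined at zero wind speed
    log_v = np.log(v)
    k = 2.0                           # assumed initial guess
    for _ in range(max_iter):
        vk = v ** k
        k_new = 1.0 / ((vk * log_v).sum() / vk.sum() - log_v.mean())
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    c = ((v ** k).mean()) ** (1.0 / k)
    return k, c
```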
The energy pattern factor method [24] is related to the average wind speed and is defined by Equations (29)-(31):

E_pf = ( (1/n) Σ_{i=1}^{n} v_i^3 ) / ( (1/n) Σ_{i=1}^{n} v_i )^3, (29)

k = 1 + 3.69 / E_pf^2, (30)

c = v_m / Γ(1 + 1/k). (31)

The efficiency of the meta-heuristic and other methods was assessed using the correlation coefficient (r) and the root mean square error (RMSE):

RMSE = [ (1/n) Σ_{i=1}^{n} ( y_i - x_i )^2 ]^(1/2), (32)

where y_i is the observed frequency in the ith wind speed class and x_i is the corresponding value from the fitted Weibull distribution; r is the correlation between the observed and fitted frequencies (Equation (33)).

As shown in Table 1, MVO was compared with the other numerical methods and meta-heuristic techniques for estimating the Weibull parameters, and the corresponding r and RMSE values were obtained. Among these methods the energy pattern method had the lowest k value of 1.5142, with a c value of 5.305. The other numerical methods generated k values that do not fit the real wind pattern as well, and their errors were also larger. When the RMSE value is low, the real wind pattern fits the Weibull density function well for the corresponding k and c values. In Table 1, after performing the statistical analysis, it was found that MVO had the least error of about 0.0052, i.e., less than 1%. The correlation value r approached 100%, i.e., 0.995, for the study case. These values obtained from the proposed method were better than those of the other numerical methods and meta-heuristic techniques. Comparing GWO, PSO, MFO and MVO, it was evident that MVO performed better than the other methods. The convergence curves of the proposed method and the other methods are compared in Figure 3; MVO converged with a smaller percentage error and in the minimum number of iterations.

Table 2 shows the monthly k and c parameters, the mean wind speed, the standard deviation and the variance for the wind speed data at both the 10 m and 65 m heights. After calculating the k and c parameters for the respective months, the most probable wind speed was calculated using Equation (19) and the wind speed carrying the maximum energy was obtained from Equation (20); these values are also presented in Table 2. From Table 2 it can be stated that the maximum average wind speed was about 5.12 m/s, observed in the month of December at 10 m, and about 6.621 m/s in the same month at the 65 m height. For the complete dataset, the shape parameter varies from 1.27 at the 65 m hub height to 1.68 at the 10 m hub height; the shape and scale parameters for the entire dataset as a whole were also determined.
The Weibull probability density function was used to find the frequency of wind speed at different levels. After calculating the shape and scale parameters with the MVO method, the Weibull distribution and the cumulative Weibull distribution were drawn. The graphs were obtained for all the years from 2012 to 2017 and analyzed. From this distribution it is easy to identify the wind speed magnitudes that are most prevalent in this region. The frequency of occurrence of wind slowly improved from 2012 to 2017, as can be clearly seen from the frequency of wind speed. From Figure 5a, the most prevalent wind speed was 4 m/s, occurring monthly with some percentage variation. This trend continued up to wind speeds of 12 m/s to 15 m/s; beyond that the frequency fell towards zero, meaning that there was little chance of wind speeds beyond 16 m/s occurring. Such speeds may still occur, but they do not influence the overall system performance.

The Weibull cumulative distribution is shown in Figure 5b, which is the sum of the individual Weibull distributions reaching 100%. The common point where all the curves meet is at 16 m/s. In Figure 5b, almost all of the wind is accounted for between 8 m/s and 16 m/s, after which the curve reaches 100%; this means that virtually all of the wind occurs below 16 m/s and there is little frequency of occurrence beyond that.

The Rayleigh frequency distribution and the Rayleigh cumulative frequency distribution are shown in Figure 6a,b. This part of the study revealed that the maximum probability of wind fell in the range of 3 m/s to 6 m/s. The Rayleigh cumulative distribution also showed that 50% of the wind speeds lay between 3 m/s and 6 m/s. The frequency distribution was low for wind speeds above 12 m/s, so wind velocities above 12 m/s have a low probability in this region. From Figure 6a,b the yearly frequency distribution varied from 2012 to 2017, and the trend changed with the geographical parameters. The average wind speed lay between 4 m/s and 6.8 m/s, so it can be concluded that there is sufficient wind potential in the region for installing small-capacity wind turbines for extracting power from a wind energy conversion system. It can also be observed that the wind pattern varied as time progressed, reflecting the effect of climatic conditions on the wind potential.

The actual wind speeds at the 10 m and 65 m hub heights were then compared with the Weibull and Rayleigh distributions, as shown in Figure 7. It can be seen that most of the wind speeds occurred below 10 m/s. The Weibull cumulative distribution revealed that the maximum wind speed falls in the range of 2 m/s to 12 m/s. Comparing these two distributions with the real wind data, they nearly coincide, and hence it is concluded that the Weibull parameters fit the actual wind pattern well. In Figure 7, the brown bars indicate that the cumulative distribution values have reached their maximum and settled at 100%. Table 3 shows the distribution of wind speed in terms of the Weibull and Rayleigh distributions. In Table 3 the Weibull and Rayleigh frequency distributions were scaled to their maximum value, and the cumulative distributions were scaled to 10 for better readability. From this analysis, wind speeds of less than 9 m/s were more prevalent at the 10 m height and speeds of 3 m/s to 11 m/s at the 65 m height. The wind speed distribution increases with height, so the taller the turbine, the larger the power generation. It can also be concluded that the Weibull and Rayleigh distributions are very similar, differing only through a small change in the shape parameter value. Both distributions reach the value of 10 approximately once the wind velocity crosses 17 m/s. As can be seen from Table 3, the real values and the Weibull distribution values are well fitted and close to each other. Looking at the real wind speed values at the 10 m hub height, all of the values fall within a range of 12 m/s, similar to the Weibull distribution; the same can be seen at 65 m, where the wind data range extends to 16 m/s owing to the increased hub height. Therefore, the Weibull parameters obtained from MVO are well suited to understanding the wind pattern and the wind potential at any location, provided good wind data are available.
The wind power densities at 10 m and 65 m are shown in Figure 8a,b. From Figure 8a it can be inferred that the wind power density slowly increases from 4 m/s to 8 m/s, reaches its maximum value, and then decreases back to the reference level from about 18 m/s. Compared with the 10 m height, the wind power density has a much higher magnitude at 65 m. At 10 m, the power density tends towards zero from 18 m/s, but at 65 m it retains a finite value up to 24 m/s, so the wind power density can be improved by raising the wind turbine hub further above ground level. It can also be stated that, owing to an increase in the clearance area obtained by removing obstacles and landslips, the wind circulation tended to improve from year to year; the wind power density improved from 2012 to 2017 at both 10 m and 65 m due to this improvement in clearance.
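The expression used to compute the power density curves in Figure 8 is not printed in the text above; a common Weibull-based estimate of the mean wind power density is P/A = 0.5 ρ c^3 Γ(1 + 3/k), and the short sketch below uses that standard relation with an assumed air density, purely as an illustration.

```python
from math import gamma

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) from Weibull k and c, assuming air density rho (kg/m^3)."""
    return 0.5 * rho * c ** 3 * gamma(1.0 + 3.0 / k)

# Example with assumed parameters (k = 1.6, c = 5.8 m/s)
print(f"{mean_power_density(1.6, 5.8):.1f} W/m^2")
```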
In order to quantify how well the Weibull and Rayleigh distributions represent the actual data, a few statistical analyses were performed. The correlation coefficient (R^2) [38] indicates how closely the data fit the regression line and can be written as

R^2 = [ Σ_{i=1}^{n} (y_i - z_i)^2 - Σ_{i=1}^{n} (x_i - y_i)^2 ] / Σ_{i=1}^{n} (y_i - z_i)^2, (34)

where y_i is the observed wind frequency in the ith bin, x_i gives the value from the Weibull distribution, z_i is the mean value of y_i and x_i, and n is the number of bins.
The chi-squared test (χ^2) [4] was used to determine whether there is any difference between the expected and observed frequencies. In this statistical hypothesis test, the test statistic follows a chi-squared distribution when the null hypothesis is true. The chi-squared statistic is shown below in Equation (35):

χ^2 = Σ_{i=1}^{n} (y_i - x_i)^2 / x_i. (35)

The probability distribution is said to be accurate when R^2 is large with the least χ^2. The Kolmogorov test [4] is a nonparametric test that compares the sample with a reference probability distribution. For a given cumulative distribution function F(v), the Kolmogorov statistic can be expressed as

Q = max | F(v_i) - G(v_i) |, (36)

where F(v_i) is the empirical cumulative distribution of the observed data and G(v_i) is the fitted cumulative distribution. Another parameter, the F-test [33], is used to compare the models fitted to the dataset and to identify which one fits best. The generalized F-test expression can be written as given below in Equation (38):

F = [ Σ_{i=1}^{K} n_i (Y̅_i - Y̅)^2 / (K - 1) ] / [ Σ_{i=1}^{K} Σ_{j=1}^{n_i} (Y_ij - Y̅_i)^2 / (N - K) ], (38)

where Y̅_i and Y̅ are the ith group mean and the overall mean, K represents the number of groups, Y_ij is the jth observation from the ith of the K groups, n_i is the size of the ith group, and N is the overall sample size.
In order to express the accuracy as a percentage error, the mean absolute percentage error (MAPE) [33] is used; its mathematical expression is

MAPE = (100/n) Σ_{t=1}^{n} | (A_t - F_t) / A_t |. (39)

If the accuracy has to be expressed in the same units as the data, the mean absolute deviation (MAD) [29] is preferred:

MAD = (1/n) Σ_{t=1}^{n} | A_t - F_t |. (40)

For fitting time-series values and measuring accuracy, the mean squared deviation (MSD) [32] is preferred, and it is given by

MSD = (1/n) Σ_{t=1}^{n} (A_t - F_t)^2. (41)

In the above expressions, i.e., Equations (39)-(41), A_t and F_t are the actual and Weibull-fitted values and n is the total number of samples.
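A compact Python sketch of the goodness-of-fit measures discussed in this section is given below; where the paper's exact formula is not reproduced above, the implementation follows the standard textbook definition, so it should be read as an illustration rather than the authors' exact code.

```python
import numpy as np

def fit_statistics(observed, predicted):
    """Goodness-of-fit measures comparing observed and Weibull-predicted frequencies."""
    y, x = np.asarray(observed, float), np.asarray(predicted, float)
    z = y.mean()
    r2 = (np.sum((y - z) ** 2) - np.sum((x - y) ** 2)) / np.sum((y - z) ** 2)   # Eq. (34)
    chi2 = np.nansum((y - x) ** 2 / np.where(x > 0, x, np.nan))                 # Eq. (35)
    rmse = np.sqrt(np.mean((y - x) ** 2))                                       # Eq. (32)
    mape = 100.0 * np.nanmean(np.abs((y - x) / np.where(y != 0, y, np.nan)))    # Eq. (39)
    mad = np.mean(np.abs(y - x))                                                # Eq. (40)
    msd = np.mean((y - x) ** 2)                                                 # Eq. (41)
    return {"R2": r2, "chi2": chi2, "RMSE": rmse, "MAPE": mape, "MAD": mad, "MSD": msd}
```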
The t-statistic [3] is a measure that gives the ratio of an estimated value to its standard error. The data taken for the study were subjected to all of these statistical analyses to determine the various parameters, which are tabulated in Table 4. A model fits the data well when the differences between the observed values and the model's predicted values are very small. In this case the R-squared value measures the closeness of the data to the fitted regression; it also gives the coefficient of determination, so the higher the R-squared value, the better the model fits the data. One limitation of R-squared is that it does not indicate whether the model is adequate, and a good model may sometimes have a low R-squared value. In this study, the R-squared value lay between 0.90455 and 0.9898; therefore, the model fits the data well, with R-squared values approaching 1, i.e., 100%.
Sum of squares (SS) deviations represents the sum of squared difference from the mean. In Table 4, SS was maximum in the month of August and minimum in the case of February. As the SS was low then we could say that, the values were closer to the mean, otherwise they were farther away from the mean so, the February month was having a low SS, which reflects that these values were closer to the mean. Another statistic applied was the F-test, when the calculated F value is high then we have to reject the null hypothesis. It was used to compare the models that were fitted to a dataset and to identify the model that best fitted the population from the data sample.
The MAPE is a measure of accuracy in predicting the values when we have a set of measured values. The percentage error measured how close the data points were present with respect to the predicted values. In Table 4, the percentage error was maximum for the month of December with 25.471% and least in the month of August with 6.4362%. As the percentage errors were less, it could be stated that the Weibull distribution nearly fitted the data. The mean absolute deviation (MAD) will measure the average distance from each value to the mean. It gives information on how the values are spread out in the sample space of the dataset. On observing the MAD values from Table 1, it is clear that all were less than 5 so, the Weibull distribution and the actual wind data were closely bound to each other.
The mean squared deviation (MSD) is another statistic that indicates how well the estimates fit the quantities they are supposed to estimate. From Table 4, the MSD values were 2.1627 for August, followed by 2.671 for May and 3.371 for April, so for these months the estimated values were close to the measured values. The maximum MSD values, about 13.940 for December and 9.910 for January, show that the estimates for these months were less accurate.
The chi-squared, Kolmogorov and t-tests were also performed to verify how well the Weibull distribution fitted the actual measured wind data. The chi-squared values remained between 0.018 and 0.073, quantifying the discrepancy between the observed and expected values. The Kolmogorov statistic ranged from 0.085 to 0.140, obtained by comparing the sample with the reference probability distribution. The t-test was used to check whether there was any difference between the means of two groups of data.
The overall wind speeds and wind directions are presented in Table 5 in bin-sector form, and their distribution is shown in Figure 9. Table 5 shows that the wind direction was most dominant in the sectors 270-280° and 120-130°, with frequencies of 8.77% and 6.34%, respectively, and that the wind speed in these sectors ranged from 2 m/s to 10 m/s. Low wind speeds of 0-4 m/s occurred in the 170-180° sector. The wind-direction study showed that the majority of the wind flowed in the 260-280° sector, corresponding to SSW and NNW. The wind speed and direction graphs for the months from January to June are shown in Figure 10.
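A hedged sketch of how such a bin-sector (wind-rose) frequency table could be produced in Python with pandas; the column names, simulated records and bin edges are assumptions, not the paper's actual processing script:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly records: wind speed (m/s) and direction (degrees).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "speed": rng.weibull(2.0, 5000) * 5.0,
    "direction": rng.uniform(0, 360, 5000),
})

# 36 direction sectors of 10 degrees and 2 m/s speed bins, as in Table 5.
dir_bins = np.arange(0, 370, 10)
speed_bins = np.arange(0, 24, 2)
table = pd.crosstab(
    pd.cut(df["direction"], dir_bins, right=False),
    pd.cut(df["speed"], speed_bins, right=False),
    normalize="all",
) * 100  # percentage frequency per direction-speed bin

# Frequency of each 10-degree sector, summed over all speed bins.
sector_freq = table.sum(axis=1)
print(sector_freq.sort_values(ascending=False).head())
```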
The wind directions from 0-110° had wind speeds below 14 m/s, and beyond this speed there was no occurrence of wind from those directions. From 110-200° the available wind speed was only below 12 m/s. There was large wind potential in the direction of 210° to 290°, with a maximum wind speed reaching 22 m/s, so installing wind turbines facing this sector could increase power generation. In future, short-term wind forecasting [34,35] can be performed by using an artificial neural network approach in combination with the results obtained from the Weibull distribution and parameter estimation through nature-inspired algorithms.
Comparison of Wind Farm Locations in India
The installed wind capacity by state as of 31 March 2018 in India is shown in Figure 11. Tamil Nadu is the leading state in wind power generation, contributing around 29% of India's total. The importance of renewable energy was recognised early, and a dedicated agency was established in 1985. The capacity of the Muppandal wind farm is 1500 MW, and the total installed capacity in Tamil Nadu is 7633 MW. Gujarat is focusing on enhancing renewable energy, which has led to a sharp rise in wind power capacity; based on government data, its wind generation capacity has increased tenfold in just six years. Next is Maharashtra, with an installed capacity of 4655.25 MW; all major wind-turbine manufacturers, including Vestas, Gamesa and Regen, have a presence in Maharashtra. In Kerala about 55 MW is installed, and the first wind farm was commissioned in 1997 at Kanjikode in the Palakkad district. Offshore wind farm plans have been pursued since 2010, and a 100 MW plant located off the Gujarat coast began in 2014. In order to identify potential sites, GWEC developed R&D activities.
Conclusions
The study analysed wind data for a period of six years, from 2012 to 2017, in the Tirumala region of India. Wind speed, wind direction, their frequency distributions, the mean wind speed and the Weibull parameters k and c were calculated, and the monthly wind direction was studied with wind rose graphs. The complete statistical analysis led to the following conclusions: The shape and scale parameters (k and c) were calculated for the dataset and the observed data were fitted to the Weibull distribution.
The highest mean wind speed was calculated as 6.621 m/s at 65 m and 5.12 m/s at 10 m for the month of December.
Annual mean wind velocity was calculated as 3.31 m/s and 4.36 m/s at 10 m and 65 m heights, respectively.
Wind speed with most energy (VmaxE) was calculated as 11.43 m/s and 15.16 m/s at 10 m and 65 m, respectively. The most probable wind speed was calculated as 2.45 m/s and 4.02 m/s at 10 m and 65 m, respectively.
From the cumulative frequency distribution it was found that wind speeds of around 4 m/s were the most probable in the selected region.
The proposed method has a few limitations, such as computational complexity, and its performance depends on the number of iterations, the number of universes and the universe sorting mechanism. In addition, the estimated parameters suffer from inconsistency when the quality of the measured wind data is poor, so the measuring instruments must be properly calibrated before the data are analysed. The proposed MVO technique also depends on the geographical location and on the maximum and minimum wind speed limits. In future work on wind speed forecasting, the MVO technique should first be used to estimate the Weibull parameters from the wind speed data and to generate random hourly wind speeds; these randomly generated wind data can then be fed to an ANN and trained against the actual wind speeds until the forecast errors are minimised.
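A short illustrative sketch (not the authors' implementation) of the synthetic-wind-generation step described above: random hourly wind speeds are drawn from already-estimated Weibull parameters, which could then serve as ANN training input. The k and c values here are placeholders.

```python
import numpy as np

def sample_hourly_wind(k, c, hours, seed=0):
    """Draw synthetic hourly wind speeds (m/s) from a Weibull(k, c) distribution."""
    rng = np.random.default_rng(seed)
    # numpy's weibull() uses scale 1, so multiply by the scale parameter c.
    return c * rng.weibull(k, size=hours)

# Placeholder parameters, e.g. as estimated by MVO for one month at 65 m.
synthetic = sample_hourly_wind(k=2.1, c=6.6, hours=24 * 30)
print(synthetic.mean(), synthetic.max())
```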
Author Contributions: All authors were involved in developing the concept and in ensuring a technically sound, error-free outcome for the investigation.
|
2019-06-26T14:23:34.309Z
|
2019-06-05T00:00:00.000
|
{
"year": 2019,
"sha1": "189b077b83a19f70cb3da61f2cf40ea76eab414b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/11/2158/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7a3e7e4d4d2c7bd60de56067a78c49b2991304cb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
210807635
|
pes2o/s2orc
|
v3-fos-license
|
Study on reflection characteristics of plasma and metallic periodic structure composite frequency selective device
Plasma technology is widely used in the electromagnetic field. In this work, a novel composite frequency selection (CFS) device composed of plasma and metallic periodic structure is proposed. This paper demonstrates the reflection coefficient of the proposed CFS at 12-18 GHz. The dispersion of absorption effect in the frequency domain is also mentioned. A simulation result of the electric field distribution of CFS device is presented to give a tentative explanation.
Introduction
The application of plasma technology in the electromagnetic field is expanding, covering frequency filters, frequency converters [1], plasma antennas [2][3][4][5], etc. A plasma periodic structure is composed of plasma together with another medium or vacuum. A few plasma periodic structure applications are known, for instance the plasma lens [6] and the plasma antenna [7]. However, one of the problems limiting the application of plasma periodic structures is the excessive transmission through the plasma: when the frequency of the electromagnetic wave is higher than the plasma frequency, the wave can propagate in the plasma. Therefore, for the higher frequency range of the incoming electromagnetic field (EMF), the plasma periodic structure cannot completely prevent electromagnetic waves from transmitting unless a higher electron density is sustained, at the cost of higher energy consumption and even unacceptably high temperature. Consequently, the basic idea of this paper is to incorporate the cut-off characteristics of a Frequency Selective Surface (FSS) with metallic periodic structures, so as to improve the stopband behaviour of the combined system with more limited side-effects while retaining the active reconfigurability of the plasma periodic structure.
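To make the cut-off condition quoted above concrete, the short Python sketch below evaluates the electron plasma frequency for a given electron density; the density value is a placeholder chosen for illustration, not a measurement from this work. Waves below this frequency are reflected by a uniform collisionless plasma, while higher frequencies pass through it.

```python
import numpy as np

E_CHARGE = 1.602e-19     # electron charge, C
E_MASS = 9.109e-31       # electron mass, kg
EPS0 = 8.854e-12         # vacuum permittivity, F/m

def plasma_frequency_hz(n_e):
    """Electron plasma frequency f_p = (1/2pi) * sqrt(n_e e^2 / (eps0 m_e))."""
    omega_p = np.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))
    return omega_p / (2 * np.pi)

# Placeholder electron density of 1e17 m^-3 (an assumed order of magnitude
# typical of discharge lamps; the actual value depends on the source).
print(f"f_p = {plasma_frequency_hz(1e17) / 1e9:.2f} GHz")
```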
The Frequency Selective Surface (FSS) is generally composed of periodic conductive patches or aperture elements that reflect, transmit or absorb electromagnetic waves. FSSs are widely used in the electromagnetic field, for example in radomes [8][9][10][11], interference suppression in WLAN communication [12], electromagnetic shielding rooms [13], information security [14], etc. In this work, electromagnetic waves that pass through the FSS interact with the plasma, which may change the frequency selection characteristics of the device. The passband and stopband of the FSS can be tuned by changing the geometry of the periodic metallic pads on the FSS. The FSS can therefore absorb electromagnetic waves in specific frequency bands, making it possible to mitigate, to some extent, the excessive transmission of the plasma periodic structure. To our best knowledge, there are few reports on combining a plasma periodic structure with an FSS, which makes the combination worth exploring. In order to study the coupling characteristics of the plasma periodic structure and the metallic periodic structure, we performed experiments on a three-layer composite frequency selection (CFS) device composed of an FSS, plasma and a metallic plate. A reflectivity experiment was performed in a microwave anechoic chamber; the background reflectivity is 20 to 30 dB lower than the averaged reflectivity of the device under test (DUT). The experimental results show that the coupling effect of the plasma and the FSS in the three-layer structure causes a significant change in the reflection characteristics of the DUT: it produces a -15 dB absorption peak at 17.2 GHz, which is not available with the plasma or the metallic periodic structure alone. A dispersion of the absorption effect at 8.33 GHz is also found. To study the mechanism of the coupling of plasma with the FSS, a model consistent with the DUT is built, which is expected to explain the relationship between the electric field distribution and the resonance characteristics.
Experimental
The three-layer CFS device is shown in Figure 1(b), comprising the FSS, the plasma, and a metallic plane. The FSS is made of a printed circuit board (PCB) with a size of 0.4 mm × 250 mm × 170 mm. Each periodic unit contains four metallic pads, and the geometry of a pad is shown in Figure 1(a): it consists of a trapezoid merged with four crescent shapes at each boundary. The periodic metallic pads are used to generate the interference wave that couples with the backscattered EMF from the structured plasma reflector below the FSS. The basic consideration behind the FSS unit structure is to form a typical cross shape with crescent borders to enhance the electric field convergence in the exterior region of every pad. In the following, the electric field distribution on the surface of the FSS unit and in the plasma region is investigated based on the Drude description of the plasma. Each crescent shape is determined by two arcs, whose distances from the centre to the trapezoidal side are given in Figure 1(a). A periodic unit consists of these pads in a definite orientation, as shown in Figure 1. The reflectivity measurement system is shown in Figure 2. The reflectivity experiment is conducted in a microwave anechoic chamber. Two standard-gain horn antennas are connected to the two ports of a network analyzer; one horn emits electromagnetic waves, which are reflected by the DUT and received by the other horn, and the reflectivity of the DUT is obtained from the network analyzer. The DUT is placed on a well-levelled platform, the distance between the platform and the horns is 1 m, and the horns are aligned with the centre of the DUT. The DUT is shown in Figure 2. Since the lamps are longer than the FSS, the excess parts are covered with absorbing material; the boundary of the absorbing material does not conflict with the boundary of the FSS unit, so its periodicity is preserved. The reflectivity of the anechoic chamber background and of a metallic plate at the same location as the DUT is also measured, so the experimental results shown below are values relative to the metal plate. The reflectivity of the plasma periodic structure in the 12-18 GHz band is measured with an array of 6 lamps placed on a metallic plate (equal in size to the FSS); the results are shown as the black curve of Figure 3 (right). It can be seen that the DUT combining the lamps and the metallic plate shows a strong reflectivity, almost equivalent to a metal plate. We believe this is because the transmittance of electromagnetic waves through the plasma is high, so the waves are only weakly affected.
The reflection of the three-layer CFS device is also measured. Figure 3 shows the reflectivity curves of the DUT when the lamps are turned on and off (red and blue curves). When there is no plasma, the DUT shows an absorption peak at 17.2 GHz, but with a shallow depth. When the lamps are turned on, the absorption peak depth is significantly increased, with the deepest point reaching -21 dB. No similar resonance occurred when the plasma and the FSS were measured separately, so we believe that the coupling between the plasma and the FSS causes the deepening of the absorption peak. It appears that the plasma enhances the propagation of electromagnetic waves between the FSS and the metallic plate, so more of the electromagnetic wave's energy is dissipated in the FSS and the plasma, affecting the reflection characteristics of the DUT.
A similar experiment at 8-12 GHz is also performed. In Figure 3 (left), the black line shows the reflection curve of the two-layer structure of plasma and metallic plate, which has an absorption peak of about -18 dB at 8.33 GHz; this is the behaviour of the plasma periodic structure alone. For the three-layer DUT, the absorption peak at this frequency was reduced to -15 dB, but its bandwidth increased. In this band the DUT did not show strong frequency selection characteristics, but rather a dispersion of the absorption effect.
As can be seen in Figure 3, the curves in the Ku band and the X band are not connected. This may be due to the complexity of the time domain features near the peak with a different performance of the two
Discussion
Two phenomena were found in the reflectivity experiment shown in Figure 3. In the Ku band, the plasma periodic structure alone does not show a frequency selective characteristic, but the behaviour changes significantly when the FSS is added and a deep absorption peak appears at 17.2 GHz. In the X band, the plasma periodic structure shows a strong frequency selective characteristic, but it presents a dispersion of the absorption peak when covered with the FSS.
To better understand the experimental phenomena, a simulation model was built whose structure is the same as that of the actual DUT. The characteristics of the plasma have a great influence on the propagation of electromagnetic waves inside it. The physical parameters of the plasma may vary widely, which makes it difficult to analyse the propagation of electromagnetic waves in the plasma precisely. In addition, the plasma shows considerable frequency dispersion, so electromagnetic waves of different frequencies propagate differently, and the plasma is generally non-uniform, which also affects the propagation of electromagnetic waves in it. Although it is difficult to characterize the plasma with a completely precise numerical model, this approach is commonly used for qualitative mechanism analysis in the field. To simplify the calculation, the plasma is assumed to be uniform and is characterized by the Drude model. The plasma parameters were also diagnosed with a microwave diagnostic method, giving an electron density from 0.9E+17 to 7.1E+17, a plasma frequency from 1.6E+10 to 4.8E+10, and a collision frequency from 2E+10 to 8E+10. The Drude model is used to describe the interaction between the EMF and the plasma. According to the diagnosis, the plasma frequency is set to 2E+10 and the collision frequency to 5E+10 in the simulation. The diameter of the plasma cylinder is 25.72 mm. Figure 4 shows the electric field distribution of the FSS at 16 GHz, 17 GHz and 18 GHz in the CFS device. At 16 GHz and 18 GHz, the electric field on the FSS is stronger on average, and the electric field intensity at the junctions of the lumped elements within the same periodic cell is high. At the absorption peak of 17 GHz, the electric field on the FSS is weak, but the electric field intensity between the periodic units is higher than that in the interior region of each periodic unit. According to transmission line theory, part of the role of the FSS can be understood as the behaviour of an impedance-matching network. The role of the plasma can be interpreted as a parallel connection of a capacitance and a resistance, whose values are related to the plasma electron density. The input impedance changes significantly compared with the case of the FSS alone, so the impedance-matching characteristics may change, resulting in an absorption peak at a frequency where the impedance is well matched. At 17 GHz, the scattered field of the plasma coupled with the electromagnetic waves improves the impedance-matching characteristics of the FSS, resulting in an absorption peak.
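As an illustrative sketch of the Drude description used in the simulation (assuming the quoted plasma and collision frequencies are angular frequencies in rad/s, which is our reading of the units), the following Python snippet evaluates the complex relative permittivity of a uniform collisional plasma across the 12-18 GHz band:

```python
import numpy as np

def drude_epsilon(f_hz, omega_p, nu_c):
    """Complex relative permittivity of a collisional plasma (Drude model),
    eps_r = 1 - omega_p^2 / (omega * (omega - j*nu_c)),
    assuming an exp(+j*omega*t) time convention."""
    omega = 2 * np.pi * f_hz
    return 1.0 - omega_p**2 / (omega * (omega - 1j * nu_c))

freqs = np.linspace(12e9, 18e9, 7)                     # Ku-band sweep of the experiment
eps = drude_epsilon(freqs, omega_p=2e10, nu_c=5e10)    # values set in the simulation
for f, e in zip(freqs, eps):
    print(f"{f / 1e9:5.1f} GHz  eps_r = {e.real:6.3f} {e.imag:+6.3f}j")
```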
For a small FSS, the boundary effect caused by the truncation of the periodic structure in the real device is not negligible. It can be seen that, at all three frequencies, the electric field strength at the edges of the FSS is greater than that at its centre. The scattering at the FSS edges is mainly caused by the radiation generated by the surface current: when the current reaches the edge, mismatched radiation occurs, which manifests as the boundary effect of the FSS [15]. In addition, it is worth noting that at some frequencies the electric field distributions at the upper-lower edges and the left-right edges of the FSS are not the same, which may be related to the arrangement of the lumped elements.

Figure 5. Electric field distribution of the FSS in a CFS device without plasma at 17 GHz.
The case with the lamps turned off was also calculated, as shown in Figure 5. The results at 16 GHz and 18 GHz are not shown because they are similar to those with the lamps turned on. At 17 GHz, the intensity of the electric field on the FSS without plasma is greater than in the case with plasma, which is consistent with the experiment. By comparing the cases with and without plasma, we found that the plasma can significantly reduce the electric field intensity at the centre of the FSS but has a weaker effect at the boundaries of the DUT. This may be explained by the travelling wave and the boundary diffraction being little affected by the plasma.
Conclusions
In summary, a novel composite frequency selection device combining plasma and a metallic periodic structure is proposed. The reflectivity experiment on the CFS device shows a -21 dB absorption peak at 17.2 GHz, which is not available with the plasma or the metallic periodic structure alone. When the plasma itself has an absorption peak, the FSS causes a dispersion of the absorption effect in the frequency domain. The distribution of the electric field on the FSS in the CFS device is also revealed by numerical calculations: at the absorption peak of 17 GHz, where the impedance is well matched, the electric field on the FSS is weak, while it is strong at 16 GHz and 18 GHz. The boundary effect also significantly affects the electric field distribution of the FSS. The application of plasma in the electromagnetic field is increasingly valued [16], but its cut-off characteristics remain a problem. This paper presents a new possibility for improving the cut-off characteristics of plasma-based periodic structures, and it is also possible to dynamically change the resonance characteristics of the CFS device. The relationship between the electric field distribution in the different components of the CFS and their reflectivity is also discussed. In future work, we will optimize the design through transmission line theory.
|
2019-10-17T09:06:30.352Z
|
2019-10-01T00:00:00.000
|
{
"year": 2019,
"sha1": "3bd14e21b5b3852f01dd8e535a890b5957298b8f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1324/1/012076",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "401e93ffcacbdf4093e7f0155c5a942797103b31",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
}
|
246375394
|
pes2o/s2orc
|
v3-fos-license
|
THE INFLUENCE OF INCOME LEVEL, FAMILY EDUCATION, FINANCIAL LITERATURE ON RETIREMENT PLANNING MEDIATED BY SAVING BEHAVIOR
A pension program has three functions: an insurance function, a savings function, and a pension function. It has an insurance function because it helps guarantee participants against the risk of losing income at retirement age or through death; a savings function because participants are required to pay contributions while enrolled; and a pension function because it provides benefits that participants receive periodically throughout life once they retire. Considering these conditions, this study aims to analyse how strongly income level, family education and financial literacy, mediated by saving behavior, influence retirement planning. The study uses a quantitative method, namely a questionnaire, with 195 respondents drawn from private employees of Bank BPR UMKM Jawa Timur. SPSS is used to test validity and reliability, and PLS-SEM is used for the structural analysis. The study finds that income level, family education, saving behavior and financial literacy each affect retirement planning, and that family education and financial literacy also affect retirement planning as mediated by saving behavior.
Introduction
The development of today's economic life can shift values and mindsets in society, as people expect better welfare in the future. Every individual thinks about welfare not only while working but also in old age or retirement, and changes in mindset driven by globalisation will continue. In the past, parents expected that a child would look after, care for and support them once they were no longer of productive age and had slowed down in their activities; this expectation was related to the financial education provided by parents. Everyone should therefore be able to manage their finances to obtain the best possible outcome in the future, and old-age provision arranged while still young should help secure welfare in old age. People must be selective in choosing investments and understand their benefits and risks, and should form a portfolio by combining several assets to minimise risk, which is called selective diversification. The basic form of old-age provision is planning a retirement program.
A human life is like a wheel that is sometimes up and sometimes down. Several things must be considered and prepared from the start so that what we hope for matches reality; one of them is investment and financial management. Investment is the placement of funds in one or more assets over a specific period in order to obtain income and/or an increase in value. It can also be interpreted as a commitment to sacrifice current consumption in order to increase consumption in the future, or as delayed consumption.
The pension program has three functions, namely the insurance function, the savings function, and the pension function. It has an insurance function because it can help participants overcome the risk of losing income at retirement age or through death; a savings function because participants are required to pay contributions while they are in the program; and a pension function because it provides benefits that participants receive periodically throughout their lives after retirement (https://www.akuntansi complete.com).
In uncertain financial conditions, the public must seek extensive information about old-age investments. Given that pension fund investment forms the third pillar of wealth management, namely wealth distribution and transition, it is necessary to research the factors that make people care about retirement planning. These factors include income level, that is, a person's ability to earn and save financial income within a certain period. Income can affect several things, such as saving behavior, the readiness and ability of individuals to save; financial literacy, a person's knowledge and skill in managing and handling various financial problems; and family education, the parenting and skills education that shapes children's behavior in adulthood and the parent-child relationship in general, changing children's attitudes and thinking in the future, whether financial, scientific or social. These factors are expected to influence a person's retirement planning.
Research Methods
The population in this study consisted of private employees of Bank BPR UMKM East Java. Data were collected by distributing an online questionnaire, and a total of 195 respondents were involved in the study.
The independent variables in this study are income level, family education and financial literacy. The mediating variable is saving behavior, and the dependent variable is retirement planning.
Before analysis, validity and reliability tests were carried out to ensure that the data contained no errors and could be considered reliable and valid. Once the data were confirmed to be accurate and dependable, the analytical technique used was Partial Least Squares structural equation modelling (PLS-SEM).
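The structural model itself was estimated with WarpPLS; purely as a hedged illustration of the mediation logic (financial literacy → saving behavior → retirement planning), the sketch below runs a simple product-of-coefficients check with statsmodels on simulated data. The variable names and data are hypothetical, and this is not a replacement for the PLS-SEM estimation reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the questionnaire scores (hypothetical variable names).
rng = np.random.default_rng(42)
n = 195
literacy = rng.normal(size=n)
saving = 0.5 * literacy + rng.normal(scale=0.8, size=n)             # mediator
planning = 0.3 * literacy + 0.4 * saving + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"literacy": literacy, "saving": saving, "planning": planning})

# Path a: predictor -> mediator; paths b and c': mediator and predictor -> outcome.
a = smf.ols("saving ~ literacy", data=df).fit().params["literacy"]
model_bc = smf.ols("planning ~ literacy + saving", data=df).fit()
b = model_bc.params["saving"]
indirect = a * b   # simple product-of-coefficients estimate of the mediated effect
print(f"a={a:.3f}, b={b:.3f}, indirect effect={indirect:.3f}")
```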
Figure 1. Research Model

This research was conducted in East Java, Indonesia, with a sample of 195 employees of PT Bank BPR UMKM East Java, with data collected using an online questionnaire created through Google Forms. Based on the results obtained through descriptive and statistical analysis using the SPSS 23.0 and WarpPLS 7.0 applications, the following results were obtained:
Income level significantly affects Retirement Planning for PT Bank BPR UMKM East Java employees.
The current study's results show that income level has a significant effect on retirement planning. This is in line with the findings of MacLeod et al. (2012), cited in Shilery Tan et al. (2018), which show that workers who are less educated and earn lower incomes are less financially ready to retire. In particular, women have fewer retirement savings than men, resulting in less preparation for retirement. Younger age groups are more vulnerable to poor retirement planning due to improper financial management, as they are not yet financially secure; people tend to start thinking about retirement planning only as they grow older.
The conclusion from the respondents' answers is that income level has a significant effect on retirement planning, consistent with the theory described above: when a person earns a low income, they feel less able to set money aside to prepare and plan for retirement, because the money earned today is felt to be only enough for today's living costs, not for the future.
Family Education has a significant effect on Retirement Planning for PT Bank BPR MSME employees in East Java
The current study's results show that family education has a significant effect on retirement planning. This is in line with the findings of Fatemeh Kimiyaghalam et al. (2017), who revealed that parents substantially affect children's behavior. Family life educators and financial management educators should take this point and educate parents through their lesson plans. In addition, parents should encourage children to talk about their financial problems, save their pocket money, and be involved in family financial decision-making; parents should understand that their strategies for dealing with economic issues and their pattern of saving for the future may be a child's first lessons in finance. Financial advisors and planners, in turn, must identify their niche market when offering professional services on financial planning for a retirement planning strategy. From the explanation above, it can be concluded that the present results agree with earlier research: family education directly and significantly affects retirement planning. Financial education should therefore always be taught within families, especially to children, so that from an early age families and children can make sound decisions on financial planning for a retirement strategy.
Family education has a significant effect on the saving attitude of PT Bank BPR UMKM East Java employees
The current study's results show that family education significantly affects saving behavior, in line with the findings of Webley and Nyhus (2006), cited in Fatemeh Kimiyaghalam et al. (2017), who explain that parental behavior has a significant relationship with children's attitudes rather than with their behavior. They showed that parental behavior (such as talking about monetary problems with children) and parental orientation (conscientiousness, future orientation) had a weak influence on children's finances, and that saving decreases at an older age, so parents' behavior will protect the child's attitude in the future. Therefore, if a family's financial system is good, it will teach children to save from an early age. It can be concluded that family education is closely related to saving behavior, so education needs to be instilled from an early age to form attitudes and character, especially in the financial area: when financial education in the family is made familiar to children from an early age, it produces a good saving attitude and a lasting habit of saving regularly from what they earn.
Financial Literacy has a significant effect on the saving attitude of PT Bank BPR UMKM East Java employees
The current study's results show that financial literacy has a significant effect on saving behavior. This is consistent with research showing that many people have a low level of financial literacy and that this problem is related to a lack of financial planning and insufficient savings for retirement. Several studies confirm the high correlation between financial literacy and a set of financial behaviors related to savings, wealth and portfolio choices, so that someone with good financial literacy will have a saving attitude that helps secure their wealth and portfolio (Lusardi & Mitchell, 2011, 2014, cited in Fatemeh Kimiyaghalam et al., 2017). It can be concluded that financial literacy is very important in fostering saving attitudes and behavior: when a person's financial literacy is good, they will be more conscious of saving from an early age in order to secure their wealth for the future.
Financial Literacy has a significant effect on retirement planning for PT Bank BPR UMKM East Java employees
The current study's results show that financial literacy has a significant effect on retirement planning, in line with the findings of Lusardi and Mitchell (2011, 2014), cited in Fatemeh Kimiyaghalam et al. (2017), who report that many people have a low level of financial literacy and that this results in a lack of financial planning and insufficient savings for retirement. A person's level of financial literacy therefore has a direct impact on retirement planning. It can be concluded that when a person has good financial literacy, it directly affects how that person earns money, manages an asset portfolio and grows wealth through investment instruments, so that they are better prepared when planning for retirement.
The attitude of saving has a significant effect on retirement planning for employees of PT Bank BPR MSMEs in East Java
The current study's results show that saving behavior has a significant effect on retirement planning, in line with the findings of Fatemeh Kimiyaghalam et al. (2017), whose results show that saving attitudes can explain 25% of retirement planning, projecting a person's behavior onto their planning for retirement. This reflects the findings of the present study, since individual behavior and attitudes lead to concrete action: someone who is conscious of saving in their youth will see a positive impact on what they plan for old age, that is, for retirement. It can be concluded that a person's behavior and attitude are directly proportional to what they plan to do: when someone has good attitudes and behavior in saving, it is likely that they are already planning something to support themselves at retirement age.
Family education has a significant effect on retirement planning and is mediated by the saving attitude of PT Bank BPR UMKM East Java employees
The current study's results show that family education has a significant effect on retirement planning mediated by saving behavior. This is in line with Lynch and Netemeyer (2014), cited in Fatemeh Kimiyaghalam et al. (2017), who conducted a meta-analysis of the relationship between financial literacy, family financial education and financial behavior across 168 papers covering 201 previous studies. They found that interventions to improve financial literacy explained only 0.1% of the variance in financial behavior, with a weaker effect in low-income samples; in addition, parental behavior had a more significant relationship with children's attitudes than with their behavior. Parental behavior (such as talking about monetary problems with children) and parental orientation (conscientiousness, future orientation) had a weak influence on children's finances, and saving decreases at an older age, so parents' behavior will protect the child's attitude in the future; therefore, if the family's financial system is good, it will teach children to save from an early age. Adams and Rau (2011), cited in Fatemeh Kimiyaghalam et al. (2017), also conclude that "perhaps one of the strongest findings in the literature is that financial literacy (a cognitive factor) plays a key role in financial preparation for retirement."
From the explanation above, it can be concluded that saving behavior is quite strong in mediating the influence of family education on retirement planning for PT Bank BPR MSME employees in East Java. This means that employees must have good saving behavior in order for family education to matter and to affect their retirement planning.
Financial Literacy has a significant effect on retirement planning and is mediated by the saving attitude of employees of PT Bank BPR MSMEs in East Java
The current study's results show that financial literacy has a significant effect on retirement planning mediated by saving behavior. This is in line with Lynch and Netemeyer (2014), cited in Fatemeh Kimiyaghalam et al. (2017), whose meta-analysis of the relationship between financial literacy, family financial education and financial behavior across 168 papers covering 201 previous studies found that interventions to improve financial literacy explained only 0.1% of the variance in financial behavior, with a weaker effect in low-income samples. Adams and Rau (2011), cited in Fatemeh Kimiyaghalam et al. (2017), also conclude that "perhaps one of the most robust findings in the literature is that financial literacy (a cognitive factor) plays a crucial role in financial preparation for retirement."
From the explanation above, it can be concluded that saving behavior is quite strong in mediating the influence of financial literacy on the retirement planning of PT Bank BPR MSME employees in East Java. This means that employees must have good saving behavior for financial literacy to matter and to affect their retirement planning. Thus, financial literacy directly affects a person's retirement planning, and indirectly, through the mediating role of saving behavior, it also shapes a person's retirement planning.
Conclusion
The results of this study can serve as an additional basis for individual decision-making in planning for retirement. When a person resolves to plan for retirement in old age, several aspects must be prepared from an early age, and even passed down and taught to the next generation: a sufficient level of income, education within the family, especially about finances, adequate financial literacy, and a liquid investment portfolio that is not eroded by inflation, such as stocks, gold or property. However, when financial literacy, income level and family education are not balanced by good individual saving behavior, there will be no structured retirement planning. Good saving behavior is therefore needed so that financial literacy, income level and family education can support retirement planning as desired.
|
2022-01-29T16:02:29.209Z
|
2021-12-31T00:00:00.000
|
{
"year": 2021,
"sha1": "72b32df2845f118432c063c9a1992f27a7bd4316",
"oa_license": "CCBY",
"oa_url": "https://online-journal.unja.ac.id/jbsmr/article/download/15237/12743",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "39675dbec3d31e002e2e4671328d9b06a635b9f6",
"s2fieldsofstudy": [
"Economics",
"Education"
],
"extfieldsofstudy": []
}
|
254698455
|
pes2o/s2orc
|
v3-fos-license
|
Right to buy … time to move? investigating the moving behaviour of right to buy owners in the UK
Part of the political argument in favour of the right to buy (RTB) was that it would stimulate the economy by encouraging the inter-regional mobility of those in public sector housing. This is the first study to examine whether RTB-owners are indeed more mobile than those in social housing. Using longitudinal data from the British household panel survey and panel regression models we show that the probability of a RTB-owner making a long distance move falls between that of social renters and owner occupiers. However, the difference between RTB-owners and homeowners or social renters is not significant. Social renters are significantly less likely to move over long distances than traditional owners. The results also suggest that RTB-owners are less likely than traditional owners to move for job related reasons, but more likely than social renters.
The introduction of the right to buy (RTB) was one of the most important transformations of the British social housing market. Since it was introduced, over 2.7 million public sector dwellings have been sold to sitting tenants at prices well below market value, transferring wealth from the state to private households (Jones and Murie 2006). The primary reason for these sales was to stimulate homeownership and to respond to the desire of some tenants to own their properties. Although it was never part of the original discussion on the RTB, the political argument was used by some that the policy would stimulate the economy by encouraging the inter-regional mobility of those in the public sector (Boyle 1997). Tony Durant, Member of Parliament for the Conservatives, said in 1980: "The exercise of this right will also bring about greater mobility. One of the fundamental troubles of our economy in this country is that people find it very difficult to move" (Hansard 1980). As part of the same discussion, the Earl of Mansfield said: "If people wish to move, because they get a better, or different, job in another part of the country, what better chance have they of availing themselves of such opportunities if they own their own homes and can sell them. This is an important part of the thinking" (Hansard 1980).
According to the discussions in Parliament, social housing was seen as a major barrier to spatial mobility: "there is overwhelming evidence that owner-occupiers are more mobile than are public tenants. Public tenants often have to stay where they are by virtue of the local authority's allocation policies. So in a situation in which we want to encourage the maximum mobility of labour [...] that maximum mobility depends to an extent on ownership of homes" (the Earl of Mansfield in Hansard 1980). Various studies in Europe and the US show that homeowners are less mobile than renters (e.g. Rossi 1955; Boyle 1993; Boheim and Taylor 2002; Helderman et al. 2004, 2006), but these studies often conflate private and public renting. Hughes and McCormick (1981, 1987) found for the UK that living in public housing, rather than home-ownership, is the major tenure-related barrier to inter-regional mobility. Those resident in public housing were more likely to move residence over short distances than those in owner-occupied housing, but much less likely to migrate over long distances, and these results were confirmed in a number of later studies (Coleman and Salt 1992; Boyle 1995, 1997). This lack of long-distance mobility among social renters likely reflects a range of factors. First, social renters may be more likely to work in jobs that are only advertised locally, rather than nationally (Saunders 1985). Second, they have fewer resources to allow a move into private renting or owner-occupied housing elsewhere. Third, the administrative controls imposed upon the distribution of public housing make it difficult for tenants to move between local authorities. Public housing policies were designed to house those most in need of public housing, but they prioritised the needs of those from within their own jurisdictions above the needs of those from other local authority areas. These policies therefore enabled high levels of short-distance migration between council houses within their areas, as people's circumstances altered, but tended to restrict (long-distance) migration between areas (Boyle 1995).
Thatcher's government was particularly concerned about the lack of job-related migration among those in public housing. Neoclassical labour market theory identifies migration as an important instrument for individual career advancement (see Blau and Duncan 1967; van Ham 2002; Mulder and van Ham 2005), and this should have the effect of diminishing regional differences in wages and unemployment (Sjaastad 1962). Any barriers to people's ability to migrate can therefore harm both the economy and individual careers, making this issue a major policy concern (Boheim and Taylor 2002; van Ham 2002). Thus, Thatcher linked the inability to move to national economic performance: "Frequently investment goes where there are skilled people wanting work. But there must be some mobility. If today people aren't willing to move as their fathers did, the economy can't thrive" (Thatcher 1980). It was imagined that freeing up the housing market by removing the debilitating effect of public housing policies would help to reduce constraints on mobility (Black and Stafford 1988). However, discussions in Parliament reflected that not everyone was convinced this would work. In 1980 Lord Monson said: "…will giving the right to buy encourage labour mobility? Perhaps, but not, I think, to the extent imagined" (Hansard 1980). Lord Drumalbyn added: "No one can say whether it will increase mobility, but it could at least set the scene to make it easier for people to move" (Hansard 1980). And Jack Straw added: "few households that buy as sitting tenants would be likely to otherwise move away; nearly all would have remained local authority tenants for the rest of their lives" (Hansard 1980). Despite the discussions in Parliament on the potential effect of the RTB on mobility, there has been no research investigating whether social renters became more mobile after buying their house. There have been a number of studies investigating the resales of former council dwellings (Forrest et al. 1995, 1996; Williams and Twine 1994; Chaney and Sherwood 2000; Pawson and Watkins 1998a, b), but these studies focused on the characteristics of the buyers of these dwellings, while we are interested in the sellers (those who bought the dwelling from the social landlord under the RTB). This is the first study to examine whether the RTB legislation did indeed 'free up' those in public housing who bought their homes. We compare the mobility behaviour of RTB-owners with social renters, private renters and 'traditional' homeowners. Unfortunately, due to data constraints (mainly small numbers, see the data and methods section), it is not possible to investigate moves for job reasons separately. Instead we look at moves over short and long distances separately.
Literature review
The large volume of houses sold under the RTB since 1980 has dramatically altered the UK housing market (Jones and Murie 2006). The RTB initially gave only those living in council housing the right to buy their dwelling and was later extended to tenants of other non-charitable social landlords, such as housing associations. Over the years, changes in the policy and regional variations in its implementation have incrementally introduced a high level of complexity into the RTB legislation (see Jones and Murie 2006 for an excellent overview). One of the most recent changes is that in November 2010 members of the Scottish Parliament voted to end the RTB for new council and social housing tenants because of housing shortages in the social sector.
Over the past 25 years, there have been large fluctuations and regional differences in the number of sales, with peaks in 1982 and 1989. The RTB caused the distribution of dwellings by tenure to change radically: in 1981, 57.6% of all dwellings were owner-occupied, and by 2003 this had risen to 72.3%. As a result of the RTB and changes in the provision of social housing in the UK, the share of local-authority rented dwellings decreased from 29.2% to 13.0% and the share of housing association dwellings increased from 2.2% to 7.4%.
The consequences of the RTB legislation were studied intensively in the late 1980s and early 1990s. There are roughly two strands of literature on the RTB: the first focuses on those who bought their dwelling under the RTB and the second focuses on resales of former public sector homes. The literature on the selective nature of sales under the RTB has shown that the RTB has tended to involve better-off tenants, the more desirable properties, and the more desirable areas. More specifically, during the early years of the RTB, the majority of tenant purchasers were middle- to old-aged married couples with non-dependent children (Forrest and Murie 1988). They also tended to be from higher social classes with white-collar, skilled or semi-skilled occupations (Williams et al. 1987), with at least one and often two earners in the household (Kerr 1988; Lynn 1991; Forrest and Murie 1984a, b; Jones and Murie 2006). Almost half of the buyers had been tenants for 20 years or more, and previous moves within the council sector had allowed them to secure relatively desirable dwellings. Few of these initial RTB purchasers stated that they wished to move on in the near future, and many expected to stay in their house for the rest of their lives (Forrest and Murie 1984a, b; Foulis 1985). A survey by James et al. (1991) suggested that the RTB option was used by many tenants to secure their future in an area, rather than as a means of escape from 'welfare' housing or, indeed, as a means of enabling future mobility. It is now well recognized that a combination of factors has created an increased residualisation and stigmatization of the remaining council sector, has led to a shortage of social housing for relets in some regions, and has left a concentration of social housing in poor-quality, unpopular estates (Burrows 1999; Forrest and Murie 1988, 1990; Pawson and Bramley 2000; Jones and Murie 1998).
A second strand of literature investigates the resales of former public sector dwellings (see Forrest et al. 1995, 1996; Williams and Twine 1994; Chaney and Sherwood 2000; Pawson and Watkins 1998a, b). Although this strand of literature is related to the topic of the present study, it approaches the subject from a different angle. We are interested in the behaviour and characteristics of those who bought their house under the RTB and subsequently moved on. The resales literature takes as a starting point the former public sector dwelling and investigates who subsequently buys these dwellings from the original RTB purchaser, and what the impact of these resales is on local communities, especially rural areas. Some of the resales literature also (briefly) investigates the impact of the RTB on the vendors' subsequent housing career (for example Pawson and Watkins 1998a), but does not compare these careers with the careers of traditional home owners and renters. The resales literature does give some insight into the mobility of RTB-owners. In a study using a nationally representative survey of households in over 3,000 dwellings which had been resold after the initial sitting tenant purchased them under the RTB, Forrest et al. (1996) report, based on their own estimations, that by the end of 1991 about 14% of the properties sold under the RTB in England had been resold.
Those who suggested that tenants buying their dwelling under the RTB might be among the least mobile council tenants cast doubt on the supposition that the sale of council housing to existing tenants would increase working class mobility (Boyle 1995; Dietz and Haurin 2003). Thus, it was entirely possible that the barriers to mobility caused by mechanisms to allocate social housing might simply be replaced by the barriers to mobility which arise from homeownership (see Forrest 1987; Oswald 1999; Nickell 1998; Van Ommeren et al. 2000 on homeownership). Mobility of homeowners is strongly dependent on booms and busts in the housing market (Forrest and Murie 1992) and on regional house price differentials. Hamnett (1992) suggested that high prices in the South East were likely to be a more important factor overall in restricting labour mobility than council policies. Besides these general barriers to mobility, those who bought their house under the RTB might in addition find it difficult to find a buyer for their property because of the quality of the dwellings, or because of the neighbourhoods they are located in.
Perhaps surprisingly, then, there has been no detailed research on the mobility behaviour of those who bought their house under the RTB. If the RTB indeed stimulated interregional mobility, we would expect those who bought their house as sitting tenants to be more mobile over long distances than those who remained in social housing. On the other hand, selective sorting into the RTB programme of tenants with the strongest desire to stay put might cause those who bought their house to be the least mobile. In any case, we should expect low mobility in the first years after sitting tenants bought their house, because anti-speculation penalties applied if households moved within the first 3-5 years. We should also expect a drop in short distance mobility of those who bought under the RTB, compared to council tenants, because they now face the same barriers to mobility as traditional homeowners. We might also find regional and temporal differences in the effect of the RTB on migration, as regional differences in the supply and cost of housing are known to have an effect on interregional migration, and because the supply and costs of housing change over time (Dieleman 2001).
Data and methods
To test whether there are differences in the mobility behaviour of RTB owners and those in other tenures, longitudinal data was needed. The suitability of several nationally representative longitudinal studies containing information on both the RTB and mobility was assessed for this study, and three data sources were considered. The first is the ONS longitudinal study of England and Wales (ONS-LS), which includes a 1 % nationally representative sample of the England and Wales population. The ONS-LS follows people between 1971 and 2002, covering the entire RTB period, and has a very large sample size (over 900,000 study members in total). Using census data it is in theory possible to identify whether people bought their home under the RTB (by identifying tenure changes without physical moves), but a major disadvantage of the data is that information on tenure is only available at 10 year intervals. As a result it is only possible to identify immobile RTB owners (those who can be observed in two census years) and because of this limitation we concluded that the usefulness of the ONS-LS data for our study was too limited.
The second dataset we explored was the national child development study (NCDS). The sample consists of all children born in one week in March 1958 in Great Britain. The 2000 wave sample size was just over 16,000 (including around 1,000 immigrants born in the sample week who were added in waves 1-3). Only waves from 1978 (when respondents were aged 20) and onwards are relevant because we can assume that respondents did not start their independent housing careers before this year. In both 1981 (at age 23) and 1991 (at age 33) respondents were asked whether they bought their house as sitting tenants. After 1991 there is no information on buying as a sitting tenant. Although the NCDS data is much richer than the ONS-LS data, the usefulness of the data for our purposes is limited because here, too, we only observe a selection of stayers.
The third dataset we explored was the British household panel survey (BHPS) which is a nationally representative sample of 5,500 households (10,300 interviewed individuals) drawn in 1991 from 250 areas in Great Britain. Additional booster samples of 1,500 households for Scotland and Wales and 2,000 households for Northern Ireland were added to the main sample in 1999 and 2001 respectively, resulting in a total sample size of around 10,000 households. In the BHPS the same individuals are re-interviewed each successive year on many topics. For each wave, information was available on actual moving behaviour, reasons for moving, the region in which people live, and individual and household characteristics. A potential problem when studying migration with panel data is that those who move are more likely to leave the panel compared to those who stay. Buck (2000) has shown that although this problem is present in the BHPS, its effect on the study of migration is limited because migrant attrition is relatively small. After careful assessment, the BHPS was the best available dataset for our purposes. The main shortcoming of the BHPS was that the number of RTB owners identified, and therefore the number of moves by RTB owners, is relatively low, which has consequences for the robustness of the models (see discussion in the results section and the conclusion). However, the BHPS data is the best possible data source to study the mobility behaviour of RTB owners over a longer period of time.
Residential mobility decisions are made by households rather than individuals (Coulter et al. 2012), and therefore we created a household-year file based on the BHPS. Households consisting of one person were given the characteristics of the individual making up the household. For couple households the situation was more complicated. Because the household reference person in the BHPS is not constant over time (even if household composition does not change), and is biased towards males, a random person was selected from each couple for the first year that they were in the BHPS, resulting in 101,206 household-year records. We identified RTB-owners in two different ways. For the first wave (Wave A, 1991) we used the question ''At the time you bought this house/flat were you already living here as a tenant?'' If yes, and the vendor was a social landlord, such as a local authority or a housing association, we assumed that respondents had bought their house/flat under the RTB. For subsequent waves (Waves B to O, 1992 onwards) we identified RTB-owners by observing tenure change for non-movers. If tenure changed from social renter to homeowner without a move, respondents were identified as sitting tenants who became RTB-owners. This exercise resulted in a total of 4,430 RTB-owner years, or 4.45 % of all household years.
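The tenure-change rule described above can be expressed compactly. The sketch below is illustrative only: the column names (hid, wave, tenure, moved) are hypothetical placeholders rather than BHPS variable names, and the real derivation also drew on the 1991 sitting-tenant question.

```python
import pandas as pd

def flag_rtb_owners(df: pd.DataFrame) -> pd.DataFrame:
    """Flag household-years as RTB-owner from the first wave in which a
    non-moving household switches from social renting to owner-occupation."""
    df = df.sort_values(["hid", "wave"]).copy()
    prev_tenure = df.groupby("hid")["tenure"].shift(1)
    became_rtb = (
        (prev_tenure == "social_rent")
        & (df["tenure"] == "owner")
        & (df["moved"] == 0)  # tenure changed without a residential move
    )
    # Once flagged, the household is treated as an RTB-owner in all later waves.
    df["rtb_owner"] = became_rtb.astype(int).groupby(df["hid"]).cummax().astype(bool)
    return df
```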
The main dependent variable indicates whether a household has not moved, moved over short distance, or moved over long distance between two interview years. We defined long distance moves as moves over 35 km. We also tried alternative specifications such as 40 and 50 km, with little change to our modelling results. 35 km can be regarded as long distance because for most people this moves them out of their daily activity space (van Ham 2002). We had a concern that moves caused by union dissolution would bias the outcomes of our models if the rate of union dissolution varied between tenures. This appeared to be the case in our data: RTB-owners were much less likely to split up than traditional owners or social renters. When separation/divorce occurred the household was removed from the sample so that moves resulting from separations were not counted. In the BHPS moves were recorded in 6.6 % of the household-years (this is lower than average population mobility due to the removal of moves associated with separation/divorce).
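A minimal sketch of the distance-based move classification, assuming a derived variable (hypothetically named move_km) holding the distance moved between interview years and missing when the household did not move:

```python
import numpy as np
import pandas as pd

def classify_move(move_km: pd.Series, threshold_km: float = 35.0) -> pd.Series:
    """Label each household-year as no_move, short_distance, or long_distance."""
    return pd.Series(
        np.select(
            [move_km.isna(), move_km <= threshold_km],
            ["no_move", "short_distance"],
            default="long_distance",
        ),
        index=move_km.index,
    )
```

The 40 and 50 km alternative specifications mentioned above correspond to simply changing threshold_km.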
As expected, of all household years in the data, private renters move most often (moves in 23.9 % of household years), followed by 'other' renters (16.8 %), and social renters (7.2 %). Traditional owners (5.1 % of household-years) and RTB-owners (3.6 %) moved the least. Only 14.6 % of moves in the BHPS are over long distances. Social renters moved the least over long distance (6.4 % of moves), followed by private renters (11.8 %), RTB-owners (11.9 %), traditional owners (18.7 %) and 'other' renters (25.6 %). We identified only 137 moves by RTB-owners in the BHPS data. Arguably this is a very low absolute number, which has consequences for the robustness of our analyses, and which will be reflected in the standard errors of the RTB parameters. Although we are probably pushing the boundaries of the BHPS data, we feel that the analyses are still worthwhile as this is the best possible dataset available to study the mobility behaviour of RTB owners over a longer period of time.
We used panel logistic regression to model the probability of moving. We estimated two sets of models. In the first set the dependent variable indicated whether the household moved over short distance (1) versus did not move (0). In the second set of models the dependent variable indicated whether the household moved over long distance (1) versus did not move (0). The independent variables in our mobility models were lagged by 1 year (i.e., we took the characteristics of the household in the year prior to the move because, conceptually, pre-move characteristics should be better predictors of moving than post-move characteristics). The independent variables used were: age of oldest partner in the household; tenure; economic activity in five categories (single employed, single non-employed, couple both employed, couple non-employed, couple employed/non-employed); highest educational qualification in household; log of household income (corrected for inflation using ONS inflators); self-reported health based on daily activities among those in the household; room stress based on the number of rooms and the size of the household; age of the youngest child in the household. We also added three regional housing market characteristics (using Samples of Anonymised Records Areas, which roughly resemble Local Authorities) to the BHPS dataset: median and mean house prices; percentage social renting; and working age unemployment rates. See Table 1 for summary statistics of the main variables.
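The lagging of covariates and the short-distance model could be sketched as follows. This is an ordinary pooled logit for illustration, not the exact panel specification or full variable set used in the paper, and all column names are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_short_distance_model(df: pd.DataFrame):
    df = df.sort_values(["hid", "wave"]).copy()
    # Lag the explanatory variables by one wave so that pre-move characteristics
    # predict the move between this interview and the next.
    lag_cols = ["tenure", "age_oldest", "econ_activity", "log_income", "room_stress"]
    for c in lag_cols:
        df[c + "_lag"] = df.groupby("hid")[c].shift(1)
    sample = (
        df[df["move_type"] != "long_distance"]
        .dropna(subset=[c + "_lag" for c in lag_cols])
        .copy()
    )
    sample["moved_short"] = (sample["move_type"] == "short_distance").astype(int)
    formula = (
        "moved_short ~ C(tenure_lag) + age_oldest_lag + C(econ_activity_lag)"
        " + log_income_lag + room_stress_lag"
    )
    return smf.logit(formula, data=sample).fit()
```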
There are a range of selection issues which might bias the outcomes of our models. We set out to use formal statistics to control for selection bias, but the number of RTB-owners in our data was very low, and there were too many selection mechanisms potentially relevant simultaneously (selection into social housing, into the RTB, out of the RTB), which were impossible to disentangle. We therefore focused on the most important selection issue, which is structural to the BHPS data: differences between respondents who were already RTB-owners at the start of the panel (1991) and those who became RTB-owners during the panel.
As indicated above, we used two ways to identify RTB-owners in the BHPS. The problem with those identified in 1991 as RTB-owners (referred to as 'pre 1991 RTB-owners' in the rest of this paper) is that they are a selective sample of stayers: households who bought their house under the RTB and can still be observed in that dwelling in 1991. All those RTB purchasers who had since moved on to another dwelling before 1991 could no longer be identified as (former) RTB-owners (most are likely to have moved on to other owner-occupied dwellings and will be counted among the traditional owners). Thus, 'stayers' are over-represented among 'pre 1991 RTB-owners' compared to those who became RTB-owners during the panel (whom we could follow up, whether they moved or stayed). Including this relatively immobile group may have caused an underestimation of the mobility of RTB-owners. However, because we are dealing with a relatively low number of RTB-owners in our sample, we were keen to retain them in the analysis and we therefore ran all our models for four different research populations: (1) all cases (66,622 household years); (2) all cases except pre-1989 RTB-owners (these are more likely to be a selection of stayers than 1989-1991 RTB-owners, who bought more recently before the panel started) (64,393 household years); (3) all cases except 'pre 1991 RTB-owners' (for those who became RTB-owners during the panel the purchase year is known) (63,574 household years); (4) only households in social renting in 1991, some of whom became RTB-owners during the panel (purchase year is known) (15,178 household years). These different samples allow us to provide a more comprehensive assessment of the likely effects of the RTB on migration. It is not possible to say which one of the four research populations is the 'correct' one as each has advantages and disadvantages. Populations 3 and 4 probably offer the most robust modelling opportunities. In the models using research populations 1 and 2 we also included a dummy variable identifying 'pre-1991 RTB-owners'. We used this dummy variable to assess whether the 'pre-1991 RTB-owners' behaved in a different way than those who became RTB-owners during the panel study period.
Results
To gain more insight into possible selection bias originating from selective entry into the RTB we first modelled the probability of becoming a RTB-owner. We found that higher income social renters were the most likely to become a RTB-owner (results not shown). Economic activity status also contributed to the probability of becoming a RTB-owner: dual earner couples were the most likely to become RTB-owners, followed by couples with one earner and employed singles, then followed by non-employed couples and non-employed singles. Those with a medium to higher level of education were more likely than those with a low level of education to become RTB-owners. Finally, poor health had a significant negative effect on the probability of becoming a RTB-owner. The results show that financially stable and healthy social renters were the most likely to buy their house under the Right to Buy. These findings are consistent with the existing literature (see for an overview Jones and Murie 2006). Because the selection mechanism into the RTB is unlikely to be independent of the selection mechanism into long distance migration (income, economic activity, education and health all influence long distance migration as well) this might lead us to overestimate the effect of the RTB on long distance mobility.

Table 2 presents the results from a series of panel logistic regression models of the probability of moving over short distance (the reference category is non-movers). As discussed above, we used four different research populations to gain insight into the extent to which the 'pre 1991 RTB-owners' are a selected group of stayers. We also included a dummy variable identifying 'pre 1991 RTB-owners' in Models 1 and 2, but this variable was not significant. We also ran Models 1 and 2 without the 'pre 1991 RTB dummy' and although the parameter estimates of the main RTB dummy variable changed somewhat, the overall conclusions and significance levels stayed the same.
The main variable of interest in Table 2 is tenure, with traditional owners as the reference category. In Models 1-3, social renters, private renters and other renters were significantly more likely to move over short distances than traditional owners. RTB-owners did not differ significantly from traditional owners. Further testing showed that RTB-owners were significantly different from social renters (p < 0.01), providing some indication that once social renters become RTB-owners they behave more like traditional owners, at least with regard to short distance mobility. In Model 4, we compared RTB-owners and social renters; once again RTB-owners were significantly less likely to move over short distances than social renters. It is noteworthy that the tenure parameters are almost identical between Models 1 and 3, indicating that the different definitions of the research populations did not have an effect on the modelling outcomes. This suggests that selection bias is not affecting our estimation of short distance mobility. Table 2 also contains a set of control variables and all the results are in line with what we expected based on the residential mobility literature (see for an overview van Ham 2012). The probability of moving decreased with increasing age of the oldest household member (Models 1-4). With increasing household income the probability of moving decreased (Models 1-3). However, for social renters the probability of moving increased with income (Model 4), presumably because higher incomes provide social renters with more opportunities to improve their situation. Singles were more likely to move over short distances than couples, and couples consisting of two non-employed people were the least likely to move (Models 1-3). The probability of moving over short distances increased with level of education in all four models, while the presence of children over the age of 5 had a negative effect on short distance moves (Models 1-4). Room stress, defined as the number of rooms per person, had a negative effect on mobility: the more rooms per person, the less likely the household was to move. Having health problems had a positive effect on the probability of moving over short distance. Most of these moves will be associated with finding a dwelling that better fits the health situation of the household. Finally, households with one or more members from non-white ethnic minority groups were less likely to move than all-white households, consistent with findings elsewhere (van Ham and Feijten 2008; van Ham and Clark 2009).

Table 3 presents a set of four panel logistic regression models of the probability of moving over long distance (the reference category is non-movers). Again, results are presented for the four different research populations. Also in Table 3 the dummy variable identifying 'pre 1991 RTB-owners' was not significant. The tenure variable shows that social renters were significantly less likely to move over long distances than traditional owners. The signs for the RTB-owner parameters were negative, but insignificant (Models 5-7). The findings indicate that also over long distance, RTB-owners behaved more similarly to traditional owners than to social renters, although a more detailed test showed that the parameters of RTB-owners and social renters were not significantly different. This indicates that over long distances, the moving behaviour of RTB-owners was in between the behaviours of traditional owners and social renters.
In Model 8, only including social renters and during-panel-RTB-owners, RTB-owners were also not found to be different from social renters in their long distance moving behaviour. The effects of the control variables in the models of long distance were similar in direction to those in the models of short distance mobility. The main differences were that non-employed singles were the least likely to move over long distance, and households with children under the age of 5 were less likely to move than other households (with and without children). In the long distance models, the effects of ethnicity and health were not significant.
We also ran models (results not shown) which took into account the fact that RTB-owners, who bought their dwellings with considerable discounts (up to 70 %), faced a penalty if they moved within 3-5 years of buying their house as sitting tenants (penalty periods differed over time and geographically). We excluded all moves within the first few years after becoming a RTB-owner or after the last move (for other tenures) to make RTB-owners and other tenures more comparable. These models showed similar outcomes to those presented in Tables 2 and 3. We also looked at the regional and temporal differences in short and long distance mobility rates (results not shown). We followed two routes to investigate potential regional differences in mobility. First, we included interaction terms between region dummies (10 regions based on government office regions) and tenure dummies in the models. Second, we added regional characteristics to the household-year file using special licence Local Authority District (LAD) data. We hypothesised that in regions with high house prices, RTB-owners would be more likely to move over longer distances (to other regions where houses were relatively cheaper). We also hypothesised that in regions with high levels of unemployment RTB-owners would be more likely to move over longer distances (to regions with better employment prospects). The models including interaction effects between region and tenure did not change the effects of tenure on short and long distance moves. Most of the region dummies were not significant and we found no specific effects for RTB-owners. Londoners were significantly less likely to move short distances than people resident in the reference category of 'rest of South East'. Those from Wales, Scotland and Northern Ireland were the least likely to move over short distances. The least likely to move over long distances were those from the West Midlands, the North West, and Tyne and Wear and the rest of the North East. To model the effects of regional characteristics on mobility behaviour we used multilevel models. House prices and unemployment levels had almost no effect on the tenure parameters. As expected, the higher house prices were in a region, the more likely people were to move out over a long distance (a small effect). We found no specific effects for RTB-owners. To investigate whether RTB-owners behaved differently over time, we included period (year) dummies in the model and interacted these with tenure dummies (results not shown). Possibly due to small numbers we did not find any significant and systematic effect of period on moving behaviour. Finally, we explored moving reasons by tenure to get more insight into the moving behaviour of RTB-owners. Table 4 shows that among movers, other renters were the most likely to move for employment reasons (35.2 %), followed by private renters (12.3 %), traditional owners (10.0 %), RTB-owners (7.5 %) and social renters (4.8 %). RTB-owners once again fell between traditional owners and social renters. A χ² test showed that the differences between the tenure groups were significant (p < 0.001). Table 5 gives an overview of moving reasons based on another BHPS question with more categories.
We recoded the original 35 moving reasons in the BHPS into fewer categories using the criteria of Boheim and Taylor (2002). Overall, 'housing related issues' were the most important reason for moving (almost 40 % of moves) for all tenures, followed by 'neighbourhood related issues' (13.7 %). RTB-owners were more likely to move for neighbourhood related reasons (23.3 %) than those in any other tenure, including social renters (18.7 %). This is surprising, as we know that the uptake of the RTB was more common among those living in better properties in more popular neighbourhoods. RTB-owners also gave health related reasons as an important reason for moves (12.8 %), second only to social renters (16.4 %). This could reflect the health status of those in social housing and/or the fact that RTB-owners were relatively old compared to those in other tenures. RTB-owners were the least likely to move for housing-related issues (30.2 %), probably reflecting that most of them bought their dwelling while they were reasonably satisfied with it.
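The tenure-by-reason comparisons reported above amount to chi-squared tests on contingency tables, which could be reproduced along these lines (tenure and move_reason are placeholder names for the recoded BHPS variables):

```python
import pandas as pd
from scipy.stats import chi2_contingency

def test_reason_by_tenure(movers: pd.DataFrame):
    """Cross-tabulate moving reason by tenure among movers and test independence."""
    table = pd.crosstab(movers["tenure"], movers["move_reason"])
    chi2, p_value, dof, _ = chi2_contingency(table)
    return table, chi2, p_value, dof
```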
Conclusion
This is the first study to use complex longitudinal data to examine whether the moving behaviour of RTB-owners is different from those living in other tenures. We used the BHPS in an innovative way to identify those who became RTB-owners during the panel. The number of RTB-owners found in the data was small, thereby pushing the limits of what can be done with the data. The small numbers of RTB-owners, and consequently the small number of long distance movers, might have influenced the parameter of the RTB-owner variable, which was never significantly different from the parameter for the traditional owners. However, the standard errors are still relatively small and the results found are in line with what could be expected. As a consequence of the small numbers, the results of this study must be interpreted with some caution. At the same time, a careful review of alternative datasets has shown that the BHPS was the best data available for this study as it uniquely allowed us to follow RTB-owners over time. So despite the shortcomings of the data for this study, there is no other dataset available which would allow us to do the same analyses.
Our results demonstrated that RTB-owners were about as likely to move over short distances as traditional homeowners, but less likely than social renters. So after buying their house, RTB-owners bore more resemblance to the group they joined (homeowners) than to the group they had left (social renters). There are two opposing explanations for this finding: either RTB-owners are relatively satisfied with their dwelling, so there is no need to move, or they are even more 'trapped' in their dwelling than they were before they bought it, because now they face the same moving restrictions as traditional homeowners. Homeowners are known to be less mobile than (social) renters over short distances because they live in better dwellings and because the costs associated with moving are much higher for owners than for renters. Our results suggest that RTB-owners are slightly more likely to move over long distances than social renters, and slightly less likely than traditional homeowners, although the differences were not significant. So these results need to be interpreted with caution. We have also shown that the uptake of the RTB was very selective, with mainly employed tenants with relatively high incomes and no health problems using the RTB. This suggests that RTB-owners have certain measured and unmeasured characteristics which would have made them more likely to move over long distance anyway. The combination of possible selection bias, and the lack of statistically significant differences, leads us to conclude that it is questionable whether the RTB had a causal effect on the long distance mobility behaviour of RTB-owners.
Despite large differences between regional housing markets in the UK, our study did not find any regional differences in the moving behaviour of RTB-owners. We also found no temporal effect on the moving behaviour of RTB-owners. A possible cause is the relatively low number of RTB-owners and moves by RTB-owners in our dataset.
Finally, our analyses of moving reasons showed that RTB-owners are less likely than traditional owners to move for job related reasons, but they are more likely to move for job related reasons than social renters. Again, they take a middle position between traditional owners and social renters. RTB-owners were found to be more likely to move for neighbourhood related reasons than those in any other tenure, including social renters. This is surprising since previous research has shown that RTB purchases tended to involve the best properties in the most favoured neighbourhoods. It might be the case that RTB-owners want to move to nicer neighbourhoods, possibly neighbourhoods with a higher share of owner-occupied dwellings, in order to confirm their new status as homeowners (compare Michelson 1977). To gain more insight into the role of the neighbourhood in the moving behaviour of RTB-owners, future research should look in more detail at housing satisfaction, moving desires and housing ambitions of RTB-owners.
Prevalence and risk factors for postnatal mental health problems in mothers of infants admitted to neonatal care: analysis of two population-based surveys in England
Background Previous research suggests that mothers whose infants are admitted to neonatal units (NNU) experience higher rates of mental health problems compared to the general perinatal population. This study examined the prevalence and factors associated with postnatal depression, anxiety, post-traumatic stress (PTS), and comorbidity of these mental health problems for mothers of infants admitted to NNU, six months after childbirth. Methods This was a secondary analysis of two cross-sectional, population-based National Maternity Surveys in England in 2018 and 2020. Postnatal depression, anxiety, and PTS were assessed using standardised measures. Associations between sociodemographic, pregnancy- and birth-related factors and postnatal depression, anxiety, PTS, and comorbidity of these mental health problems were explored using modified Poisson regression and multinomial logistic regression. Results Eight thousand five hundred thirty-nine women were included in the analysis, of whom 935 were mothers of infants admitted to NNU. Prevalence of postnatal mental health problems among mothers of infants admitted to NNU was 23.7% (95%CI: 20.6–27.2) for depression, 16.0% (95%CI: 13.4–19.0) for anxiety, 14.6% (95%CI: 12.2–17.5) for PTS, 8.2% (95%CI: 6.5–10.3) for two comorbid mental health problems, and 7.5% (95%CI: 5.7–10.0) for three comorbid mental health problems six months after giving birth. These rates were consistently higher compared to mothers whose infants were not admitted to NNU (19.3% (95%CI: 18.3–20.4) for depression, 14.0% (95%CI: 13.1–15.0) for anxiety, 10.3% (95%CI: 9.5–11.1) for PTS, 8.5% (95%CI: 7.8–9.3) for two comorbid mental health problems, and 4.2% (95%CI: 3.6–4.8) for three comorbid mental health problems six months after giving birth). Among mothers of infants admitted to NNU (N = 935), the strongest risk factors for mental health problems were having a long-term mental health problem and antenatal anxiety, while social support and satisfaction with birth were protective. Conclusions Prevalence of postnatal mental health problems was higher in mothers of infants admitted to NNU, compared to mothers of infants not admitted to NNU six months after giving birth. Experiencing previous mental health problems increased the risk of postnatal depression, anxiety, and PTS whereas social support and satisfaction with birth were protective. The findings highlight the importance of routine and repeated mental health assessments and ongoing support for mothers of infants admitted to NNU. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-023-05684-5.
Introduction
Each year, approximately 1 in 7 infants born in the United Kingdom (UK) is admitted to a neonatal unit (NNU), which provides integrated services to sick and preterm infants who are in need of specialist medical care [1]. In addition to fear and anxiety surrounding the health of their infant, the NNU can also be a stressful environment for parents [2]. The physical appearance of the ill infant, exposure to intrusive medical equipment, along with potential miscommunications with staff can all contribute to parental distress [3]. Notably, some mothers of NNU infants report feelings of hopelessness and a sense of "loss of the maternal role", along with doubts regarding their ability to care for their baby [3]. The confluence of these environmental and psychological factors may increase the risk of mental health problems among parents of infants admitted to NNU [2]. Perinatal mental health problems affect not only the mother but may also negatively impact infants' health and development across the life course [4]. Previous research suggests that maternal mental health problems are associated with a decrease in the quality of parent-child relationships, poor infant outcomes such as impairments in neurocognitive and motor development, along with delays in language development [4].
According to results from a systematic review conducted by Vigod et al., [5] prevalence of postnatal depression among mothers of infants admitted to NNU ranges from 28 to 70%, depending on the diagnostic criteria used. In another systematic review by Malouf et al., [2] prevalence of postnatal anxiety among parents whose infants were admitted to NNU was as high as 42% one month after birth, and prevalence of postnatal post-traumatic stress (PTS) one month after birth was similar at 40%. Although these rates gradually decreased over time, both anxiety and PTS rates remained more prevalent and persistent in parents of infants admitted to NNU, as compared to figures reported for women in the general perinatal population [2]. Past research has estimated that mothers of infants admitted to NNU experience approximately 2-3 times higher prevalence of postnatal depression, anxiety, and PTS compared to the general perinatal population [6,7]. However, high heterogeneity and small sample sizes across studies examining mental health problems in mothers of NNU infants limit the reliability of prevalence estimates [2].
In addition to the limitations of existing evidence, few studies have reported prevalence of comorbidity of perinatal mental health problems among parents of infants admitted to NNU. Mental health problems commonly co-occur; a meta-analysis conducted by Falah-Hassani et al. [8] estimated that 13.1% of women in the general perinatal population experience comorbid depressive and anxiety symptoms within 8 weeks after childbirth. One small study conducted in the USA by Das et al. [9] reported comorbid postnatal depression, anxiety, and stress in 15.6% of a sample of mothers of infants admitted to NNU. However, the overall evidence for the prevalence of comorbid mental health problems in mothers of NNU infants is sparse.
Factors such as history of mental health problems, lack of social support, unplanned pregnancy, stressful life events, and interpersonal violence have been consistently linked with increased postnatal anxiety and depression [7] in the general perinatal population. Recent immigration, childcare stress, along with low partner support have also been linked to an increased risk of comorbidity of mental health problems [8]. Additionally, results from a meta-analysis by Grekin and O'Hara demonstrate that major depression and traumatic childbirth experiences are associated with increased risk of PTS among postnatal mothers [10]. However, it is unknown whether these and/or other unique factors are associated with mental health problems in mothers of infants admitted to NNU, as there is limited research exploring risk factors for postnatal depression, anxiety, PTS, and comorbidity of these mental health problems in this specific population [6].
Therefore, the aims of this study are to: 1) provide up-to-date prevalence estimates of postnatal depression, anxiety, PTS, and comorbidity of these mental health problems in mothers of infants admitted to NNU, compared to the prevalence rates among mothers whose infants were not admitted to NNU; 2) explore sociodemographic, pregnancy- and birth-related factors associated with postnatal depression, anxiety, PTS, and comorbidity of these mental health problems for mothers of infants admitted to NNU. Such data are critical in guiding future research, informing clinical guidelines as well as policy decisions.

Methods
Study design and sample
This study was a secondary analysis of data from the 2018 and 2020 National Maternity Surveys (NMS), which were cross-sectional, population-based postal surveys carried out by the National Perinatal Epidemiology Unit (NPEU) in England [11,12]. The NMS aim to assess women's experiences of maternity care around the time of pregnancy and birth, where participants have the choice to complete the survey: 1) on paper; 2) online; or 3) by telephone with an interpreter. Details about the larger study have been described elsewhere [11,12]. Briefly, a sample of 16,000 women was randomly drawn in 2018 using birth registration records held by the Office for National Statistics (ONS), and 16,050 women were sampled in 2020 [11,12]. All women in the sample lived in England at the time of survey completion, were aged 16 and over, and had given birth during a two-week interval in October 2017 (for the 2018 survey) or May 2020 (for the 2020 survey) [11,12]. These specific time intervals were selected to identify women who had given birth six months prior to survey administration [11,12]. Women whose baby had died were not included in the study [11,12]. NNU admission (yes, no) was assessed through a direct question within the NMS.
Postnatal depression, anxiety, and PTS
Postnatal depression, anxiety, and PTS were assessed within the NMS using standardised self-report measures. Postnatal depression was assessed using the Edinburgh Postnatal Depression Scale (EPDS). The EPDS consists of 10 items each scored from 0-3, with a maximum total score of 30, where a higher number indicates greater severity of symptoms [13]. To identify individuals with 'probable depression', a cut-off value of ≥ 13 was used, with a sensitivity of 79% and specificity of 89% [13]. Women scoring above the threshold were categorized as having probable depression.
Postnatal anxiety was assessed using the two-item Generalized Anxiety Disorder scale (GAD-2), which asks respondents to recall the frequency with which they have experienced the following anxiety symptoms over the past two weeks: 1) "feeling nervous, anxious, or on edge"; 2) "not being able to stop or control worrying". Items are scored from 0 to 3 (0 = not at all, 3 = nearly every day) [14]. A total score of ≥ 3 is used to classify women with clinically significant symptoms of anxiety, which corresponds with 86% sensitivity and 83% specificity [14].
The Primary Care Posttraumatic Stress Disorder Screen for DSM-IV (PC-PTSD-IV) was adapted and integrated into the 2018 NMS to identify women with PTS [15]. The tool encompasses four items: 1) re-experiencing traumatic events; 2) emotional numbing; 3) avoidance; and 4) hyperarousal. The Primary Care PTSD Screen for DSM-5 (PC-PTSD-5) was used in 2020 as this tool reflects more recent DSM-5 criteria [16]. Compared to the PC-PTSD-IV, the PC-PTSD-5 includes an additional item on feelings of guilt for causing the event(s) to happen [16]. All items are scored dichotomously (0 = no, 1 = yes) with a pre-determined threshold of ≥ 3, which corresponds with a sensitivity and specificity of 78% for the PC-PTSD-IV, compared to a sensitivity of 93% and specificity of 85% for the PC-PTSD-5 [15,16]. Women who scored above this threshold were categorized as experiencing PTS.
Within this study, comorbidity of mental health problems is defined as individuals who scored above the thresholds, as detailed above, for two or three mental health problems (postnatal anxiety, depression, and/or PTS).
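Applying the thresholds above to item-level responses and counting comorbid problems could look like the following sketch. Item column names are hypothetical placeholders, and the PC-PTSD-5 (five items) is assumed here, whereas the 2018 survey used the four-item PC-PTSD-IV:

```python
import pandas as pd

def score_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Derive binary screening indicators and a comorbidity count per respondent."""
    out = pd.DataFrame(index=df.index)
    epds_items = [f"epds_{i}" for i in range(1, 11)]   # 10 items, each scored 0-3
    gad2_items = ["gad_1", "gad_2"]                    # 2 items, each scored 0-3
    pts_items = [f"pcptsd_{i}" for i in range(1, 6)]   # 5 items, each scored 0/1
    out["depression"] = df[epds_items].sum(axis=1) >= 13
    out["anxiety"] = df[gad2_items].sum(axis=1) >= 3
    out["pts"] = df[pts_items].sum(axis=1) >= 3
    out["n_problems"] = out[["depression", "anxiety", "pts"]].sum(axis=1)
    out["comorbid"] = out["n_problems"] >= 2
    return out
```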
Risk factors
Potential risk factors for postnatal mental health problems in mothers of infants admitted to NNU were identified from the literature and those for which data were available from the NMS were included in this study. The sociodemographic factors included were: age group (≤ 24 years, 25-29 years, 30-34 years, 35 + years); country of birth (UK, outside UK); ethnicity (White-British, White-Other, Mixed, Asian, Chinese, Black, Other); level of area deprivation measured by the index of multiple deprivation (IMD) grouped into quintiles (1 [least socio-economically advantaged] to 5 [most socio-economically advantaged]); age when leaving full-time education (≤ 16 years, 17-18 years, ≥ 19 years); living with partner (yes, no); long-term mental health problem (yes, no); and social support (measured on a Likert scale from 0 [not at all supported] to 6 [completely supported]). The pregnancy- and birth-related factors included were: parity (primiparous, multiparous); multiplicity (singleton, multiple birth); pregnancy planning (planned, unplanned); satisfaction with labour and birth (scores of 0 [least satisfied] to 12 [most satisfied] on the 6-item birth satisfaction scale-revised indicator (BSS-RI) (2020 survey) or on 6 of the 10 items on the BSS-R (2018 survey)); length of NNU stay (24 h or less, 25 h to 7 days; 8 to 28 days; more than 28 days); mode of birth (vaginal, instrumental, planned caesarean, unplanned caesarean); self-reported antenatal anxiety (yes, no); self-reported antenatal depression (yes, no); smoking during pregnancy (yes, no); household or passive smoking during pregnancy (yes, no). Ethnicity was collapsed into "White-British" or "other" for the regression analyses. Further details on the assessment of these factors are available in the published NMS reports [10,11].
Statistical analysis
Datasets from 2018 and 2020 were cleaned prior to merging. Descriptive statistics were used to describe the characteristics of survey participants and to estimate the prevalence of postnatal depression, anxiety, and PTS for mothers whose infants were admitted to NNU (N = 935), with 95% confidence intervals (CI). The proportion of mothers whose infants were admitted to NNU having one postnatal mental health problem (depression, anxiety, or PTS), two postnatal mental health problems (any two of depression, anxiety, or PTS), and three postnatal mental health problems (depression, anxiety, and PTS) was also estimated. For comparison, corresponding prevalence estimates were calculated for mothers whose infants were not admitted to NNU (N = 7,604).
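An unweighted illustration of a prevalence estimate with a 95% confidence interval is sketched below; the published figures additionally apply survey weights, which this simple version ignores:

```python
from statsmodels.stats.proportion import proportion_confint

def prevalence_with_ci(n_cases: int, n_total: int):
    """Return the crude prevalence and a Wilson 95% confidence interval."""
    prevalence = n_cases / n_total
    lower, upper = proportion_confint(n_cases, n_total, alpha=0.05, method="wilson")
    return prevalence, (lower, upper)

# e.g. prevalence_with_ci(222, 935) gives roughly 0.237 with a CI of about 0.21-0.27
```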
While prevalence estimates included a comparison between mothers of infants admitted to NNU and mothers of infants not admitted to NNU, analysis of risk factors was restricted to mothers of NNU infants. As study outcomes were not rare (prevalence > 10%), analyses were conducted using modified Poisson regression to estimate risk ratios (RR) [18]. Modified Poisson regression was used to estimate the unadjusted associations between risk factors and each mental health outcome separately (postnatal depression, anxiety, and PTS) among mothers of infants admitted to NNU. For each outcome, factors significant at the univariable stage (p < 0.10) were subsequently fitted in multivariable models. Multivariable model building was conducted in three stages. First, multivariable models were adjusted for sociodemographic risk factors only. Next, models were adjusted for pregnancy-and birth-related risk factors only. Finally, both models were combined, removing factors that were not statistically significant (p > 0.05). Therefore, only risk factors significant (p < 0.05) after mutually adjusting for all other variables were retained in the final models. All models were adjusted for survey year.
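A hedged sketch of the modified Poisson approach (a Poisson model for a binary outcome with robust standard errors, exponentiated to risk ratios) using statsmodels; the outcome and covariates shown are placeholders rather than the exact NMS model specification:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def modified_poisson_rr(df, formula="depression ~ longterm_mh + antenatal_anxiety + social_support"):
    """Fit a Poisson GLM with robust (sandwich) variance and return risk ratios."""
    result = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
    risk_ratios = np.exp(result.params)
    rr_conf_int = np.exp(result.conf_int())  # 95% CIs on the risk-ratio scale
    return risk_ratios, rr_conf_int
```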
Multinomial logistic regression was used to investigate which sociodemographic and pregnancy- and birth-related risk factors were associated with having one postnatal mental health problem or two or three postnatal mental health problems among mothers of infants admitted to NNU, using unadjusted and adjusted odds ratios (OR). The same univariable and multivariable model building process described above was repeated for multinomial logistic regression.
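The multinomial model could be sketched analogously, with an integer-coded outcome for the number of mental health problems (0 = none, 1 = one, 2 = two or three); the variable names are again hypothetical:

```python
import statsmodels.formula.api as smf

def fit_comorbidity_model(df):
    """Multinomial logit for 0 / 1 / 2-3 postnatal mental health problems."""
    formula = (
        "n_problems_grp ~ longterm_mh + antenatal_anxiety"
        " + social_support + birth_satisfaction"
    )
    return smf.mnlogit(formula, data=df).fit()
```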
All analyses were conducted in Stata version 17 and survey weighted commands were used to reduce the effect of non-response bias [11,12]. Less than 5% of data were missing for the majority of variables [11,12], thus complete case analysis was performed.
Characteristics of survey participants
Questionnaires were returned by 9,120 women (4,509 in 2018 and 4,611 in 2020), corresponding to response rates of 29.0% in 2018 and 28.9% in 2020 [16,17]. Complete data on NNU admission and postnatal depression, anxiety, and PTS were available for 8,539 women. Table 1 presents the baseline characteristics of these women overall and according to whether they did (N = 935) or did not (N = 7,604) have infants admitted to NNU. The majority of respondents were between 30 and 34 years of age (34.2%) and born within the UK (72.3%). Approximately three quarters of respondents self-identified as White-British (70.3%) and 58.5% were at least 19 years of age when they left full-time education. Most respondents reported living with their partner at the time of survey administration (82.6%) and 33.9% lived in the most socioeconomically advantaged areas (top two quintiles of IMD). Around half of respondents were first-time mothers (45.9%) and the majority reported planned pregnancies (74.6%) along with singleton births (98.2%). More than half of respondents had a vaginal delivery (59.3%), and most reported full-term (93.3%) infants with normal birthweight (93.5%).
Although sociodemographic characteristics were similar for mothers of infants admitted to NNU and mothers of infants not admitted at baseline, the samples differed on several pregnancy-and birth-related factors. Unsurprisingly, compared to mothers of non-NNU infants, mothers of infants admitted to NNU were more likely to be first-time mothers (58.0%) and to have had an unplanned caesarean (27.1%). Infants admitted to NNU were also more likely to be very preterm (6.1%) or preterm (25.5%) and to have low (22.9%) or very low birthweight (6.1%), compared to infants not admitted to NNU.
Factors associated with postnatal depression six months after giving birth for mothers whose infants were admitted to NNU
Regression analyses were carried out for postnatal depression, anxiety, and PTS separately. Table 3 shows the prevalence of postnatal depression for mothers whose infants were admitted to NNU, according to sociodemographic and pregnancy- and birth-related factors, along with unadjusted and adjusted RRs. At the univariable stage, factors significantly associated with postnatal depression were social support, having a long-term mental health problem, satisfaction with labour and birth, length of NNU stay,

Factors associated with postnatal anxiety six months after giving birth for mothers whose infants were admitted to NNU

Table 4 describes the prevalence of postnatal anxiety for mothers whose infants were admitted to NNU, according to sociodemographic and pregnancy- and birth-related factors, along with unadjusted and adjusted RRs for the association between risk factors and postnatal anxiety. At the univariable stage, the factors significantly associated with postnatal anxiety were maternal age, country of birth, age when leaving education, social support, having a long-term mental health problem, satisfaction with labour and birth, pregnancy planning, antenatal anxiety, antenatal depression, and smoking during pregnancy.

Factors associated with postnatal PTS six months after giving birth for mothers whose infants were admitted to NNU

Table 5 shows the prevalence of postnatal PTS for mothers whose infants were admitted to NNU, according to sociodemographic and pregnancy- and birth-related factors, in addition to unadjusted and adjusted RRs for the association between risk factors and PTS six months after giving birth. At the univariable stage, the factors significantly associated with postnatal PTS were country of birth, ethnicity, social support, having a long-term mental health problem, pregnancy planning, satisfaction with labour and birth, gestational age, length of NNU stay, mode of birth, antenatal anxiety, antenatal depression, and smoking during pregnancy. In the final multivariable model, having a long-term mental health problem (aRR 2.14; 95% CI 1.43, 3.21) and antenatal anxiety (aRR 2.62; 95% CI 1.71, 4.01) remained significantly associated with PTS after mutually adjusting for all significant risk factors. Having social support (aRR 0.86; 95% CI 0.78, 0.94) and higher satisfaction with labour and birth (aRR 0.89; 95% CI 0.85, 0.94) were also protective against postnatal PTS at six months after birth. As long-term mental health problems and antenatal anxiety were the strongest risk factors for all three outcomes, the degree of overlap between these factors was calculated. Of the 925 women who had their baby admitted to neonatal care and who indicated whether or not they had long-term mental health problems and antenatal anxiety, 7.9% reported a long-term mental health problem and antenatal anxiety, 4.9% reported a long-term mental health problem but no antenatal anxiety, 12.9% reported antenatal anxiety but no long-term mental health problem, and 74.4% reported no long-term mental health problem or antenatal anxiety (Table S1 in Supplementary File 1).
Factors associated with having one, two, or three postnatal mental health problems for mothers whose infants were admitted to NNU six months after giving birth

Tables S2 and S3 (see Supplementary Files 2 and 3) summarise the unadjusted and adjusted odds ratios for the associations between sociodemographic and pregnancy- and birth-related factors and having one postnatal mental health problem or two or three comorbid postnatal mental health problems among mothers of infants admitted to NNU.
Summary of findings
Results from this representative population-based study suggest that approximately one in four mothers whose infants were admitted to NNU (23.7%) experienced postnatal depression six months after birth, while 16.0% and 14.6% experienced anxiety and PTS respectively. A third (31.1%) of mothers whose infants were admitted to NNU experienced at least one mental health problem, with 8.2% experiencing two mental health problems and 7.5% experiencing all three mental health problems six months after birth. Study findings suggest that the strongest risk factors associated with postnatal depression, anxiety, PTS, and comorbidity of these mental health problems include having a long-term mental health problem and antenatal anxiety. Having a self-reported long-term mental health problem was associated with an approximate two-fold increase in postnatal depression, anxiety, and PTS, and an eight-fold increase in comorbidity of mental health problems. Compared to women who did not report antenatal anxiety, women who reported anxiety experienced an approximate two-fold increase in postnatal depression, along with a three-fold increase in postnatal anxiety and PTS six months after childbirth. Women with antenatal anxiety also experienced six times greater odds of having comorbidity of postnatal mental health problems compared to women who did not report antenatal anxiety. There was a degree of overlap between long-term mental health problems and antenatal anxiety. However, the results of the multivariable analyses for all three outcomes showed that long-term mental health problems and antenatal anxiety are strong and independent risk factors for postnatal depression, anxiety and PTS. Study findings also suggest that having greater social support and higher satisfaction with labour and birth are both protective against all three postnatal mental health problems and comorbidities. Every unit increase in social support (on a scale from 0 to 6) was associated with a 22% lower risk of postnatal depression, 13% lower risk of postnatal anxiety, 14% lower risk of postnatal PTS, and 35% lower risk of comorbidity of these mental health problems six months after giving birth. Every unit increase in satisfaction with labour and birth (on the BSS-RI scale from 0 to 12) was associated with a 6-11% decrease in risk of postnatal depression, anxiety, and PTS and a 19% decrease in risk of comorbidity of these mental health problems six months after giving birth.
In comparison to current study findings, a recent meta-analysis conducted by Malouf et al. [2] estimated a 26.3% prevalence of anxiety and 24.5% prevalence of PTS in mothers whose infants were admitted to NNU between one month and one year postnatally. These prevalence rates are higher than our current study results, but discrepancies could be explained by differences in study sample, assessment tools, as well as time of assessment. However, all estimates in the review were pooled from a small number of studies with high heterogeneity, where individual prevalence ranged from 6.9% to 42.5% for anxiety, and 5.8% to 58.8% for PTS from one month up to one year after birth [2]. Das et al. [9] found that 15.6% of mothers whose infants were admitted to NNU reported symptoms of comorbid postnatal depression, anxiety, and stress in a sample of 118 women in the USA. Although researchers also employed survey-based methodology, higher prevalence rates of triple-comorbidity in this study compared to current study findings could be explained by the small sample size and timing of survey administration, where mothers were approached and screened prior to their infants being discharged from the NNU.
Having a long-term mental health problem and antenatal anxiety were the strongest risk factors predicting postnatal depression, anxiety, PTS, and comorbid mental health problems in the current study. Results from a meta-analysis demonstrated that history of psychiatric illness is a strong risk factor predicting postnatal depression within the general perinatal population [19]. Current results are also corroborated by other research indicating that prior history of depression and anxiety are important risk factors predicting postnatal depression, anxiety, as well as PTS for mothers of infants admitted to NNU [7,20]. In contrast, a prospective cohort study conducted by Rogers et al. [21] found that prior history of mental health concerns, especially anxiety and depression, did not predict postnatal depression and anxiety for mothers of preterm infants admitted to the NNU. However, Zaers et al. [22] explained that underreporting prior mental health concerns and the use of self-reported measures instead of clinical interviews could explain this lack of association. Similarly, researchers also found antenatal anxiety to be the most significant risk factor predicting depression, anxiety, and PTS at six months after birth [22]. Despite being a risk factor predicting postnatal depression, anxiety, and PTS in the perinatal population, antenatal depression was not found to be associated with postnatal anxiety, depression, or PTS in the current study. It is hypothesized that assessment of antenatal depression with a single, direct question may have introduced underreporting. As noted by Fellmeth et al., [23] some mothers might be reluctant to disclose or self-identify as depressed when faced with a direct question, due to stigma surrounding perinatal depression. Although antenatal anxiety was also assessed using a single question, underreporting might be less as the term anxiety is more often used in everyday language, thus it may not have the same pathological connotation as "depression".

Social support and satisfaction with labour and birth were found to be protective against individual and comorbid mental health problems. Lack of social support has been frequently cited in previous research as a significant risk factor predicting postnatal depression and anxiety for mothers whose infants were admitted to NNU, where those who report higher levels of perceived or self-reported social support show fewer symptoms of both postnatal depression and anxiety [24]. Although there are fewer studies examining PTS in mothers of NNU infants compared to postnatal depression or anxiety, studies have concluded that increased social support is associated with decreased PTS, both for mothers of NNU infants and for postnatal mothers in general [25]. Similar to current study results, maternal satisfaction with childbirth has been associated with decreased symptoms of postnatal depression, anxiety, and birth-related PTSD in the general perinatal population [26,27]. However, there is a lack of research examining birth satisfaction and postnatal mental health outcomes for parents of infants admitted to NNU.
Strengths and limitations
This study is the first to use a large population-based sample to explore the prevalence and risk factors for postnatal depression, anxiety, PTS, and comorbidity of these mental health problems among mothers of NNU infants, compared with mothers of infants not admitted to NNU from the same population. Previous studies examining mental health outcomes in mothers whose infants were admitted to NNU have included small samples. For example, in a recent meta-analysis [2], the sample sizes of included studies ranged from only 29 to 600 participants. Our study provides a significantly larger sample size, enabling more reliable prevalence estimates. Furthermore, information was available on non-respondents to the survey, which enabled the calculation of survey weights to mitigate the effects of non-response bias.
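As a rough illustration of how survey weights can adjust prevalence estimates for non-response, the sketch below computes weighted and unweighted prevalence from binary screening results under a simple inverse-probability-of-response weighting scheme. It is a minimal, hypothetical example; the data, weights, and helper function are placeholders and do not reproduce the NMS analysis.

```python
# Minimal sketch: weighted prevalence under inverse-probability-of-response weights.
# All data and weights below are hypothetical illustrations, not NMS values.

def weighted_prevalence(cases, weights):
    """Prevalence where each respondent is weighted by 1 / response probability."""
    total_weight = sum(weights)
    case_weight = sum(w for c, w in zip(cases, weights) if c)
    return case_weight / total_weight

# cases: 1 = screened positive (e.g., above a questionnaire cut-off), 0 = negative
cases = [1, 0, 0, 1, 0, 0, 0, 1]
# weights: larger for groups that were less likely to respond to the survey
weights = [2.0, 1.0, 1.0, 1.5, 1.0, 1.0, 2.5, 1.0]

unweighted = sum(cases) / len(cases)
weighted = weighted_prevalence(cases, weights)
print(f"unweighted prevalence: {unweighted:.3f}")
print(f"weighted prevalence:   {weighted:.3f}")
```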
As this study employed a cross-sectional design, causality between risk factors and mental health outcomes cannot be determined. Participants were asked to recall experiences during pregnancy, which introduces possible recall bias given that surveys were distributed six months postnatally. Women experiencing poor mental health at the time of survey administration may have been more likely to report negative experiences and poorer antenatal mental health. Women who were still in the hospital following extremely preterm birth, whose babies were still admitted to NNU, and those experiencing severe mental health problems at the time of survey administration may also have been less likely to respond. Further, although the included risk factors were identified by a rigorous literature search, this study is a secondary analysis of NMS data, and some risk factors for postnatal depression, anxiety, and PTS identified by previous studies were not assessed in the NMS. For example, previously cited factors such as life stress, intimate partner violence, and previous miscarriage, were not included in the NMS [5,6,10]. Furthermore, we were unable to differentiate between different types of NNU, and it is possible that the level of care provided by the NNU might affect maternal mental health outcomes. Another limitation of the current study is that prevalence estimates were based on self-report measures instead of the recommended gold standard clinical interview [28]. In addition, there is conflicting evidence for the performance of the GAD-2 in perinatal populations [29,30] and limited evidence for the PC-PTSD-5. However, all of the self-report measures selected are validated tools with established cut-off points, facilitating comparison with other studies. Furthermore, the National Institute for Health and Care Excellence (NICE) in the UK specifically recommends the EPDS and GAD-2 for use during pregnancy and the postnatal period [31].
Implications for research and practice
Our findings highlight the importance of mental health screening for mothers of infants admitted to NNU and postnatal mothers more broadly. Routine screening of mothers whose infants were admitted to NNU may be an effective way of identifying those women with poor mental health. NICE recommends assessing mental health symptoms of all women during their first visit to the general practitioner during pregnancy and in the early postnatal period. However, this postnatal visit for mothers may be missed if their infant(s) remains within the NNU. Parents of infants admitted to NNU may require repeated mental health assessments over the longer-term [32]. Additionally, mental health screening during pregnancy should also aim to identify risk factors for postnatal mental health problems including low social support and pre-existing or ongoing mental health problems to ensure that those women at risk receive support and adequate follow up.
A recent systematic review outlined the feasibility of a universal screening program for postnatal mood and anxiety disorders among caregivers of infants admitted to NNU using short screening tools [33]. Given the routine use of short screening tools such as the GAD-2 [14] for anxiety or the Whooley [34] questions for depression within clinical settings, future studies could evaluate the feasibility and effectiveness of combining such tools to inform the creation of a standardized, universal mental health screening program for parents of infants admitted to NNU. In addition to mental health screening, supportive and preventive interventions for mothers and caregivers of NNU infants also require more attention [33]. As both social support and long-term mental health problems emerged as significant risk factors for all mental health outcomes within this study, emphasis could be placed on peer support and family-centered programs, as well as interventions targeting mothers with pre-existing mental health conditions.
Prior research on peer support programs within the NNU demonstrated that receiving peer support decreases parental stress, anxiety, and depression through increasing empowerment, confidence, and adaptive coping [35]. Family-integrated care is also increasingly emphasized within NNUs, where healthcare should be provided in the context of the patient, the family, and their community [36]. Results from a randomized controlled trial demonstrated that family-integrated care is effective at reducing NNU-related stress and anxiety for mothers [34]. Therefore, peer support or family-integrated mental health support programs could also be incorporated into routine care for mothers of infants admitted to NNU, with emphasis on mothers with identified risk factors.
Conclusion
Results from this study demonstrate a consistently higher prevalence of postnatal depression, anxiety, PTS, and comorbidity of these conditions among mothers whose infants were admitted to NNU, compared to mothers whose infants were not admitted. Mothers with a long-term mental health problem and antenatal anxiety were at higher risk of experiencing postnatal mental health problems. However, women with greater social support and higher satisfaction with labour and birth reported a lower risk of postnatal mental health problems. In addition to routine and repeated mental health screening for caregivers within the NNU, women with identified risk factors should be recognized early and offered appropriate support and follow-up.
|
2023-05-23T13:16:20.093Z
|
2023-05-22T00:00:00.000
|
{
"year": 2023,
"sha1": "b73625fa42766e6c64133edb980390e842d732bc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "cb17126a83ced9fb126234aa1488a7502d51ba2d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
213077574
|
pes2o/s2orc
|
v3-fos-license
|
Physical and chemical research of the microstructure of the cement composite filled with aluminosilicate and glass microspheres
Using physical and chemical analysis, the possibility of obtaining a dense and crack-resistant Portland cement-based composite material filled with aluminosilicate and glass microspheres and a complex additive which purposefully enhances each of the mentioned properties has been investigated. During the molding of a composite material in which the particles of the filler and the binder have the same size in microns, micro-reinforcing fibrous additives will not be effective because their particles are larger than the filler's particles. To specify the conditions for the formation of a micro-reinforcing layer of ettringite needles and for increasing the density of the cement matrix structure, relevant theoretical and experimental studies were carried out and a complex additive consisting of calcium nitrate and chloride, carboxymethylcellulose and plasticizer was proposed. Based on the physical and chemical analysis performed, it has been found that adding the complex additive facilitates the compacting of the cement matrix structure through the additional synthesis of crystalline hydrates: ettringite, calcium hydrochloroaluminate, calcium hydroxychloride, and calcium hydrosilicates of the felt-like structure, which improves the physical and mechanical properties of a Portland cement-based composite material.
Introduction
To obtain a dense and crack-resistant composite material based on a cement binder having a good substrate-gripping capacity, it is necessary to include in its composition certain additives purposefully enhancing each of the mentioned properties [1,2]. During the molding of a composite material in which the particles of the filler and the binder have the same size (namely, a thermal insulation material based on Portland cement and fillers consisting of hollow microspheres), micro-reinforcing fibers will not be effective because the size of their particles exceeds the size of the filler particles several times over. To specify the conditions for the formation of a micro-reinforcing layer of ettringite needles and for increasing the density of the cement matrix structure, relevant theoretical and experimental studies were carried out.
Main content of the work
Structural transformations occurring during the hydration of the binder with the proposed complex additive were studied by the method of infrared spectroscopy (IRS) and with the use of an electron microscope. This allowed us to reveal the nature of intermolecular and intramolecular interactions and to obtain information about the structure of the compounds. Three samples in the form of a mixture solidified within 28 days were prepared for the study. Sample No. 1: Portland cement; sample No. 2: Portland cement with glass microspheres and a complex chemical additive; sample No. 3: Portland cement with aluminosilicate microspheres and a complex additive (Figure 1).
The infrared spectrum of sample No. 1 contains bands specific to certain clinker components. Bands specific to belite (840 cm⁻¹) and alite (880 to 950 cm⁻¹) are typical for cement. The IR spectra of all three cement samples have bands of aluminates in the range 450 to 550 cm⁻¹ and bands of aluminoferrites in the range 600 to 700 cm⁻¹, as well as valence vibration bands in the range 1000 to 1150 cm⁻¹.
The IR spectrum of sample No. 2 shows decreased absorption bands at 500 and 1220 to 1260 cm⁻¹. Increased absorption in the range of wavenumbers 1010 to 1100 cm⁻¹ is explained by the presence of carbonates, hydrosilicates and ettringite, and the band at 1425 to 1450 cm⁻¹ shows the presence of Ca(OH)₂ or calcium hydrosilicates. The IR spectrum of sample No. 3 is similar to the spectrum of sample No. 2, but its band at 1500 cm⁻¹ is more distinct, which may indicate the presence of Ca(OH)₂ or high-basic calcium hydrosilicates; this is also valid for the band at 3423 cm⁻¹, which is present in all samples. Vibration bands at 700 to 800 and 2928 cm⁻¹ specific to calcite [7,8] are also present in the spectra of samples No. 2 and No. 3.
All these spectral changes indicate that Si–O and Ca–O bonds are broken during the interaction of the clinker mineral phases with water and the complex chemical additive.
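For readers who want to tabulate these assignments, a minimal sketch is given below that collects the band ranges reported above into a lookup table and labels measured peak positions. The ranges are taken from the text; the helper function and the example peaks are illustrative only and not part of the original analysis.

```python
# Sketch: classify IR absorption bands using the assignments reported above.
# Ranges are in cm^-1; the lookup and example peaks are for illustration only.

BAND_ASSIGNMENTS = [
    ((450, 550), "aluminates"),
    ((600, 700), "aluminoferrites"),
    ((700, 800), "calcite"),
    ((840, 840), "belite"),
    ((880, 950), "alite"),
    ((1000, 1150), "valence vibrations / carbonates, hydrosilicates, ettringite"),
    ((1425, 1450), "Ca(OH)2 or calcium hydrosilicates"),
]

def assign_band(wavenumber_cm1):
    """Return the phase assignment(s) matching a measured peak position."""
    hits = [name for (lo, hi), name in BAND_ASSIGNMENTS if lo <= wavenumber_cm1 <= hi]
    return hits or ["unassigned"]

for peak in (500, 880, 1100, 1440):
    print(peak, "->", assign_band(peak))
```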
Electron microscope images show most clearly the relative arrangement of structural elements in the cement composite material, the morphology of the phases, and the sizes and shapes of crystals and their binding [9,10]. To study structural features of the composite material in the process of its formation at the microlevel, a composition containing Portland cement, aluminosilicate and glass microspheres and a complex chemical additive was selected. The electron photographs (Figure 2) show the microstructure of the composite material. The image shows that the diameters of the hollow microspheres range from 10 to 150 microns and that the microspheres are bound with crystal-hydrate structures of the cement stone. The structure of the material is dense due to the well-balanced placement of microspheres of different diameters; there are no cavities or cracks. The selected fragment of the image clearly shows that a microsphere has a multi-chamber capsular structure, giving the possibility to take up internal stresses and to effectively redistribute them between the cement matrix and the filler. This ensures increased strength parameters of the composite material and increased crack resistance of a coating. Plates and blocks of calcium hydrosilicates and star-shaped aggregations of calcium hydrochloroaluminates (seen in the center of the image) are formed on the ettringite frame, which connects the microspheres with its needles; crystals of calcium hydroxychloride shaped as elongated prisms and calcium hydronitroaluminates shaped as thin hexagonal petals are also present (Figure 3). Sometimes separate calcite cubes and lime plates may be formed.
Conclusion
Therefore, based on the performed physical and chemical analysis, it has been found that adding a complex additive facilitates the compacting of the cement matrix structure through the additional synthesis of crystalline hydrates: ettringite, calcium hydrochloroaluminate, calcium hydroxychloride, and calcium hydrosilicates of the felt-like structure, which improves physical and mechanical properties of a Portland cement-based composite material.
|
2020-01-02T21:57:18.876Z
|
2019-12-19T00:00:00.000
|
{
"year": 2019,
"sha1": "86770b5e41d24b6e0faf7717e0ad23cba99cf5f1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/708/1/012079",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d34fc8140125b6e3e30a4eea656522767a0997aa",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
238060699
|
pes2o/s2orc
|
v3-fos-license
|
Heat Transfer Performance of Different Fluids During Natural Convection in Enclosures with Varying Aspect Ratios
The heat transfer process takes place in numerous applications through the natural convection of fluids. Investigations of natural convection heat transfer in enclosures have gained vital importance in the last decade for the improvement of the thermal performance and design of heating/cooling systems. The aspect ratio (AR = height/length) of the enclosure is one of the crucial factors in the natural convection heat transfer process. The investigated fluids, consisting of air, water, engine oil, mercury, and glycerine, have numerous engineering applications. Heat transfer and fluid flow characteristics are studied in 3-dimensional rectangular enclosures with varying aspect ratios (0.125 to 150) using computational fluid dynamics (CFD) simulations. Studies are carried out for the five fluids, spanning the Prandtl number range 0.01 to 4500, in rectangular enclosures with hot and cold surfaces and varying temperature differences of 20 K to 100 K. The Nusselt number and heat transfer coefficients are estimated at all conditions to understand the dependency of the heat transfer performance of the selected fluids on the ARs. Temperature and velocity profiles are compared to study the flow patterns of the different fluids during natural convection. Nusselt number correlations are developed in terms of aspect ratio and Rayleigh number to characterize the natural convection heat transfer performance.
Introduction
The mechanism of heat transfer induced by buoyancy force, which causes movement of fluid due to the change in density produced by a temperature gradient, is known as natural convection. This buoyancy-driven heat transfer process in enclosures has a variety of industrial applications, mainly in thermal management systems for heating or cooling of chemical processes and electronics. Examples of natural convection heat transfer include heat dissipation systems for power transformers, solar energy collectors, heating through electric baseboard heaters, cryogenic storages, double-pane windows, cooling of electronic chips and transistors through heat sink systems, chemical vapor deposition, nuclear power plants, and many more. Natural convection heat transfer rates are relatively lower than those of forced convection; however, the natural convection process does not involve any additional energy requirement for pumping, and the heat transfer process is dependent on the geometry and fluid properties.
The mechanism of natural convection heat transfer has been widely studied over the years due to its extensive applicability in the field of energy conservation. Experimental and numerical studies have been carried out in different geometries and scenarios [1][2][3]. However, limited studies are available for 3D vertical enclosures with varying aspect ratios (ARs). Ganguli et al. [7] investigated the flow patterns during natural convection in air, water, mercury, and silicon oil in vertical slots with an aspect ratio (AR) of 20. Instability mechanisms and multicellular patterns were observed in vertical rectangular slots at different Prandtl numbers (Pr). Nishimura et al. [10] studied the natural convection behavior of fluids numerically and experimentally in rectangular enclosures with multiple vertical partitions. The concept was applicable to reduce the heat transfer rate by up to 70-90% by introducing 2-5 partitions between the hot and cold surfaces. In later studies, Turkoglu and Yücel [11] developed a computer-based SIMPLE algorithm to investigate the dependency of the heat transfer rate on aspect ratio, number of partitions, and Rayleigh number. It was reported that the Nusselt number decreases with increasing partition numbers. It was also observed that the Nusselt number values decrease with increasing aspect ratio. Long and Zhang [12] experimentally investigated the natural convection behavior of helium at supercritical conditions in a vertical cylinder. It was found that supercritical helium showed better heat transfer performance than conventional gases in vertical, closed cylinders with an AR of 27. In another study by Altaç and Kurtul [13], natural convection heat transfer was numerically studied in tilted rectangular enclosures with angles of 0°-90°. The studies were carried out for Rayleigh numbers in the range 10^5-10^7 and aspect ratios of 1 and 2. It was observed that heat transfer rates increase with increasing aspect ratio. Warren et al. [14] described comprehensive natural convection characteristics in vertical and horizontal geometries. It was reported that convective heat transfer improved with decreasing aspect ratios. Cavities with high aspect ratios showed lower Nusselt number values than squared cavities. Experimental investigations were carried out by Kamotani et al. [15] for enclosures with varying aspect ratios. They found that the Nusselt number is independent of the Prandtl number for AR = 1 but highly dependent on the Prandtl number for AR < 1. In another numerical study [16], it was observed that the Nusselt number and Rayleigh number are directly proportional to the Prandtl number for laminar conditions in square cavities. Arici et al. [17] investigated heat transfer and fluid flow characteristics in double, triple and quadruple pane windows for a variety of aspect ratios and temperature gradients. In their numerical study, they observed that gap width is an important energy-saving factor, and the effect is more noticeable at low temperature gradients. Ganguli et al. [18] investigated the heat transfer behavior of air during natural convection across cold and hot surfaces in enclosed rectangular geometries with a wide range of aspect ratios. The flow pattern, temperature distributions, and velocity profiles were examined using 2-dimensional computational fluid dynamics (CFD) simulations. They found that the flow patterns are highly dependent on the aspect ratios of the enclosures.
This research presents an investigation of the natural convection behavior of different crucial heat transfer fluids, i.e., air, water, engine oil, mercury, and glycerine. These fluids are encountered in numerous engineering applications. A great number of air-based cooling processes are found in home, commercial, and industrial appliances. Natural convection by airflow is considered the most economical heat transfer process. The buoyancy movements in air-cooling systems are much higher than in liquid-based cooling media due to the properties of air. Therefore, accurate simulations of air-cooling processes require proper wall boundary conditions for the velocity components. Typical examples include a hot-air balloon, central heating in buildings, convection in an oven, the chimney effect, and ice melting processes. Water-based natural convection heat transfer processes are used in various types of heat exchangers, cooling jackets, radiators, and the thawing of frozen materials. Due to certain limitations of air-cooling systems, water-based cooling systems are known to be a feasible and economical solution for high heat flux applications due to the high specific heat of water. Engine oil is less dense than water and can operate at higher temperatures than water, which is limited to below 100 °C. The main applications of engine oil include distribution transformer and engine cooling management. The two other fluids chosen in this research are glycerine and mercury, which have critical significance in the chemical and nuclear industries. Knowledge of natural convection behavior is a crucial aspect of the storage, transportation, and processing of these fluids.
Limited investigations have been carried out on the fluid flow and heat transfer characteristics using 3-dimensional CFD simulations. Natural convection heat transfer of these fluids in enclosures is significant for numerous engineering applications, and a wide research gap is found in the literature. 3-dimensional models of rectangular vertical enclosures can predict natural convection behavior with more accuracy and reliability than 2-dimensional models because they capture the edge effects of the enclosure. CFD analyses are performed to investigate fluid flow and heat transfer characteristics for a wide range of aspect ratios. The Nusselt number and heat transfer coefficients for each fluid at varying conditions are estimated and compared. A generic Nusselt number correlation is proposed for the different fluids, based on the simulated results.
Methodology
Heat transfer and fluid flow characteristics are numerically studied during natural convection heat transfer for five different fluids in rectangular enclosures. The geometry consists of a rectangular enclosure with one hot and one cold surface, shown in Fig. 1. Insulated conditions are applied to the other four surfaces of the enclosure. Heat transfer performance is investigated for five fluids with properties comparable to air, water, engine oil, mercury, and glycerine, respectively. All fluids are present in the ANSYS database, and the built-in thermophysical properties of the fluids are used in the simulations. The studied fluids have Prandtl numbers in the range 0.5 ≤ Pr ≤ 4500. Simulations are performed at varying aspect ratios (0.125 ≤ AR ≤ 150) and temperature differences (20 K ≤ ΔT ≤ 100 K) across the hot and cold surfaces. The lengths (L) and the heights (H) of the enclosures are varied to obtain 18 geometries with different aspect ratios in the range of 0.125 to 150. The width (W) of the enclosure is kept constant for the studied geometries. The CFD simulations are performed in ANSYS (Fluent). Fundamental equations and literature are used to obtain thermophysical data for all fluids [19,20]. During the numerical calculations, all fluids are assumed to be single phase. For better accuracy, fine meshing is used near the edges of the rectangular geometry with a bias meshing technique. The Boussinesq approximation is used, which implies that the fluid movement is mainly due to buoyancy force. Steady-state simulations are performed with the acceleration of gravity in the negative y-direction. A pressure-based solver is applied along with the absolute velocity formulation. The energy equation, laminar model equations, and surface-to-surface equations are considered. For buoyancy calculations, the body-force-weighted method is applied in the pressure-based solution. Convergence criteria are set for the scaled residuals of the energy, velocity, and continuity equations. For discretization of momentum and energy, first-order upwind and third-order Monotone Upstream-centered Schemes for Conservation Laws (MUSCL) are applied.
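To make the governing non-dimensional groups concrete, the sketch below evaluates the Rayleigh, Prandtl, and Grashof numbers from fluid properties, enclosure length, and wall temperature difference, using the Boussinesq form Ra = gβΔTL³/(να). The property values are rough textbook figures for air near 300 K, included only as assumptions for illustration; they are not taken from the paper or the ANSYS database.

```python
def rayleigh_number(g, beta, dT, L, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha), Boussinesq approximation."""
    return g * beta * dT * L**3 / (nu * alpha)

def prandtl_number(nu, alpha):
    """Pr = nu / alpha."""
    return nu / alpha

# Rough illustrative properties for air near 300 K (assumed, not from the paper)
g = 9.81            # m/s^2, gravitational acceleration
beta = 1.0 / 300.0  # 1/K, ideal-gas thermal expansion coefficient
nu = 1.6e-5         # m^2/s, kinematic viscosity
alpha = 2.2e-5      # m^2/s, thermal diffusivity

L = 0.05            # m, hot-to-cold wall spacing (assumed)
dT = 80.0           # K, wall temperature difference

Ra = rayleigh_number(g, beta, dT, L, nu, alpha)
Pr = prandtl_number(nu, alpha)
print(f"Ra = {Ra:.3e}, Pr = {Pr:.2f}, Gr = {Ra / Pr:.3e}")
```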
Flow patterns at varying ARs
Numerical studies are carried out to investigate the fluid flow behavior of the different fluids at varying temperature gradients. The simulations are performed for all aspect ratios of the rectangular enclosures. It is observed that the aspect ratio has a significant impact on the average velocity of the fluid during natural convection. An increase in aspect ratio intensifies the average velocity of the fluid. Velocity contours at an iso-surface for the five fluids are presented in Fig. 2 at a low aspect ratio (0.125) and a temperature difference of 80 K. Contours with different scales are compared for a complete understanding of the flow patterns and velocity distributions generated in the various fluids. Comparative velocity contours at an iso-surface of the fluids during natural convection in enclosures at higher aspect ratios are presented in Fig. 3.
Heat transfer characteristics at varying ARs
The Nusselt numbers and heat transfer coefficients are estimated for the different fluids at varying aspect ratios and temperature differences. It is found that the heat transfer characteristics are significantly influenced by the height and length of the rectangular enclosures. Heat transfer coefficients are observed to be higher in the enclosures with high aspect ratios for all fluids, as shown in Fig. 4. However, the Nusselt number values are found to decrease as the aspect ratio of the enclosure is increased, as shown in Fig. 5. This behavior is attributed to the conduction-dominated heat transfer characteristics of the fluid at high aspect ratios. This phenomenon was described extensively by Ganguli et al. [18]. They concluded that better heat transfer performance is obtained in enclosures with low aspect ratios due to multicellular flow pattern formation.
Heat transfer properties of the different fluids are estimated and analyzed in the middle of the rectangular enclosure (0.025 m) by creating an iso-surface on the z-coordinate. The impact of the enclosure's size on the Nusselt number is extensively analyzed for the different fluids. It is observed that, besides the aspect ratio, the Nusselt number values are highly dependent on the temperature difference between the hot and cold walls and the thermophysical properties of the fluid. The effect of aspect ratio on the Nusselt number is shown in Fig. 5, while temperature distributions are shown in Fig. 6 and Fig. 7 for low and high aspect ratios, respectively. It is observed that heat transfer in rectangular enclosures with a low aspect ratio (0.5) falls in the boundary-layer flow regime, shown in Fig. 6. In this type of flow regime, most of the fluid exhibits conductive heat transfer and only the thin boundary layers near the wall surfaces show convective heat transfer. Similar behavior is observed for the five fluids at low aspect ratios during natural convection. The temperature distribution in the enclosure with an aspect ratio of 18 for the different fluids is shown in Fig. 7.
Correlation for Nusselt number
Extended simulations are performed for different aspect ratios than in previous work [21] to obtain complete data sets for the Nusselt number and the Rayleigh number for the five different fluids. Correlations are developed for the Nusselt number as a function of the Rayleigh number and aspect ratio. The proposed correlation for the different fluids is given in Eq. (1):

Nu = a1 (AR)^a2 (ln Ra)^a3    (1)

The correlation is proposed for the five different fluids with varying temperature differences between the hot and cold surface (20 K ≤ ΔT ≤ 100 K), Prandtl numbers (0.01 ≤ Pr ≤ 4500), and aspect ratios (8 ≤ AR ≤ 150). Correlation coefficients, Average Absolute Deviation (AAD), and the Sum of Squared Errors (SSE) for the different fluids are tabulated in Table 1. Parity plots are presented in Fig. 8 to examine the performance of the correlation.
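As a minimal sketch of how such a correlation might be used, the code below evaluates Eq. (1) for a given aspect ratio and Rayleigh number and converts the resulting Nusselt number into a heat transfer coefficient via h = Nu·k/L. The coefficients a1, a2, a3, the thermal conductivity, and the length are placeholders chosen purely for illustration; the fitted coefficients for each fluid are those reported in Table 1 of the paper.

```python
import math

def nusselt_from_correlation(ar, ra, a1, a2, a3):
    """Eq. (1): Nu = a1 * AR**a2 * (ln Ra)**a3."""
    return a1 * ar**a2 * math.log(ra)**a3

def heat_transfer_coefficient(nu_number, k, L):
    """Convert a Nusselt number to a heat transfer coefficient, h = Nu * k / L."""
    return nu_number * k / L

# Placeholder coefficients and properties: purely illustrative, not from Table 1.
a1, a2, a3 = 0.5, -0.25, 1.2
ar = 18          # aspect ratio of the enclosure
ra = 1.0e6       # Rayleigh number
k = 0.026        # W/(m K), rough thermal conductivity of air (assumed)
L = 0.05         # m, characteristic length of the enclosure (assumed)

nu = nusselt_from_correlation(ar, ra, a1, a2, a3)
h = heat_transfer_coefficient(nu, k, L)
print(f"Nu = {nu:.2f}, h = {h:.2f} W/(m^2 K)")
```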
|
2021-08-27T16:43:41.427Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "bb3c780b280a06d1693b773693649d54006ec337",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/63/e3sconf_icpeam2020_03010.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "93f7d7b6d972a05ce6edae191e05a34827a96b74",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
8985671
|
pes2o/s2orc
|
v3-fos-license
|
Life without white fat: a transgenic mouse.
We have generated a transgenic mouse with no white fat tissue throughout life. These mice express a dominant-negative protein, termed A-ZIP/F, under the control of the adipose-specific aP2 enhancer/promoter. This protein prevents the DNA binding of B-ZIP transcription factors of both the C/EBP and Jun families. The transgenic mice (named A-ZIP/F-1) have no white adipose tissue and dramatically reduced amounts of brown adipose tissue, which is inactive. They are initially growth delayed, but by week 12, surpass their littermates in weight. The mice eat, drink, and urinate copiously, have decreased fecundity, premature death, and frequently die after anesthesia. The physiological consequences of having no white fat tissue are profound. The liver is engorged with lipid, and the internal organs are enlarged. The mice are diabetic, with reduced leptin (20-fold) and elevated serum glucose (3-fold), insulin (50- to 400-fold), free fatty acids (2-fold), and triglycerides (3- to 5-fold). The A-ZIP/F-1 phenotype suggests a mouse model for the human disease lipoatrophic diabetes (Seip-Berardinelli syndrome), indicating that the lack of fat can cause diabetes. The myriad of consequences of having no fat throughout development can be addressed with this model.
White adipose tissue (WAT) is the major organ for regulated storage of triglycerides for use as metabolic energy. WAT helps control energy homeostasis, including food intake, metabolic efficiency, and energy expenditure, via its secreted hormone, leptin, and possibly additional unknown hormones. The quantity of body fat varies widely in mammals, ranging from 2% to >50% of body mass, typically from 10% to 20% in mice and humans. Much of this variability can be observed within a single individual, highlighting the delicate balance of factors controlling fat deposition. The huge variation in fat mass is unlike that of any other organ in the body and is determined by both an individual's genetic background and environmental factors including diet and physical activity (Comuzzie and Allison 1998;Hill and Peters 1998). Excess body fat, or obesity, is a major health problem, particularly in America, increasing the risk of diabetes, hypertension, and coronary artery disease (Thomas 1995). The mechanisms by which obesity causes these diseases, however, are unclear. To understand better the contribution of adipose tissue to diabetes and metabolism, it would be valuable to examine a mouse with no adipose tissue. To this end, we produced a transgenic mouse with essentially no white adipose tissue and examined the contribution of WAT to energy metabolism, reproductive function, and disease susceptibility.
Mutant mice with either increased or decreased levels of WAT have been reported. For example, two mutations that disrupt signaling between WAT and the brain (ob/ob and db/db, affecting leptin and its receptor, respectively) cause an increase in WAT amount leading to diabetes (Coleman 1978; Zhang et al. 1994; Chen et al. 1996). These mice have increased food intake and decreased physical and sympathetic nerve activity, all contributing to obesity. Adipose-specific expression of a diphtheria toxigene resulted in mice with either a severe phenotype including neonatal death or a mild phenotype, characterized by resistance to induced obesity or delayed loss of WAT at 10 months (Ross et al. 1993; Burant et al. 1997). These results suggest that WAT may be an essential organ for life. At present, there are no mice, from either knockout or transgenic technologies, that are devoid of WAT throughout development.
We have used tissue-specific expression of a dominant-negative protein transgene to ablate WAT in mice. Rather than use a toxigene, we chose instead to express a protein that inhibits the function of transcription factors critical for fat development. Adipocyte growth and differentiation (Wu et al. 1996; Mandrup and Lane 1997) follow patterns of sequential B-ZIP protein expression, including AP-1 (Distel et al. 1987; Stephens et al. 1992) and C/EBP family proteins (Christy et al. 1989; McKnight et al. 1989). C/EBPα overexpression in cell culture caused adipose differentiation (Freytag et al. 1994; Lin and Lane 1994). Knockout mice lacking C/EBPα die shortly after birth with no discernable WAT (Wang et al. 1995). The double knockout of C/EBPβ and C/EBPδ also results in reduced fat mass (Tanaka et al. 1997), all suggesting that C/EBP family proteins are important for fat development.
In this study, we used 7.6 kb of the aP2 enhancer/promoter to target adipocyte-specific transgene expression of a dominant-negative protein termed A-ZIP/F. This dominant-negative protein inhibits the DNA binding and function of B-ZIP proteins in both the C/EBP (Vinson et al. 1993) and AP-1 families of transcription factors. The resulting adult transgenic mice have essentially no white fat and dramatically reduced amounts of brown fat that is inactive. Here, we describe the profound physiological consequences of having no WAT and introduce a mouse model of the human disease lipoatrophic diabetes, a genetic or autoimmune condition exhibiting an absence of WAT (Foster 1994; Seip and Trygstad 1996).
A-ZIP/F is a dominant-negative protein that inhibits the DNA binding of both C/EBP and AP-1
We designed a protein, A-ZIP/F, that inhibits the action of B-ZIP transcription factors by forming stable heterodimers (Fig. 1A). The dominant-negative A-ZIP/F is a 104-amino-acid protein consisting of an amino-terminal 9-amino-acid Flag epitope, a 13-amino-acid linker, a 21-amino-acid designed acidic amphipathic helix, and a 61-amino-acid designed leucine zipper termed ZIP/F (Vinson et al. 1993; Krylov et al. 1995). A novel aspect of this dominant-negative protein is the acidic region that replaces the B-ZIP basic region. The acidic region heterodimerizes with endogenous B-ZIP basic regions, forming a coiled-coil extension of the leucine zipper, thus stabilizing the complex by 3-5 kcal/mole. This stabilization causes the dominant-negative protein to abolish B-ZIP DNA binding (Krylov et al. 1995; Olive et al. 1997; Ahn et al. 1998).
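To give a sense of what a 3-5 kcal/mole stabilization means for the dimerization equilibrium, the short calculation below converts a free-energy difference into a relative equilibrium constant via exp(ΔΔG/RT). This is only an illustrative back-of-the-envelope estimate under an assumed temperature; it is not an analysis from the paper.

```python
import math

R = 1.987e-3  # kcal/(mol K), gas constant
T = 310.0     # K, roughly physiological temperature (assumed)

def fold_stabilization(ddg_kcal_per_mol, temperature=T):
    """Relative equilibrium constant for ddG of extra binding free energy."""
    return math.exp(ddg_kcal_per_mol / (R * temperature))

for ddg in (3.0, 4.0, 5.0):
    print(f"ddG = {ddg} kcal/mol -> ~{fold_stabilization(ddg):.0f}-fold stabilization")
```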
The leucine zipper region determines the dimerization specificity of the dominant-negative proteins. The ZIP/F leucine zipper is based on C/EBPα, but changes four amino acids in the g and e positions to glutamate. The ZIP/F zipper is reminiscent of the Fos zipper (O'Shea et al. 1989, 1992), as both contain a large number of glutamates in these positions. Using electrophoretic mobility-shift experiments, we examined the ability of A-ZIP/F to inhibit DNA binding of different B-ZIP transcription factors (Fig. 1B). A-ZIP/F, at an equimolar concentration, inhibited the DNA binding of the C/EBPα homodimer. A-ZIP/F also inhibited the DNA binding of AP-1, a heterodimer of the Fos and Jun B-ZIP domains. This promiscuous inhibition becomes zipper-specific if the acidic extension is placed onto the amino terminus of either the C/EBP or Fos leucine zipper (data not shown). A-ZIP/F does not inhibit the DNA binding of all B-ZIP factors. CREB DNA binding is not blocked, even at 100-fold molar excess (Fig. 1B).

(Figure 1 legend: Carboxy-terminal to the activation region is the basic DNA-binding domain, which in the absence of DNA is unstructured. This is followed by the α-helical leucine zipper dimerization domain. The dashed lines represent physical interactions between the leucines in the d positions that are critical for forming a coiled-coil dimer structure. (Middle) C/EBP homodimer bound to DNA. This binding causes the basic region to form an α-helical extension of the leucine zipper, which enhances stability of the dimer by ∼100-fold. (Right) Heterodimerization between a C/EBPα monomer and an A-ZIP/F monomer. The carboxy-terminal F-zipper interacts specifically with the C/EBPα (or Jun) leucine zipper, whereas the amino-terminal acidic extension forms a coiled coil with the basic domain of B-ZIP proteins. This heterodimeric coiled-coil structure is more stable than C/EBPα bound to DNA by a further ∼100-fold. (B) A-ZIP/F inhibits the DNA binding of C/EBPα (left) and Fos+Jun (middle), but not CREB (right) in gel-shift assays. (Lanes 1) No protein; (lanes 2) 10 nmoles of the indicated B-ZIP domain, which forms a complex with the labeled oligonucleotide probe; (lanes 3, 4, 5) B-ZIP domains as well as 1-, 10-, or 100-fold molar equivalent of recombinant A-ZIP/F protein, respectively. Equimolar A-ZIP/F abolishes the DNA binding of both C/EBPα and Fos+Jun, but has no effect on CREB DNA binding, even in 100-fold molar excess.)
Transgenic mouse lines
We expressed A-ZIP/F using a fat-specific promoter, reasoning that disruption of the function of C/EBP and JUN family members might disrupt adipocyte growth and differentiation as both have been implicated in its development. We used 7.6 kb of the enhancer/promoter to express Flag-tagged A-ZIP/F protein ( Fig. 2A). Three transgenic FVB/N mice were obtained from 30 live births. One did not transmit the transgene, one did not express the transgene, and the remaining expressing line, designated A-ZIP/F-1, is characterized in this report. The A-ZIP/F-1 line carries ∼16 copies of the transgene. Hemizygous females had reduced fertility and litter size and almost no pups survived until weaning. The A-ZIP/F-1 line is propagated by the breeding of hemizygous males with FVB/N females. The males show reduced fertility, but produce litters of normal size (9.2 ± 0.3 pups, mean ± S.E.M.; n = 29 litters). Homozygous mice have not been obtained, and all studies were performed with hemizygous mice.
A-ZIP/F mRNA and protein expression in the A-ZIP/F-1 line
Dissections of adult A-ZIP/F-1 mice revealed essentially no WAT and a severely reduced amount of brown adipose tissue (BAT; Fig. 2B and below). A-ZIP/F mRNA was restricted to the residual BAT and was of the predicted size of 1.3 kb (Fig. 3A). Muscle and heart contained traces of A-ZIP/F mRNA at 1% and 3% of the BAT level, respectively. We suggest that the muscle and heart signals are from residual adipose tissue associated with these organs as endogenous adipocyte-specific aP2 mRNA was present at similar levels in the muscle and heart samples. The absence of WAT precluded our ability to determine A-ZIP/F expression in WAT. A-ZIP/F mRNA signal is thus limited to those cells that normally express aP2, namely adipocytes (Bernlohr et al. 1985).
A-ZIP/F protein expression was measured by monitoring of the amino-terminal Flag epitope. Western blotting revealed a band of the expected 12-kD size in transgenic BAT but not in other tissues (data not shown) or wild-type BAT (Fig. 3B). Quantitation, with a recombinant Flag standard containing the Fos leucine zipper, indicated that the residual adipose tissue contains ∼2 million A-ZIP/F protein molecules per cell (see Materials and Methods).
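As a rough sketch of how copies per cell can be estimated from a quantitative immunoblot, the calculation below converts an amount of protein calibrated against a recombinant standard into molecules per cell using Avogadro's number. The input quantities (picomoles detected, cells per lane) are hypothetical placeholders, not the values used in the study.

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_per_cell(pmol_detected, cells_in_lane):
    """Molecules per cell from the amount of protein quantified in one lane."""
    molecules = pmol_detected * 1e-12 * AVOGADRO
    return molecules / cells_in_lane

# Hypothetical example: 1 pmol of A-ZIP/F detected in a lane loaded with
# lysate from ~3e5 adipocytes (placeholder numbers for illustration).
print(f"~{molecules_per_cell(1.0, 3.0e5):.2e} molecules per cell")
```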
Residual BAT in A-ZIP/F mice
The residual adipose tissue in A-ZIP/F-1 mice is found at the anatomical sites expected for BAT (interscapular, neck, parasternal, and renal hilum). As the A-ZIP/F-1 animal ages, BAT shrinks. BAT performs thermogenesis and is most prominent in small mammals and the young of larger mammals (Nicholls and Locke 1984; Himms-Hagen and Ricquier 1998). Active wild-type BAT consists of mitochondria-rich eosinophilic cells containing multiple lipid droplets (Fig. 4A). In contrast, wild-type WAT cells are typically larger, contain few mitochondria and a single large lipid droplet, pushing the nucleus to the cell periphery. The A-ZIP/F-1 adipose tissue showed sparse eosinophilic staining, a single lipid droplet per cell, and peripheral nuclei, resembling WAT. However, this is also the appearance of dormant wild-type BAT, for example as seen in ob/ob mice (Trayhurn 1986). Thus A-ZIP/F-1 adipose tissue resembles inactive BAT. Uncoupling protein 1 (UCP1) is a molecular marker used to distinguish BAT from WAT (Bouillaud et al. 1985; Jacobsson et al. 1985). A-ZIP/F-1 adipose tissue contains low levels of UCP1 mRNA (Fig. 4B), as is found in inactive BAT (Bouillaud et al. 1985; Jacobsson et al. 1985). From the anatomical position, histology, and UCP1 expression, we conclude that the residual adipose tissue in the A-ZIP/F-1 mice is inactive BAT.
The residual A-ZIP/F-1 BAT contains 2 million molecules per cell of the dominant-negative protein that neutralizes C/EBP and Jun B-ZIP family members. UCP1, which is regulated by C/EBPs (Yubero et al. 1994), is dramatically underexpressed in A-ZIP/F-1 fat tissue. In contrast, other adipose genes reported to require C/EBP family members for efficient expression, including leptin, aP2, and PPARγ (Christy et al. 1989; He et al. 1995; Clarke et al. 1997; Mason et al. 1998), are not underexpressed (Fig. 4B). The complex response of BAT to the presence of the A-ZIP/F dominant-negative protein highlights the differential response of eukaryotic promoters to the transcription environment.
The developmental progress of the A-ZIP/ F-1 phenotype
During mouse development, BAT first appears at embryonic day 15 (Houstek et al. 1988), while WAT appears at birth (Ailaud and Hauner 1998). The absence of WAT and reduced BAT in adult A-ZIP/F-1 mice is attributable either to the lack of cell birth or to cell death. To address this question, mice at 17 days of gestation, and at 1, 3, 7, and 14 days postnatum were examined histologically. Eighteen hours after birth, control animals had abundant interscapular BAT and subcutaneous WAT (Fig. 5).
Transgenic A-ZIP/F-1 mice had about half the BAT of controls, and this BAT appeared similar to active wild-type BAT. WAT was not visible in the newborn A-ZIP/F-1 mice. We did not see any apoptosing cells at the five time points we examined, suggesting that WAT cells are never born.

(Figure 4 legend: (B) Expression of C/EBP-regulated genes in adipose tissue. Three separate Northern blots containing total RNA from A-ZIP/F-1 interscapular BAT and from wild-type interscapular BAT and epididymal WAT were prepared and hybridized with DNA probes for UCP1, leptin, C/EBPα, and PPARγ. The UCP1, leptin, and C/EBPα blots contain 10 µg RNA per lane and the PPARγ blot 5 µg of RNA per lane. Ethidium bromide staining is shown as a loading control.)

(Figure 3 legend: (A) Total RNA was isolated from wild-type BAT and the 11 indicated A-ZIP/F-1 tissues (interscapular fat is BAT), electrophoresed, blotted, and probed successively for A-ZIP/F and aP2. All lanes contain 10 µg of RNA, except for the interscapular fat, which contains 5 µg. Quantitation by PhosphorImager showed that A-ZIP/F expression in the heart lane is 3% of the level in BAT. Ethidium bromide staining confirmed RNA loading (bottom). (B) Anti-Flag antibody was used to probe expression of the Flag-tagged A-ZIP/F protein in BAT of A-ZIP/F-1 and littermate wild-type mice. (DN) Mobility of the A-ZIP/F molecule. A recombinant standard (20, 5, 2, 0.5, and 0.2 pmole) was run for quantitation. Molecular mass is indicated on the right.)
Enlarged viscera in A-ZIP/F-1 adult mice
Adult A-ZIP/F-1 mice are distinguishable from wild-type littermate controls by their rough-appearing coat and increased abdominal girth. Dissections and histologic examinations performed on adult A-ZIP/F-1 mice aged 7 and 24 weeks and sex-matched littermates found no visible WAT (including subcutaneous, gonadal, perinephric, and mesenteric; Figs. 2B and 6A), although microscopically a few adipocytes were present in the region of the epididymal fat pads (Fig. 4, bottom). The epididymal fat in wild-type mice at 20 weeks averaged 2.8% of body weight, while none was visible in A-ZIP/F-1 animals (Table 1). The interscapular BAT in adult A-ZIP/F-1 mice, although present, was reduced, weighing only 11% of that of control mice (Table 1).
Adult A-ZIP/F-1 mice grow heavier than their littermates, even without fat, which is typically 15% of body weight (Salmon and Flatt 1985;West et al. 1992). We measured organ weights in 20-week-old mice to characterize this difference (Table 1). The liver accounts for less than half of the increase with other viscera (including the kidneys, heart, and spleen) contributing to the increase.
The livers in adult A-ZIP/F-1 mice were enlarged 2.1-fold and were lighter in color and density (Table 1; Fig. 6B). Histological examination showed oil red O-stained lipid droplets of various sizes within the liver. The lipid accumulated in a zone 3/centrilobular pattern with both micro- and macrosteatosis (Fig. 6C). No fibrosis, cirrhosis, or inflammatory infiltrates were associated with the steatosis. Serum chemistries confirmed that the mice did not have acute hepatitis, with normal-range alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, and bilirubin, total and direct. Quantitation of triglycerides showed that the A-ZIP/F-1 mouse liver contains 6.8-fold more triglycerides compared with age-matched wild-type littermates (58 mg/gram liver in A-ZIP/F-1; 18 mg/gram liver in wild type). Quantitative immunoblot analysis showed increases in the enzymes involved in both lipogenesis and lipolysis. Of the 19 species of fatty acid-metabolizing enzymes (Aoyama et al. 1998) examined, only the protein levels of three peroxisomal enzymes (peroxisomal bifunctional protein, composed of hydratase and 3-hydroxyacyl-CoA dehydrogenase; D-type peroxisomal bifunctional protein; and peroxisomal 3-ketoacyl-CoA thiolase) were increased about threefold. These changes, however, should have minimal effects on lipolysis, because neither peroxisomal acyl-CoA oxidase, the rate-limiting enzyme in peroxisomal fatty-acid β-oxidation (Aoyama et al. 1994), nor the mitochondrial enzymes that catalyze long-chain fatty-acid β-oxidation are affected in the A-ZIP/F-1 mice. On the other hand, the levels of three rate-limiting fatty-acid synthesis-related enzymes (acetyl-CoA carboxylase, fatty-acid synthetase, and ATP-citrate lyase) increased 2.9-, 2.9-, and 3.8-fold, respectively. These increases in rate-limiting enzymes contribute greatly to enhanced lipogenesis, suggesting a mechanism for the fatty livers of A-ZIP/F-1 mice.
The pancreatic islets were hypertrophic and hyperplastic (Fig. 6D). Histological immunostaining for insulin demonstrated a large increase in the number of β cells. Foam cells (phagocytic cells loaded with lipid) were observed in the lungs of some animals.
Early death of A-ZIP/F-1 mice
We examined the survival and growth of the A-ZIP/F-1 mice to determine the consequences of having no fat throughout development. At birth, transgenic mice account for half of the pups, indicating no prenatal death. However, by weaning at 3-4 weeks, only 30% (62 of 210) of the surviving pups were transgenic, females outnumbering males (1.23 to 1.00). Survival after weaning was also reduced (Fig. 7). The A-ZIP/F-1 mice were susceptible to death after anesthesia, with 25% mortality (5 of 20) after Avertin anesthesia (0.375 mg/gram intraperitoneally), compared to no deaths in the controls.
At birth, the A-ZIP/F-1 mice are of normal size, but by 1 week, they are only half the size of their wild-type littermates and at weaning are 80% of wild-type weight (Fig. 7). By weeks 8-11, they weigh the same as littermates and eventually weigh more. Adult A-ZIP/F-1 mice were polyphagic, eating 1.7 times more than littermate controls (Table 2). When A-ZIP/F-1 mice were fed the same amount of food as their littermates, they lost 19% of their body weight in 2 days. The transgenic mice had polyuria, with greatly increased urine output. A-ZIP/F-1 mice also showed polydipsia, drinking four times more than their littermates (32 ml/day, compared to 7 ml/day; Table 2). The polyphagia, polyuria, and polydipsia observed are clinical signs suggestive of diabetes.

(Table 1 footnote: Data are mean ± S.E.M. with five mice in each group. Animals were 20-week-old males from two litters. The body mass index (BMI) is the weight/(length squared). The organ weights were measured as a percentage of the animal's body weight. For epididymal (Epi) WAT, we estimate that 10 mg of tissue is the limit of detection. Ratio is the A-ZIP/F-1 value expressed as a fraction of the wild-type value. Probabilities (P) are from two-tailed t-test.)
Adult A-ZIP/F-1 mice are diabetic
Glucose levels were measured to determine whether the adult mice are indeed diabetic, as suggested by the clinical symptoms. Hyperglycemia, the defining feature of diabetes mellitus, is present, with serum glucose increased 3.4-fold to 788 mg/dl in males and 3.1-fold, to 670 mg/dl, in females (Fig. 8). Glucose was also present in the urine (4+ glucosuria). The degree of hyperglycemia was severe, at the upper end of the glucose levels observed in murine models of diabetes. As expected from the hyperglycemia, the A-ZIP/F-1 mice were hyperosmotic (356 vs. 334 milliosmolar). Hyperglycemia can be accompanied by either low or high blood insulin levels, depending on whether the pancreatic islet β cells stop producing insulin (type 1 or juvenile diabetes) or target tissues fail to respond to high insulin levels (type 2 or non-insulin-dependent diabetes). In A-ZIP/F-1 mice, the serum insulin levels were massively increased, 58-fold in males and 442-fold in females (Fig. 8). These insulin levels are higher than typically found in other models of diabetes, such as the ob/ob and db/db mice (Coleman 1978). We measured triglyceride, free fatty acid, and β-hydroxybutyrate concentrations in the plasma of A-ZIP/F-1 and control mice. Triglycerides, which are carried in the circulation as lipoprotein particles, were elevated 3.0- and 5.3-fold in males and females, respectively (Fig. 8). Cholesterol levels were minimally elevated [188 ± 14 mg/dl vs. 167 ± 12 mg/dl (n = 5 for both)]. Free fatty acids are a circulating energy source. A-ZIP/F-1 mice have a 1.5- to 2.5-fold elevation in their free fatty acid levels in the fed state (Fig. 8). β-Hydroxybutyrate, a ketone, is produced by the liver from free fatty acids at times of low glucose, low insulin, and high glucagon levels. Ketones are used as an energy source, particularly by the brain. Plasma β-hydroxybutyrate levels were unchanged in the A-ZIP/F-1 mice (Fig. 8). Urine ketones were absent (data not shown), confirming the lack of ketosis in these mice despite the elevated free fatty acid levels. Taken together, these data demonstrate that A-ZIP/F-1 mice are hyperglycemic with severe insulin resistance, very high insulin levels, and without elevated ketones. These are the hallmark characteristics of type 2 diabetes.
Development of the diabetic phenotype
The A-ZIP/F-1 line is remarkable for its lack of WAT starting from birth. To see whether diabetes is present in young mice, we measured the glucose, insulin, and free fatty acid levels weekly during the first 5 weeks (Fig. 9). Remarkably, the insulin levels were already 30-fold elevated at week 1. Free fatty acids were also increased by the first week, as was liver size (8.1% of body weight, compared with 3.5% in littermates). In contrast, glucose levels were normal at weeks 1 and 2, barely elevated at week 3, and only reached diabetic levels at 4 weeks of age. These results demonstrate that the hyperinsulinemia and increased free fatty acids precede the elevation in blood glucose by 3 weeks.

(Table 2 footnote: Food and water intake were measured in female A-ZIP/F-1 and littermate mice using six mice per group. Mice (9-15 weeks old) were housed three/cage, and food consumption was measured daily for 10 days. Data are mean ± S.E.M. Water intake was measured in a separate experiment with individually housed mice. Ratio is the A-ZIP/F-1 value expressed as a fraction of the wild-type value. Probabilities are from two-tailed t-test.)
A-ZIP/F-1 mice have reduced leptin
Leptin is a hormone secreted by both white and brown adipose cells in proportion to fat mass. It regulates energy homeostasis by decreasing food intake and increasing activity, sympathetic tone, energy expenditure, and insulin sensitivity (for review, see Flier 1997). Circulating leptin levels in A-ZIP/F-1 mice were measured by Western blotting. In two pooled samples, leptin was reduced ∼10-fold to 0.6 and 1.0 ng/ml. Sixty to seventy percent of the A-ZIP/F-1 leptin was in a high molecular weight complex, as compared with 30% to 40% in control mice (Gavrilova et al. 1997; data not shown). Thus, the free, active leptin concentration in A-ZIP/F-1 mice is only ∼5% of wild-type mice. This leptin is produced in BAT (Fig. 4B).
Leptin is important for male and female sexual maturation and fertility (Chehab et al. 1996). Unlike ob/ob mice, which are totally deficient in leptin and remain sexually immature, A-ZIP/F-1 mice become fertile, but are not as fertile as wild-type mice. We measured the effect of leptin treatment on A-ZIP/F-1 mice, to investigate whether leptin deficiency accounts for their decreased fecundity. During an 8-day control interval, five male A-ZIP/F-1 mice, each housed with two wild-type females, produced a total of only two plugged females. Leptin treatment of the A-ZIP/F-1 mice did not alter the rate, with only two more plugged females during 14 days of treatment. Testosterone levels in A-ZIP/F-1 mice (with or without leptin treatment) were comparable with littermates. These data suggest that the poor fecundity of the A-ZIP/F-1 line is attributable to more than leptin deficiency.
A-ZIP/F-1 mice are unable to respond to fasting
Because the major function of WAT is to store energy for lean times, we studied the effect of fasting on A-ZIP/F-1 mice. During a 24-hr fast (Fig. 10), the A-ZIP/F-1 mice lost more weight than their littermates (6.7 vs. 3.5 grams). The glucose dropped slightly in the controls (210 to 134 mg/dl), but plummeted in the A-ZIP/F-1 animals (903 to 124 mg/dl) to wild-type levels. A drop in insulin drives the metabolic adaptation to fasting. Insulin levels in the A-ZIP/F-1 mice dropped from 359 to <3 ng/ml during the fast. Thus the pancreatic β cells, which are secreting very large amounts of insulin in the fed state, still retain their ability to decrease insulin secretion in response to decreased glucose.
Another striking result was the change in the free fatty acid levels during fasting. During fasting, WAT triglycerides are hydrolyzed and released as free fatty acids. The free fatty acids are metabolized to ketone bodies, such as β-hydroxybutyrate, by the liver. In wild-type mice before fasting, the free fatty acids were 0.97 mM (higher than usual as a result of starting the fast at a relatively late time during the light cycle) and appropriately increased to 1.54 mM with fasting. In contrast, the A-ZIP/F-1 mice did not increase their free fatty acids, but paradoxically dropped them by 0.84 mM (from 1.14 to 0.30 mM). We attribute this drop to the inability of the A-ZIP/F-1 mice to replenish their circulating free fatty acids as a result of the lack of adipose stores. Indeed, the control mice increased their β-hydroxybutyrate by 394%. In contrast, the A-ZIP/F-1 mice were unable to do this. Taken together, these data demonstrate that A-ZIP/F-1 mice are unable to respond appropriately to fasting, which we attribute to their lack of sufficient mobilizable lipid stores.
Discussion
This report presents a novel dominant-negative approach that inhibits gene expression. This approach was used to ablate fat tissue growth and differentiation. The dominant-negative protein has properties not expected from simple deletions of existing genes or from genetic screens. The profound adverse physiological consequences of having no WAT and severely reduced BAT throughout mouse development are described.

(Figure 9 legend: Development of diabetes in A-ZIP/F-1 mice after birth. Glucose, insulin, and free fatty acids were measured in A-ZIP/F-1 (filled circles) and littermate (open circles) control mice. Samples were serum from exsanguination for weeks 1-3 and tail vein plasma for weeks 4 and 5. Data are mean ± S.E.M., n = 5-6 per group.)
Other lean mouse models exist. While the mechanism is obscure in some (Luetteke et al. 1993; Benson and Chada 1994), a number probably have alterations in adipose tissue formation or function (Kozak et al. 1991, 1996; Ross et al. 1993; Katz et al. 1995; Kopecky et al. 1995; Levak-Frank et al. 1995; Wang et al. 1995; Cummings et al. 1996; Soloveva et al. 1997). In the experiments most relevant to ours, the aP2 enhancer/promoter was used to drive adipose expression of the diphtheria toxin A chain. High-level expression caused neonatal death, while lower levels produced loss of fat in aging mice (Ross et al. 1993; Burant et al. 1997). None of the previous models produced viable mice lacking WAT throughout development, suggesting that WAT is essential for life. The viability of the A-ZIP/F-1 mice, however, demonstrates that WAT is not essential for life.
Dominant-negative A-ZIP/F inhibits AP-1 and C/EBP transcription factors
WAT growth and differentiation are regulated by sequence-specific DNA binding proteins in the AP-1 and C/EBP B-ZIP families. AP-1 factors promote precursor cell proliferation (Distel et al. 1987; Stephens et al. 1992). C/EBP factors mediate adipocyte differentiation via a sequential pattern of expression beginning with C/EBPδ and C/EBPβ and followed by C/EBPα (Mandrup and Lane 1997). The designed dominant-negative protein, A-ZIP/F, contains a promiscuous leucine zipper, termed ZIP/F, which heterodimerizes with either Jun or C/EBP family members. We do not know which of these families is critical for the phenotype. More selective B-ZIP dominant-negative proteins that inhibit only C/EBP or Jun families have been designed and transgenic mice generated. Two A-C/EBP lines have an adipocyte phenotype milder than A-ZIP/F-1 mice, suggesting that the inhibition of both Jun and C/EBP function is critical for the severe phenotype of the A-ZIP/F-1 mice.
Ablation of adipose tissue in A-ZIP/F-1 mice
The adult A-ZIP/F-1 mice contain inactive BAT that expresses 2 million copies of the A-ZIP/F protein per cell. Among C/EBP regulated genes, UCP1 expression is diminished, but leptin, aP2, and PPARγ are expressed at unaltered levels. These results suggest that not all C/EBP-driven promoters respond similarly to the dominant-negative action of A-ZIP/F. We suggest that this difference is attributable to the temporal lag or lead in the switching of these promoters compared with the aP2 promoter, which drives production of the A-ZIP/F mRNA. Promoters that are presumably switched on later in development than aP2 are more affected than those switched on earlier.
A similar temporal mechanism may help us understand why WAT is ablated in the A-ZIP/F-1 mice while BAT is merely reduced in amount and inactive. Appearing earlier in development, BAT grows to a point where its further maturation is interrupted by newly synthesized A-ZIP/F molecules, while WAT, appearing later in development, is affected immediately by A-ZIP/F molecules, precluding proliferation. Another possibility is that A-ZIP/F-1 mice might turn off BAT as a homeostatic response to an altered metabolic status. In this case, the residual tissue might approximate the response of normal BAT to an abnormal environment. The absence of WAT precludes a direct comparison of the BAT and WAT transcriptional milieus in A-ZIP/F-1 mice.

Figure 10. Effect of fasting on A-ZIP/F-1 and wild-type mice. Male A-ZIP/F-1 and littermate wild-type controls, aged 23 weeks, were fasted for 24 hr (1:00 pm to 1:00 pm). Tail vein blood was obtained at the start (fed, solid bars) and conclusion (fasted, open bars) of the fast. Serum glucose, insulin, free fatty acids, and β-hydroxybutyrate were measured as detailed in Materials and Methods. Insulin was not measured in the wild-type mice and was undetectable in the fasted A-ZIP/F-1 mice (detection limit: 3 ng/ml). Data are mean ± S.E.M., n = 5 per group. (*) Difference at P ≤ 0.001, except for P = 0.02 for free fatty acids.
Physiology of the A-ZIP/F-1 line
The A-ZIP/F-1 phenotype includes poor fertility, organomegaly, elevated glucose, insulin, free fatty acids, and triglycerides, low leptin, and early death. The phenotype is attributed to the lack of WAT, disrupting energy storage and communication among the tissues of the body, including signals both from WAT to the rest of the body and from the rest of the body to WAT. Without WAT to take up and store free fatty acids/triglycerides derived mainly from diet and hepatic production, blood levels become elevated. Increased circulating free fatty acids have been proposed to cause increased blood glucose via the glucose-fatty acid cycle, in which muscle uses free fatty acids as a fuel in preference to glucose (Randle et al. 1963; McGarry 1992). The resulting increased blood glucose levels, in turn, lead to more insulin secretion and pancreatic islet β-cell hypertrophy and hyperplasia. Free fatty acids, known ligands for transcription factors, might also cause insulin resistance via transcriptional mechanisms. The resulting extraordinarily high insulin levels are sufficient for cross-talk activation of IGF-1 receptors (Smith et al. 1988). IGF-1 receptor stimulation plausibly explains the enlarged organs and continued adult growth of the A-ZIP/F-1 mice.
The diabetes of the A-ZIP/F-1 mice is notable for its lack of ketosis, with low β-hydroxybutyrate despite elevated free fatty acids. One explanation is that the high insulin levels communicate a fed status to the liver. Thus, enough of this insulin signal is transmitted to the hepatocytes to prevent ketogenesis. Direct studies of insulin signaling in A-ZIP/F-1 liver and muscle are in progress to determine whether there are different levels of insulin resistance in these tissues. Another possible explanation is that the livers in A-ZIP/F-1 mice are geared more toward lipogenesis than lipolysis, which might prevent ketogenesis even in the face of a sharp drop in blood glucose levels. It is interesting to note that the three rate-limiting fatty acid-synthesizing enzymes whose levels are elevated in A-ZIP/F-1 mice are both transcriptionally and post-translationally regulated by insulin (Fukuda et al. 1997a,b; Gamble and Lopaschuk 1997).
Lack of adipose tissue in A-ZIP/F-1 mice causes leptin deficiency. Low leptin levels contribute to the insulin resistance, since leptin-deficient ob/ob mice are diabetic (Coleman 1978). Leptin also increases muscle glucose utilization (Kamohara et al. 1997), enhances insulin's inhibition of hepatic glucose production (Rossetti et al. 1997), and is required for sexual maturation and fertility (Chehab et al. 1996). However, injection of leptin did not restore A-ZIP/F-1 fecundity, suggesting that fat is also needed for reproductive function via leptin-independent mechanisms.
The inability of the mice to survive fasting confirms the role of WAT in supplying energy (as free fatty acids) for utilization during starvation. Recent experiments show that the A-ZIP/F-1 mice run out of energy reserves remarkably rapidly (O. Gavrilova, unpubl.). BAT thermogenesis is also severely compromised in A-ZIP/F-1 mice (O. Gavrilova, unpubl.).
A-ZIP/F-1 mice are a model for lipoatrophic diabetes
The A-ZIP/F-1 phenotype is strikingly similar to that of humans with severe lipoatrophic diabetes (particularly congenital generalized lipodystrophy, the Seip-Berardinelli syndrome). The human lipodystrophies are a heterogeneous group of disorders characterized by decreased fat mass, insulin resistance, and elevated triglycerides (Foster 1994; Seip and Trygstad 1996). Both genetic and acquired forms (presumably autoimmune) are known. Lipodystrophic diabetes was recently recognized as a complication of protease inhibitor treatment of HIV infection (Carr et al. 1998).
The similarities of the A-ZIP/F-1 phenotype and lipoatrophic diabetes include nonketotic diabetes, hypertriglyceridemia, hepatic steatosis, alveolar foam cells, organomegaly, and hyperphagia (Table 3). Humans with severe lipoatrophic diabetes often die from complications of diabetes. A-ZIP/F-1 mice also die prematurely, possibly as a result of high glucose causing a hyperosmolar state. In addition to the similarities above, both lipoatrophic humans and A-ZIP/F-1 mice have an increased metabolic rate, which we attribute to their organomegaly (O. Gavrilova, unpubl.). A-ZIP/F-1 lipoatrophic diabetes is a consequence of the absence of fat, suggesting that a lack of fat is causative for the human disease.
Lipoatrophy is paradoxical, with the lack of fat causing diabetes. The usual scenario is that obesity causes type 2 diabetes. Thus it is important to examine the similarities between lipoatrophic and type 2 diabetes. As postulated above, increased free fatty acids may cause lipoatrophic diabetes and a role for free fatty acids has also been proposed in type 2 diabetes (McGarry 1992). Alternatively, lack of WAT in A-ZIP/F-1 mice may cause diabetes by a different mechanism from that which occurs in obese mice. The fact that the increased glucose in A-ZIP/F-1 mice first appears upon weaning suggests that it may be correlated with the metabolic changes that occur at this time. Availability of the A-ZIP/F-1 model will allow examination of these questions. The effect of limiting food intake on the A-ZIP/F-1 phenotype will be important to examine.
Beneficial effects of adipose tissue
Obesity is a major cause of death in America and is a significant risk factor for type 2 diabetes, hypertension, and coronary artery disease (Thomas 1995). Currently, a major health care goal is reduction of the prevalence of obesity in the human population. However, the A-ZIP/ F-1 line indicates that adipose tissue has some beneficial effects, particularly on growth, reproductive function, glucose metabolism, and the ability to tolerate fasting.
Transgenic mice
The plasmid directing fat-specific expression of A-ZIP/F was constructed by use of standard cloning procedures (Ausubel 1997) as follows. The 7621-bp aP2 gene enhancer/promoter (Spiegelman et al. 1983; Bernlohr et al. 1984; Cook et al. 1988) was obtained from Dr. M.D. Lane (Johns Hopkins University, Baltimore, MD) as p422-CAT Basic. The aP2 promoter, isolated as a HindIII-PstI (partial) fragment, was cloned into pBluescriptKS+ (Stratagene) and denoted Bluescript aP2. A 298-bp PCR fragment contained A-ZIP/F (Flag-10-GGG-3heptad-F; previously, 3heptad-F; MDYKDDDDK*ASMTGGQQMGRDP*GGGLARENEELEKEAEELEQENAE*LEQEVLELESRNDRLRKEVEQLERELDTLRGIFRQLPESSLVKAMGNCA; Krylov et al. 1995). The A-ZIP/F fragment, with PstI (partial)-SmaI ends, was cloned into Bluescript aP2 to produce Bluescript aP2 A-ZIP/F. A KpnI site was included 3′ of the PstI site to facilitate future cloning. Independently, a 1017-bp SmaI-BamHI fragment containing the SV40 splice site and poly(A) site (obtained from pRSVneo; Gorman et al. 1982) was cloned into pBluescriptKS+ to produce Bluescript SV40 poly(A). A 7919-bp HindIII-SmaI fragment from Bluescript aP2 A-ZIP/F was cloned into Bluescript SV40 poly(A) to produce the final construct Bluescript aP2 A-ZIP/F SV40 poly(A). For microinjection, a 9036-bp DNA fragment containing the aP2 promoter, A-ZIP/F, and the SV40 splice and poly(A) sites (Fig. 2A) was obtained free of vector sequences by HindIII-NotI digestion and gel purification. Transgenic FVB/N mice (Taketo et al. 1991) were produced by microinjection into male pronuclei (Hogan et al. 1994) and screened by PCR on tail DNA with transgene-specific primers (x360, 5′-CTGTGCTGCAGACCACCATGG-3′, and x361, 5′-CCGCGAGGTCGTCCAGCCTCA-3′, giving a 332-bp product) and endogenous βmaj-globin primers (Reitman et al. 1993). Litters were reared in heated cages until weaning. Mice were maintained on a 12-hr light/dark cycle and a standard pellet diet (NIH-07, 5% fat). They were bled via the tail vein in the nonfasting state, usually near the midpoint of the light period.
Assays
Glucose was measured with a Glucometer Elite (Bayer). Insulin was measured by radioimmune assay (RIA) (Linco, St. Charles, MO) with rat insulin as the standard. Triglycerides, free fatty acids, β-hydroxybutyrate, and glycosylated hemoglobin were assayed with commercially available kits (Sigma and Boehringer Mannheim). Leptin was measured by Western blotting after immunoprecipitation because RIA (Linco) was not reliable at the low levels of the A-ZIP/F-1 samples. Bound leptin was measured by gel filtration (Gavrilova et al. 1997). The A-ZIP/F protein concentration in A-ZIP/F-1 residual adipose tissue was determined by Western blotting with recombinant E. coli protein (Flag-10-4heptad-Fos; Olive et al. 1997) as a standard. The A-ZIP/F amino-terminal Flag epitope was detected by use of anti-Flag M5 mouse monoclonal antibody (Kodak). Tissue (50 mg) was homogenized with a polytron in 150 µl of ice-cold RIPA buffer (containing 100 mg/ml PMSF and 300 mg/ml aprotinin), incubated (on ice, 30 min), spun (15,000g, 20 min, 4°C), the disc of lipid floating at the top removed, and the supernatant collected and spun as before. These samples were snap frozen and stored at −80°C until use. A total of 60 µg of lysate protein was mixed with an equal volume of 2× SDS sample buffer, boiled for 5 min, and loaded onto a 15% SDS polyacrylamide gel. To determine the number of cells, DNA content was measured by DAPI fluorescence (Brunk et al. 1979), assuming 6 pg of DNA per diploid cell.
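For orientation, the copies-per-cell figure quoted in the Discussion follows from combining a Western-blot quantification of this kind with the DAPI-based cell count. The short Python sketch below shows only that arithmetic; the protein mass, molecular weight, and DNA amount are hypothetical placeholders, and only the assumption of 6 pg of DNA per diploid cell is taken from the text above.

```python
# Hypothetical worked example of the copies-per-cell arithmetic described above.
# All input numbers are placeholders; only the 6 pg of DNA per diploid cell
# assumption comes from the text.

AVOGADRO = 6.022e23          # molecules per mole
PG_DNA_PER_CELL = 6.0        # pg DNA assumed per diploid cell

def copies_per_cell(protein_ng, protein_mw_da, dna_ng):
    """Protein copies per cell for one quantified lysate aliquot.

    protein_ng    -- protein mass in the aliquot from the standard curve (ng)
    protein_mw_da -- molecular weight of the protein (Da); placeholder below
    dna_ng        -- total DNA in the same aliquot (ng)
    """
    molecules = protein_ng * 1e-9 / protein_mw_da * AVOGADRO
    cells = dna_ng * 1e3 / PG_DNA_PER_CELL    # ng -> pg, then pg per cell
    return molecules / cells

# Example with made-up inputs: 5 ng of a ~13 kDa protein and 200 ng DNA.
print(f"{copies_per_cell(5.0, 13e3, 200.0):.2e} copies per cell")
```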
The effect of leptin on the fecundity of male A-ZIP/F-1 was measured with five male A-ZIP/F-1 mice (9-28 weeks old), each housed with two female mice. After an 8-day base line, males were injected daily with leptin (1 µg/g body weight, intraperitoneally) for 14 days. The females were examined for vaginal plugs daily and plugged females were replaced daily.
Enhanced lithium storage performance of porous exfoliated carbon fibers via anchored nickel nanoparticles
Herein, flexible carbon fiber cloth (CFC) is modified by embedding Ni nanoparticles via a thermal reduction strategy, and it is used as a suitable anode material for lithium-ion batteries. Benefitting from the elemental interaction between Ni and carbon, the Ni-embedded CFC displayed higher lithium storage properties than pristine CFC and Ni-free porous CFC.
The rapid development of flexible energy storage devices, especially flexible lithium-ion batteries (LIBs), requires urgent development of flexible electrodes. [1][2][3][4][5][6][7] The most common flexible electrodes require the growth of active anode materials on flexible current collectors such as carbon cloth, nickel foam, titanium mesh, etc. [8][9][10][11][12] This strategy is very effective in achieving high-capacity flexible anodes but still yields relatively low volumetric capacity due to the thickness of the current collectors. 13 Another interesting strategy used to develop flexible anode materials is the utilization of different physical and chemical methods to synthesize flexible graphene and carbon nanotubes, which can be used directly as anode materials for LIBs. [14][15][16][17][18] Despite substantial improvements and their lightweight properties, such flexible electrodes suffer from poor mechanical strength, low flexibility, poor cyclic stability and low rate capability, which greatly limits their practical application. 13 Hence, it is imperative to develop flexible anode materials with good mechanical strength, high flexibility and enhanced lithium storage properties.
Recently, carbon fiber cloths (CFCs) have been employed not only as substrates or current collectors for active materials but also as electrode materials in LIBs, 13 supercapacitors, 19,20 and water splitting. 21,22 Our previous reports have shown that the surface area of CFC can be enhanced via thermal etching. 13,21 However, the capacity obtained is not satisfactory enough. Moreover, the interaction of transition metals with carbon is said to improve the kinetics, introduce more active surface areas on carbon and enhance the power density of energy storage devices. 11,22,23 Motivated by our previous study, herein, Ni NPs were incorporated in CFC (Ni-PCFC) via a thermal reduction strategy, and Ni-PCFC was used as an anode material for LIBs. After thermal reduction and embedment of Ni NPs into CFC, the surface of the sample became porous, which was beneficial for improving its storage performance. Hence, Ni-PCFC exhibited superior lithium storage properties when compared with pristine CFC and Ni-free porous CFC (PCFC). This study can create an opportunity for the use of carbon fiber not only as a flexible anode material for LIBs but also as a capacity contributor when it is used as a current collector for other active electrodes.
Ni-PCFC was synthesized by a simple hydrothermal reaction and annealing treatment. First, Ni(OH)₂·H₂O was grown on CFC (eqn (1)). CFC (ESI, Fig. S1† and 1-I) was covered with Ni(OH)₂·H₂O nanosheets (Fig. S2† and 1-II). The Ni(OH)₂·H₂O-CFC nanosheets were then annealed in N₂ gas. At an annealing temperature of 500 °C, NiO and Ni were formed (NiO@Ni-CFC), as shown in Fig. 1-III and eqn (2). The scanning electron microscopy (SEM) images of NiO@Ni-CFC are displayed in Fig. S3.† At this point, the nanosheets could still be maintained, but they became porous. By increasing the temperature, the morphology of the nanosheets was damaged through thermal reduction, which allowed the carbon of the carbon fiber to pyrolyze, and this also created nucleation and further anchoring of porous sites for the embedment of Ni NPs (Fig. 2a; the enlarged image can be found in Fig. S4†). Ni-PCFC was formed (Fig. 1-IV) according to eqn (3).
In addition, Ni-PCFC could be etched with concentrated HCl overnight to remove Ni NPs.
Hence, the Ni-free porous carbon fiber (PCFC) was formed (Fig. 1-V, 2b and S5†). Transmission electron microscopy (TEM) analysis confirmed that Ni NPs were embedded in CFC (Fig. 2b). The high-resolution TEM (HRTEM) image of Ni-PCFC showed that the CFC was porous and exfoliated (Fig. 2c). The energy spectrum (EDS) from Fig. 2c also revealed the presence of both Ni and C, confirming the successful formation of Ni-PCFC. The Ni and carbon contents of the Ni-PCFC composite were determined using an inductively coupled plasma mass spectrometer (ICP-MS) and an elemental analyzer, respectively. The results showed that the content of Ni was 6.0% in Ni-PCFC, whereas that of carbon was 93.86% in the 10 mg cm⁻² Ni-PCFC sample. Additionally, XPS analysis also showed that the atomic percentage of carbon was 89.78% and that of Ni was 10.22%.
It has been previously reported that the Brunauer-Emmett-Teller (BET) surface area of pristine CFC is within 5-10 m² g⁻¹. 19,20 Compared to the BET surface area of pristine CFC, the BET surface area of Ni-PCFC reaches 109 m² g⁻¹ (Fig. 3a) with a pore volume of 0.21 cm³ g⁻¹ (Fig. S6†), and the values are even higher than the results of our previously reported study (97 m² g⁻¹ and 0.13 cm³ g⁻¹) (Fig. 3a). 13 This result further suggests that Ni-PCFC can exhibit better electrochemical properties than CFC. According to Raman spectra analysis, the D:G peak intensity ratio of Ni-PCFC (2.34) is also higher than that of CFC (1.59). This indicates that the surface modification and deficiencies in Ni-PCFC are a result of the Ni, and this can be beneficial for improving storage performance. 13,24 Additionally, X-ray photoelectron spectroscopy (XPS) analysis confirms the presence of Ni in Ni-PCFC and its absence in CFC (Fig. S7† and 3c). After exfoliation of CFC, the intensity of the C-C peak at 284.8 eV in the C 1s XPS spectra of Ni-PCFC is higher than that of CFC. Moreover, Ni-PCFC is characterized by an additional HO-C=O peak at 291.0 eV, 19,20,25 further confirming that Ni-PCFC with suitable features can improve the electrochemical properties.
The lithium storage properties of Ni-PCFC were tested in a coin cell and compared with those of pristine CFC. The calculated charge/discharge capacities were based on the areas of the electrodes, i.e., they were calculated as current × hour per area (mA h cm⁻²). Ni-free porous CFC (denoted as PCFC) was also tested for comparison to show the contribution of Ni NPs. Detailed results can be found in Fig. 4. The 1st cyclic voltammetry (CV) curve of the electrodes showed that the Ni-PCFC curve area below 1.25 V was much larger than those of PCFC and CFC (Fig. S8a†), indicating the higher electrochemical surface area of Ni-PCFC compared with the other electrodes. Additionally, Ni-PCFC was characterized by a cathodic peak at 1.06 V. Such a peak is assigned to the lithiation of Ni with porous carbon 23 and can lead to additional capacity. During the 2nd cycle (Fig. 4a) and 3rd cycle (Fig. S8b†), larger curve areas and an additional cathodic peak (shifted to 1.31 V) could be observed for the Ni-PCFC electrode.
These results indicated the superior storage properties of Ni-PCFC when compared with those of the other electrodes. Such superior capacities could be observed in the charge/discharge profiles of the electrodes. The Ni-PCFC electrode achieved a 2nd cycle discharge capacity of 1.98 mA h cm⁻², which was higher than those of the PCFC (1.48 mA h cm⁻²) and CFC (0.74 mA h cm⁻²) electrodes at a current rate of 0.5C (Fig. 4b).
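For clarity, the areal capacities quoted here follow the normalization described above (current multiplied by time, divided by electrode area). The short Python sketch below illustrates that bookkeeping; the current, duration and electrode area at the start are invented example inputs, while the two capacities compared at the end are simply the 2nd-cycle values quoted in the text.

```python
# Minimal sketch of the areal capacity normalization (mA h cm^-2) used in the text.
# The galvanostatic inputs below are illustrative placeholders.

def areal_capacity_mAh_cm2(current_mA, time_h, area_cm2):
    """Charge passed per unit electrode area: I * t / A."""
    return current_mA * time_h / area_cm2

def relative_gain(capacity_a, capacity_b):
    """Fractional improvement of capacity_a over capacity_b."""
    return (capacity_a - capacity_b) / capacity_b

# Example: a 1 cm^2 electrode discharged at 2 mA for 1 h gives 2 mA h cm^-2.
print(areal_capacity_mAh_cm2(2.0, 1.0, 1.0))

# Comparing the 2nd-cycle values quoted above (Ni-PCFC vs. PCFC):
print(f"{relative_gain(1.98, 1.48):.0%} higher")
```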
The discharge capacity of Ni-PCFC remained at 1.93 mA h cm⁻² (Fig. 4c) with 99% coulombic efficiency after 100 electrochemical cycles (Fig. S9†); this value was also higher than those of PCFC (1.34 mA h cm⁻²) and CFC (0.36 mA h cm⁻²). Moreover, the rate capability performance of Ni-PCFC up to 10C was also better than those of PCFC, CFC (Fig. 4d) and previously reported porous CC. 13 The outstanding performance can be attributed to the electronic interactions between Ni NPs and the exfoliated CF, leading to rapid diffusion of Li ions and electrons, which enhanced the electrode kinetics. This can be confirmed by the smaller charge transfer resistance (R_ct) in the Nyquist plot of Ni-PCFC when compared with those of CFC and PCFC (Fig. 4e). Due to the excellent kinetics of the Ni-PCFC electrode, a higher cyclic performance up to 150 cycles at a current of 5C was obtained, and the Ni-PCFC electrode could deliver a discharge capacity of 1.1 mA h cm⁻², which was 55% higher than that of PCFC (0.71 mA h cm⁻²) and 99% higher than that of CFC (0.01 mA h cm⁻²). Notably, the XRD spectra of Ni-PCFC before and after electrochemical reactions remained the same, showing excellent phase stability (Fig. S10a†). Moreover, Ni NPs were well anchored on the porous exfoliated surface of CF, further indicating excellent morphological stability of Ni-PCFC (Fig. S10b-d†). Ni and PCFC both contributed towards lithium capacity. To confirm this phenomenon, Ni was dissolved to obtain bare PCFC. The lithium storage performance of PCFC was compared with those of Ni-PCFC and CFC. In Fig. 4a, Ni-PCFC is characterized with a cathodic peak as a result of the anchored Ni NPs, which is totally absent in pristine CFC as well as PCFC. Additionally, according to Fig. 4b-d, it could be observed that Ni-PCFC displayed superior storage performance owing to the anchored nickel. The anchoring of Ni on PCFC created strong electronic interactions between Ni NPs and PCFC, leading to rapid diffusion of Li ions and electrons, which enhanced the electrode kinetics. This indicated that both Ni and PCFC exhibited a synergistic effect for enhancing the lithium storage capacity of the Ni-PCFC electrode. Ni-PCFC can be directly used as a flexible anode material for the fabrication of flexible LIBs.
In conclusion, the lithium storage properties of carbon fiber cloth (CFC) can be improved by embedding Ni NPs in porous exfoliated CFC (Ni-PCFC) via high temperature thermal reduction. Ni-PCFC displays better electrochemical properties than pristine CFC and Ni-free porous CFC (PCFC), due to electronic interactions between Ni NPs and PCFC, additional capacities and higher surface area. This study can create an opportunity for the development of high-capacity carbon-based electrode materials for flexible LIBs.
Conflicts of interest
There are no conflicts to declare.
The Evolution of Assembly Bias
We examine the evolution of assembly bias using a semi-analytical model of galaxy formation implemented in the Millennium-WMAP7 N-body simulation. We consider fixed number density galaxy samples ranked by stellar mass or star formation rate. We investigate how the clustering of haloes and their galaxy content depend on halo formation time and concentration, and how these relationships evolve with redshift. At $z=0$ the dependences of halo clustering on halo concentration and formation time are similar. However, at higher redshift, halo assembly bias weakens for haloes selected by age, and reverses and increases for haloes selected by concentration. The variation of the halo occupation with concentration and formation time is also similar at $z=0$ and changes at higher redshifts. In this case, the occupancy variation with halo age stays mostly constant with redshift but decreases for concentration. Finally, we look at the evolution of assembly bias reflected in the galaxy distribution by examining the galaxy correlation functions relative to those of shuffled galaxy samples which remove the occupancy variation. This correlation functions ratio monotonically decreases with larger redshift and for lower number density samples, going below unity in some cases, leading to reduced galaxy clustering. While the halo occupation functions themselves vary, the assembly bias trends are similar whether selecting galaxies by stellar mass or star formation rate. Our results provide further insight into the origin and evolution of assembly bias. Our extensive occupation function measurements and fits are publicly available and can be used to create realistic mock catalogues.
INTRODUCTION
Cosmic structure evolves hierarchically in the cold dark matter model. Density fluctuations grow by gravitational instability and form dark matter haloes, which evolve via accretion and mergers with other haloes (Press & Schechter 1974). White & Rees (1978) formulated the basis of modern galaxy formation theory starting from this concept, postulating that galaxies form inside dark matter haloes via the cooling of gas, star formation and mergers of galaxies. This framework is the basis of semi-analytic models of galaxy formation (SAMs; see, e.g., Baugh 2006; Benson 2010 for reviews). These models use the merger histories of dark matter haloes as the starting point to model galaxy formation. The first SAMs used merger trees constructed using Monte-Carlo approaches based on the extended Press-Schechter theory (e.g., Lacey & Cole 1993;Cole et al. 1994;Kauffmann & White 1993), while modern SAMs use merger trees extracted from high-resolution dark matter simulations (e.g., Kauffmann et al. 1999;Bower et al. 2006;De Lucia & Blaizot 2007;Lagos, Cora & Padilla 2008;Benson 2012;Jiang et al. 2014;Croton et al. 2016). This opens up the prospect of studying environmental influences on the formation histories and properties of dark matter haloes and the impact on the galaxies they host.
The framework that led to SAMs also underpins the development of the halo occupation distribution (HOD) approach as an empirical description of galaxy clustering (e.g., Peacock & Smith 2000; Berlind & Weinberg 2002; Cooray & Sheth 2002; Zheng et al. 2005). The HOD formalism characterizes the relationship between galaxies and dark matter haloes in terms of the probability distribution that a halo of virial mass M h contains N galaxies of a given type, together with the spatial and velocity distribution of galaxies inside haloes. An assumed cosmology and a specified shape of the HOD then allow us to predict any galaxy clustering statistic. The HOD approach is a powerful way to interpret observed galaxy clustering measurements, essentially transforming correlation function measurements into the relationship connecting galaxies with haloes (e.g., Zehavi et al. 2011; Coupon et al. 2012 and references therein). It is also a useful method to characterize the predictions of galaxy formation models in a concise form that allows us to quantify the galaxy-halo relation (e.g., Zheng et al. 2005; Contreras et al. 2013, 2017). Another important application of the HOD approach is to facilitate the generation of realistic galaxy mock catalogues by populating dark matter haloes from an N-body simulation with galaxies that reproduce a particular target clustering measurement. This method has become increasingly popular due to the growing demand for such catalogues for planning for and interpreting the results from large galaxy surveys and due to its good performance and low computational cost (e.g., Manera et al. 2015; Zheng & Guo 2016). In the standard HOD framework mass is the only halo property that plays a role. This foundation of the HOD method has its origins in the Press-Schechter formalism and the uncorrelated nature of the random walks used to describe halo assembly in excursion set theory. This leads to the prediction that the halo environment is correlated with halo mass but not with how the halo is assembled (Bond et al. 1991; Lemson & Kauffmann 1999; White 1999). This is, however, not the case for haloes in N-body simulations, where halo populations of the same mass but with a different 'secondary property' display different clustering, an effect that is now generally termed (halo) assembly bias. This was convincingly demonstrated in the Millennium N-body simulation by Gao, Springel & White (2005), who showed the age-dependence of the clustering of haloes of the same mass (see also Sheth & Tormen 2004); this dependence of halo clustering on secondary properties besides mass was later extended to, e.g., concentration, spin, and substructure (e.g., Wechsler et al. 2006; Jing, Suto & Mo 2007; Lacerna & Padilla 2012; Xu & Zheng 2017; Mao, Zentner & Wechsler 2018). Croton, Gao & White (2007) used a SAM applied to the Millennium Simulation to show that halo assembly bias also impacts the clustering of galaxies, an effect that is now commonly referred to as galaxy assembly bias, namely halo assembly bias as reflected in the galaxy distribution (see also Zhu et al. 2006; Zu et al. 2008; Lacerna & Padilla 2011; Chaves-Montero et al. 2016). This can potentially have important implications for interpreting galaxy clustering using the HOD framework (e.g., Zentner, Hearin & van den Bosch 2014). Detecting galaxy assembly bias has proven challenging and controversial. Despite some studies which claim to have uncovered the existence of assembly bias in the observable Universe (e.g., Berlind et al.
2006;Yang et al. 2005;Cooper et al. 2010;Wang et al. 2013;Lacerna, Padilla & Stasyszyn 2014;Hearin, Watson & van den Bosch 2015;Miyatake et al. 2016;Saito et al. 2016) others argue that the impact of assembly is small (Abbas & Sheth 2006;Blanton & Berlind 2007;Tinker et al. 2008;Tinker, Wetzel & Conroy 2011;Lin et al. 2016;Dvornik et al. 2017) or that the assembly bias signal could be a result of different systematics (e.g. Campbell et al. 2015;Zu & Mandelbaum 2017;Busch & White 2017;Sin, Lilly & Henriques 2017;Tinker et al. 2017;Lacerna et al. 2018). This is the latest in a series of papers examining the spatial distribution of galaxies predicted by SAMs. Contreras et al. (2013) examined the clustering and HOD predicted by SAMS from different groups and found that the models give robust clustering predictions when the galaxies are selected by properties that scale with the halo mass (such as stellar mass). Contreras et al. (2015) studied how predicted galaxy properties (such as stellar mass, cold gas mass, star formation rate, and black hole mass) correlate with their host halo mass in different SAMs. Contreras et al. (2017) examined how the predicted HOD form evolves with redshift in SAMs. We proposed a parametric form for the evolution of the HOD fitting parameters that can be used when constructing mock galaxy catalogs or for consistently fitting clustering measurements at different epochs.
Finally, in Zehavi et al. (2018) (hereafter Z18) we use SAMs to investigate how the galaxy content of dark matter haloes is influenced by the large-scale environment and halo age at z = 0, for galaxy samples selected by their stellar mass, finding distinct variations of the halo occupation functions. We show that haloes which form early have more massive central galaxies, and thus start hosting them at lower halo mass, and fewer satellite galaxies, compared to late-forming haloes. We also find similar results in hydrodynamical simulations (Artale et al. 2018). These occupancy variations, namely the dependence of the HOD on halo properties other than mass, are intimately related to assembly bias, as it is their effect combined with halo assembly bias that gives rise to galaxy assembly bias.
Here, we build on our previous studies and investigate the evolution of assembly bias and specifically the occupancy variations in SAMs. We extend the analysis of Z18 in a number of ways: 1) we study a wide range of redshifts between z = 0 and z = 3; 2) we explicitly examine separately the different manifestations of assembly bias, namely halo assembly bias, occupancy variation, and galaxy assembly bias; 3) we consider galaxy samples constructed using two properties, stellar mass and star formation rate (SFR); and 4) we select haloes using two secondary parameters, halo formation time and concentration. We use the Guo et al. (2013) SAM, which is a recent galaxy formation model from the Munich group implemented in a Millennium class N-body simulation with a WMAP-7 cosmology. Wechsler et al. (2006) and other works have studied the evolution of halo assembly bias in large N-body simulations, using, for example, a mark-correlation statistic and the large-scale bias of the mass-halo cross-correlation. Hearin et al. (2016) examine the redshift dependence of assembly bias in the context of an extension of the HOD framework that incorporates assembly bias (the so-called decorated HOD), finding that the impact of assembly bias on galaxy clustering weakens at higher redshift for samples with fixed stellar mass. We aim to comprehensively investigate the evolution of galaxy assembly bias using a physical galaxy formation model. We focus here on galaxy assembly bias as reflected in the halo occupation and galaxy clustering. To our knowledge this is the first work that explicitly examines the evolution of the occupancy variation, and as a consequence, of galaxy assembly bias. Our aim is to investigate the origin and evolution of assembly bias. This will enable the development of more sophisticated tests to search for assembly bias in the observable Universe. Our results will also help shape the design of new mock galaxy catalogues, which are necessary for the next generation of galaxy surveys.
The outline of this paper is as follows: in Section 2 we introduce the SAM used and describe the different galaxy and halo samples employed in this work. Section 3 shows our results regarding the evolution of halo assembly bias, while Section 4 presents our main results regarding the evolution of the occupancy variation. In Section 5 we study the impact of assembly bias on galaxy clustering and the evolution of galaxy assembly bias. Finally, in Section 6 we summarise our results and present our conclusions. We describe our publicly available occupancy variation measurements and parametric fits in the appendix. Throughout the paper masses are measured in h⁻¹ M⊙, the SFR is measured in M⊙/yr and distances are measured in h⁻¹ Mpc and are in comoving units.
THEORETICAL BACKGROUND AND SAMPLE DEFINITION
In this section we describe the dark matter simulation and the semianalytic model used in this paper. We also present the different galaxy and halo samples we utilize. Finally, we describe the techniques used to characterise the galaxy and halo samples.
The semi-analytic model
Semi-analytical models (SAMs) are one of the principal tools available to model galaxy formation (see Baugh 2006; Lacey et al. 2016 for reviews). These models aim to follow the main physical processes involved in the formation and evolution of galaxies. These processes include: the collapse and merging of dark matter haloes; shock heating and radiative cooling of gas; star formation; supernova, AGN, and photoionization feedback; chemical enrichment of gas and stars; disc instabilities; and galaxy mergers. The SAM used in this work is that of Guo et al. (2013) (hereafter G13). This model is a version of L-GALAXIES, the SAM code of the Munich group (De Lucia, Kauffmann & White 2004; Croton et al. 2006; De Lucia & Blaizot 2007; Guo et al. 2011; Henriques et al. 2013, 2015). For an extended description of this model and its performance we refer the reader to Guo et al. (2013; see also Guo et al. 2016 and Contreras et al. 2017). The outputs are publicly available from the Millennium Archive 1. G13 is the latest publicly available SAM of the Munich group that makes use of the Millennium-WMAP7 dark matter simulation. We will explore other SAMs in future work, but do not expect our conclusions to change.
N-body Simulation
The G13 model is implemented in the Millennium-WMAP7 N-body simulation. This simulation has similar specifications to the original Millennium simulation but uses a WMAP7 cosmology 2. The simulation uses 2160³ particles in a periodic box of comoving volume (500 h⁻¹ Mpc)³, corresponding to a particle mass of 9.31 × 10⁸ h⁻¹ M⊙. There are 61 simulation snapshots output between z = 50 and z = 0.
Halo merger trees are constructed from the simulation outputs. These trees are the starting point of the SAM. A friends-of-friends (FoF) group finding algorithm (Davis et al. 1985) is used to identify haloes in each snapshot of the simulation, retaining those with at least 20 particles. SUBFIND is then run on these groups to identify subhaloes (Springel et al. 2001). Merger trees are constructed by linking a subhalo in one snapshot to a single descendant subhalo in the subsequent output, i.e., a subhalo merger tree. The G13 model uses these merger trees as the starting point to build its galaxy catalogue. Here, the mass of a dark matter halo, M h, is defined as the mass within the radius where the halo overdensity is 200 times the critical density of the simulation (referred to as "m crit200" in the public database).

1 http://gavo.mpa-garching.mpg.de/Millennium/
2 The values of the cosmological parameters used in the Millennium-WMAP7 simulation are: Ωm0 = Ωdm0 + Ωb0 = 0.272, ΩΛ0 = 0.728, Ωb0 = 0.0455, σ8 = 0.81, ns = 0.967, h = 0.704.
Classifying samples by galaxy properties
For the main part of our analysis we use samples defined by galaxy number density. To do this we rank the model galaxies either by stellar mass or SFR and include all galaxies above the value of the stellar mass or SFR threshold that provides the desired number density. We construct galaxy samples for three different number densities, n = 0.01, 0.00316 and 0.001 h³ Mpc⁻³, and for a wide range of redshifts: z = 0, 0.5, 1, 1.5, 2, 2.5 and 3. The samples are chosen to be evenly spaced in logarithmic number density with differences of half a decade in log abundance.
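As an illustration, this fixed-number-density selection amounts to a simple ranking and thresholding step. The Python sketch below shows one way to implement it; the array names and the toy property values are illustrative placeholders, not fields of the actual G13 database.

```python
import numpy as np

def select_by_number_density(prop, target_n, box_size):
    """Boolean mask for the galaxies with the largest `prop` (stellar mass or
    SFR) such that the sample has the requested comoving number density.

    prop      -- per-galaxy ranking property (e.g. stellar mass)
    target_n  -- desired number density in h^3 Mpc^-3
    box_size  -- simulation box side in h^-1 Mpc
    """
    n_keep = int(round(target_n * box_size**3))
    order = np.argsort(prop)[::-1]            # rank galaxies in descending order
    mask = np.zeros(prop.size, dtype=bool)
    mask[order[:n_keep]] = True
    return mask

# Toy usage: placeholder stellar masses in a 500 h^-1 Mpc box at n = 0.001 h^3 Mpc^-3
rng = np.random.default_rng(0)
mstar = 10.0 ** rng.normal(9.5, 0.8, size=2_000_000)
sample = select_by_number_density(mstar, 0.001, 500.0)
print(sample.sum(), "galaxies; implied threshold:", mstar[sample].min())
```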
The cumulative comoving number density of galaxies ranked by stellar mass is often used to link galaxy populations across cosmic time (e.g., Padilla et al. 2010;Leja et al. 2013;Mundy, Conselice & Ownsworth 2015;Torrey et al. 2015;Contreras et al. 2017). This type of selection is preferred over using a constant stellar mass cut to select galaxies at different epochs since it mitigates the need to assume a specific evolution model for the stellar mass, is insensitive to systematic shifts in the calculation of stellar masses and can be readily applied to observations. It also facilitates the comparison with galaxy samples selected using different properties (here, e.g., with galaxies selected by their SFR). Contreras et al. (2013) also showed that the HOD predictions for samples defined in this way are robust among different SAMs at a fixed redshift. Fig. 1 shows the cumulative stellar mass function (top panel) and SFR function (bottom panel) for all redshifts studied here. The horizontal dashed lines show the different number density cuts we consider. The galaxies selected in each case are those to the right of the intersection with their associated dashed line. The top panel exhibits the expected growth of the galaxy stellar mass with time, while the bottom panel shows that there are fewer star forming galaxies at low redshifts than at high redshift.
Classification by halo properties
To investigate assembly bias we define subsets of the fixed number density galaxy samples by selecting haloes using two different intrinsic or secondary properties: formation time (age) and concentration.
We define the formation time of a halo as the redshift when its main progenitor reaches half of the halo's present-day mass for the first time. This definition is commonly used in the study of assembly bias (e.g., Gao et al. 2004;Gao, Springel & White 2005;Croton, Gao & White 2007, Z18). We note that the formation time of a halo is calculated at each redshift independently. We calculate the formation time using the merger trees available in the database and linearly interpolate the mass of the haloes between snapshots.
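A hedged sketch of this formation-time estimate is given below, assuming the main-progenitor mass history has already been extracted from the merger trees at the snapshot redshifts; the toy history at the end is purely illustrative.

```python
import numpy as np

def formation_redshift(z_snap, m_main, m_now):
    """Redshift at which the main progenitor first reaches half of the halo
    mass at the epoch of interest, interpolating linearly in mass between
    adjacent snapshots.

    z_snap -- snapshot redshifts, ordered from early to late (decreasing z)
    m_main -- main-progenitor mass at each snapshot, same ordering
    m_now  -- halo mass at the epoch for which the formation time is defined
    """
    half = 0.5 * m_now
    for i in range(1, len(z_snap)):
        if m_main[i - 1] < half <= m_main[i]:
            frac = (half - m_main[i - 1]) / (m_main[i] - m_main[i - 1])
            return z_snap[i - 1] + frac * (z_snap[i] - z_snap[i - 1])
    # main progenitor already above half mass at the earliest tabulated snapshot
    return z_snap[0]

# Toy mass history (illustrative numbers only)
z = np.array([3.0, 2.0, 1.0, 0.5, 0.0])
m = np.array([1e11, 3e11, 6e11, 8e11, 1e12])
print(formation_redshift(z, m, m[-1]))   # ~1.33 for this toy history
```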
The other halo property we consider is the concentration. The halo concentration characterizes the density profile. It is canonically defined as c_vir = r_vir/r_s, where r_vir is the virial radius of the halo and r_s is the inner "transitional" radius appearing in the Navarro, Frenk & White (1996) profile, at which the density profile changes slope. It is often alternatively defined via the rotation curve of the halo, as the ratio between V_max and V_vir, where V_max is the peak value of the rotation curve, V_c²(r) = GM(r)/r, and V_vir the virial velocity of the halo, V_vir ≡ V_c(r_vir) (Bullock et al. 2001). We utilize the latter definition here, which is directly calculable from simulation data and does not require any model fitting. In order to explore the variation in clustering and halo occupation with halo age and concentration, following Z18, we rank the haloes by these properties and identify the 20 per cent oldest and youngest haloes (based on their formation time) and (separately) the 20 per cent of haloes that are most or least concentrated. These divisions are made in 0.1 dex bins of halo mass, so as to factor out the influence of the changing halo mass function on these properties; the 20 per cent extremes of the distribution set up in this way effectively have the same mass function as the overall sample. We also tested using binnings of 0.05 and 0.2 dex in halo mass, finding no difference in our main results.
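The percentile selection within narrow mass bins can be sketched as follows; the 20 per cent fraction and the 0.1 dex bin width follow the description above, while the halo arrays used in the example are illustrative placeholders.

```python
import numpy as np

def extreme_fraction_mask(log_mhalo, secondary, frac=0.20, dex=0.1, top=True):
    """Flag the `frac` of haloes with the highest (top=True) or lowest
    (top=False) value of a secondary property (formation redshift or
    Vmax/Vvir concentration), within bins of `dex` in log halo mass, so
    that the selected subsets share the mass function of the full sample."""
    mask = np.zeros(log_mhalo.size, dtype=bool)
    edges = np.arange(log_mhalo.min(), log_mhalo.max() + dex, dex)
    bin_idx = np.digitize(log_mhalo, edges)
    for b in np.unique(bin_idx):
        in_bin = np.where(bin_idx == b)[0]
        n_sel = max(1, int(round(frac * in_bin.size)))
        order = np.argsort(secondary[in_bin])
        chosen = order[-n_sel:] if top else order[:n_sel]
        mask[in_bin[chosen]] = True
    return mask

# Toy usage: a concentration-like proxy for 100,000 placeholder haloes
rng = np.random.default_rng(1)
logm = rng.uniform(11.0, 14.0, size=100_000)
conc = rng.lognormal(mean=0.2, sigma=0.3, size=logm.size)
high_c = extreme_fraction_mask(logm, conc, top=True)
low_c = extreme_fraction_mask(logm, conc, top=False)
print(high_c.sum(), low_c.sum())
```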
The HOD and the correlation function
To study the impact of assembly bias on galaxies we measure the halo occupation functions and the correlation functions for the various halo and galaxy samples.
The HOD formalism describes the "bias" relation between galaxies and mass at the level of individual haloes, allowing us to characterize the galaxy-halo connection. The key ingredient is the halo occupation function, ⟨N(M_h)⟩, which represents, for a given galaxy sample, the average number of galaxies per halo as a function of halo mass (loosely referred to here also as the HOD). The commonly assumed shape for the halo occupation function is motivated by predictions of physical models such as SAMs and hydrodynamic simulations (Berlind et al. 2003; Zheng et al. 2005). When inferring the HOD it is often useful to consider separately the contribution from central galaxies and that of the additional satellite galaxies populating the halo (Kravtsov et al. 2004; Zheng et al. 2005). For stellar mass (or luminosity) threshold galaxy samples, the expected form of the central galaxies occupation function is a smoothed step function and roughly a power-law for the satellites. For samples defined by SFR or color, the shape of the halo occupation function is more complex to account for the paucity of blue/star forming centrals in massive haloes (e.g., Zehavi et al. 2005; Geach et al. 2012; Contreras et al. 2013; Gonzalez-Perez et al. 2018). We emphasize that the HODs presented in this work are all calculated directly from the SAMs, rather than inferred from the clustering, as is commonly done with observational data.
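For concreteness, measuring the occupation function directly from a model catalogue reduces to counting galaxies per halo in bins of halo mass, split into centrals and satellites. The sketch below illustrates this; the array names are placeholders and the code is not the pipeline used to produce the figures in this paper.

```python
import numpy as np

def halo_occupation(log_mh_of_gal, is_central, log_mh_all_haloes, dex=0.1):
    """Mean central and satellite occupation, <N(M_h)>, in bins of log halo mass.

    log_mh_of_gal     -- host-halo log10 mass for every galaxy in the sample
    is_central        -- boolean flag per galaxy (True for centrals)
    log_mh_all_haloes -- log10 mass of every halo in the volume (or subsample)
    """
    edges = np.arange(10.0, 15.0 + dex, dex)           # mass range of interest
    centres = 0.5 * (edges[1:] + edges[:-1])
    n_halo, _ = np.histogram(log_mh_all_haloes, bins=edges)
    n_cen, _ = np.histogram(log_mh_of_gal[is_central], bins=edges)
    n_sat, _ = np.histogram(log_mh_of_gal[~is_central], bins=edges)
    mean_cen = np.where(n_halo > 0, n_cen / np.maximum(n_halo, 1), np.nan)
    mean_sat = np.where(n_halo > 0, n_sat / np.maximum(n_halo, 1), np.nan)
    return centres, mean_cen, mean_sat
```

Passing only the haloes in a given subset (e.g. the 20 per cent earliest-formed haloes, with the galaxies they host) as inputs would yield the per-subset occupation curves whose variations are discussed below.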
The correlation function (CF) is the most fundamental measure of the spatial distribution of haloes and galaxies. It is defined as the excess probability of finding a pair of objects at a given separation compared to a random distribution. Following Z18, whenever we measure the CF for the full galaxy sample we calculate the auto CF (the correlation of a given sample of objects with respect to the same sample). In contrast, when we measure the CF of a subsample of galaxies (e.g., the ones associated with the 20 per cent earliest-formed haloes) we measure the cross CF between this sample and the full galaxy sample. As explained in Z18 (see specifically their Appendix B), using the cross CF increases the signal-to-noise of the measurements and facilitates the interpretation of the results compared with the use of the auto CF of the subsamples.
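For reference, a minimal cross-correlation estimator for a periodic simulation box can be written using the natural estimator ξ(r) = DD/RR − 1, where the expected random pair count RR is known analytically. The sketch below is a simple illustration under those assumptions (brute-force tree pair counts, toy random inputs), not the estimator pipeline actually used in this work.

```python
import numpy as np
from scipy.spatial import cKDTree

def cross_xi(pos_sub, pos_full, box, r_edges):
    """Cross-correlation of a subsample with the full sample in a periodic box.

    pos_sub, pos_full -- (N, 3) comoving positions in h^-1 Mpc, within [0, box)
    box               -- box side length in h^-1 Mpc
    r_edges           -- radial bin edges in h^-1 Mpc
    """
    tree_sub = cKDTree(pos_sub, boxsize=box)
    tree_full = cKDTree(pos_full, boxsize=box)
    # cumulative cross pair counts within each radius, then differenced per shell
    cum = tree_sub.count_neighbors(tree_full, r_edges)
    dd = np.diff(cum)
    # expected pair counts for unclustered points in a periodic box
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)
    rr = len(pos_sub) * len(pos_full) * shell_vol / box**3
    return dd / rr - 1.0

# Toy usage with random (unclustered) points, where xi should scatter around zero:
rng = np.random.default_rng(2)
full = rng.uniform(0.0, 500.0, size=(20_000, 3))
sub = full[:4_000]
print(cross_xi(sub, full, 500.0, np.logspace(0, 1.5, 10)))
```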
THE EVOLUTION OF HALO ASSEMBLY BIAS
There are two basic ingredients necessary for galaxy assembly bias: (i) halo assembly bias, namely the dependence of halo clustering on halo properties other than mass, and (ii) the variation in the galaxy content of haloes with these properties, which we refer to as the occupancy variation (see Z18). Galaxy assembly bias requires both effects to be present. In this paper, we study how halo assembly bias, the occupancy variation and the resulting galaxy assembly bias evolve with time. This will provide further insight into the nature and origin of assembly bias and may guide attempts to detect it in observational galaxy samples. We show the evolution of halo assembly bias in this Section. The evolution of the occupancy variation is discussed in Section 4, and the evolution of galaxy assembly bias is presented in Section 5.
First we look at the evolution of halo assembly bias in the dark matter-only N-body simulation without reference to the SAM galaxies. We begin with a visual inspection of the distribution of haloes in the simulation. Fig. 2 shows haloes in a slice of the Millennium-WMAP7 simulation at z = 0 and z = 3, distinguishing between those with early and late formation times and also those with high and low concentrations. Starting with the halo age dependence at z = 0 (Fig. 2, top-left double panels), we see that while both early-formed and late-formed haloes trace the same cosmic web, the early-formed haloes present a sharper view of the web and appear somewhat more clustered. Similar results were shown by Gao, Springel & White (2005) and Z18. The view of the cosmic web when highlighting the extremes of halo concentration at z = 0 ( Fig. 2, top-right panels) is reminiscent of that using halo formation time, though the clustering differences are slightly less apparent in this case. The bottom half of Fig. 2 shows the distribution of haloes chosen similarly, but now at z = 3. As expected, the haloes overall appear less clustered than at z = 0. The differences between the early-formed and late-formed haloes (bottom-left panels) are much smaller in this case, and interestingly for the concentration (bottomright panels), it appears that the haloes with low concentration are in fact now more clustered than the ones with high concentration. We quantify all of the trends discussed above shortly below using the CF.
To better understand how the age and concentration of haloes correlate with one another at different redshifts, Fig. 3 shows the joint distribution of halo concentration and formation time at z = 0 (left) and z = 3 (right). We show both the distribution of the full set of haloes (contours) and the median concentration as a function of halo age for three narrow bins of halo mass (lines and errorbars). The jags in the contours in the z = 3 panel are artificial being caused by the limited time resolution of the Millennium-WMAP7 simulation outputs at high redshift. At z = 0 there is a clear trend of concentration increasing with formation redshift (as shown by the solid lines). On the other hand, at z = 3 there is little variation of halo concentration with formation redshift which suggests that the assembly bias effect with concentration and halo age might be different. For a more extensive comparison between different secondary properties of haloes at z = 0 see Mao, Zentner & Wechsler (2018).
We now look into the CF of halo samples with different concentrations and formation times for a fixed number density after rank ordering the haloes in decreasing mass at z = 0 and z = 3 (Fig. 4). We use a halo number density of n = 0.00618 h 3 Mpc −3 , which is comparable to the number density of central galaxies in the n = 0.01 h 3 Mpc −3 galaxy sample (when selecting galaxies by their stellar mass). In each subplot of Fig. 4, the black line in the top panel denotes the auto CF of the full halo sample, while the red (blue) lines correspond to the cross CF of the full sample with the 20 per cent oldest (youngest) haloes in the top row of the figure, or the 20 per cent highest (lowest) concentration haloes in the bottom row (see § 2.4).
We find that at z = 0, haloes with early formation times and high concentrations are more clustered than ones with late formation times and low concentrations. This is the well-studied behavior of halo assembly bias (e.g., Gao, Springel & White 2005; Wechsler et al. 2006). The halo assembly bias effect, as reflected by the clustering differences, is slightly stronger for the case of halo formation time than for halo concentration and extends to smaller separations. We note that as we are measuring here halo (instead of galaxy) clustering, the scales involved are all in the so-called 2-halo regime. At higher redshift (e.g., z = 3) there is no difference in the clustering measured for haloes at the extremes of the formation time distribution, and low-concentration haloes are more clustered than high-concentration haloes, reversing the trend seen at the present day. We reach the same conclusions as already inferred from Fig. 2, namely that the halo assembly bias signal (i.e. the difference between the red and blue lines) decreases with increasing redshift for halo samples selected by age. For concentration, the evolution of halo assembly bias is stronger in the sense that the clustering differences reverse at high redshift. These trends are in agreement with the evolution of the halo assembly bias signal found previously in the original Millennium simulation and by Wechsler et al. (2006).
To understand the origin of this difference between using age and concentration as the secondary parameter we show in the right panel of Fig. 4 the CF of the main progenitors of the z = 0 haloes selected at z = 1, 2 and 3. We call this sample the 'tracking sample'. For this sample only, the secondary property halo labels (i.e. in terms of the extremes of concentration or formation time) refer to the z = 0 descendants. We find different trends for different scales. At large scales (> 10 h −1 Mpc), the tracking sample shows the same clustering trend as their descendants at z = 0 but with a higher amplitude. This means that the evolution of halo assembly bias is not caused by a change in the clustering of haloes with extreme values of formation time or concentration. Instead we attribute the evolution of the assembly bias signal at a fixed halo number density to a shift in the ranking of haloes according to their mass and secondary property. This means that, for example, haloes with the highest concentrations at z = 0 are not necessarily the ones with the highest concentrations at z = 3.
We demonstrate this shift in the ranking of the haloes in Table 1. Here we show that fewer than 40 per cent of the progenitors of z = 0 haloes were part of the original sample at z = 1. At z = 3 this number decreases to ∼ 20 per cent. This shift also explains the different evolution of halo samples selected by age and concentration. Table 2 shows that while at z = 0 there is a ∼ 40 per cent overlap between members of the early (late) formation time halo sample and the high (low) concentration halo sample this number decreases to ∼ 10 per cent at z = 3. Different trends are seen at intermediate and small separations in the right panels of Fig. 4. The progenitors of early formation time and high concentration haloes are more correlated on small scales and less correlated on intermediate scales (compared to haloes with late formation times and low concentrations). These are not the focus of our work presented here, and we provide just some heuristic considerations. As early-formed haloes grow faster at higher redshifts it may be expected that they exhibit stronger clustering on small scales at z = 3 (since they accrete mass from nearby structures). The stronger clustering on intermediate scales for haloes with late formation times may be explained in terms of these haloes accreting more mass at lower redshifts and the structures that will merge with these haloes being in their vicinity but not immediate proximity.
Since at z = 0 there is a ∼ 40 per cent overlap between halo samples selected by age and concentration (Table 2) we can assume this explanation is also valid for the main progenitors of the haloes selected by concentration.
One might be concerned that the agreement between the correlation functions of early and late formation time haloes in the top-middle panel of Fig. 4 could be an artifact of the limited time resolution of the Millennium-WMAP7 simulation at high redshifts. To check this we also calculated the correlation functions using the P-Millennium simulation (Baugh et al. 2018), a dark matter only simulation with over four times as many snapshots as the Millennium-WMAP7 run and with a better mass resolution. We find the same trends as those presented in this work, confirming that our results are not a product of the finite time resolution of the dark matter simulation used.
THE OCCUPANCY VARIATION EVOLUTION
In this section we show the evolution of the occupancy variation in the SAM, i.e., how the dependence of the HOD on a secondary halo property varies with time. This may provide more insight into the nature and origin of this phenomenon.
Z18 showed that in SAMs, when selecting galaxies at z = 0 by their stellar mass, the predicted HOD depends on halo formation time as well as halo mass. They found that haloes with early formation times tend to start being populated by central galaxies (the main galaxy of a dark matter halo) at lower masses than those with late formation times, but they have a lower number of satellites. Artale et al. (2018) show that this is also the case in hydrodynamic simulations. We find that the above results also hold for other redshifts. This is shown, for example, in Fig. 5, where we plot the HOD at z = 1 for n = 0.01 h³ Mpc⁻³ for galaxies ranked by their stellar mass. The occupancy variation for both central and satellite galaxies is clearly evident. Fig. 6 shows the HOD for z = 0 (top) and z = 3 (bottom) for the same sample selection. In the left panels of Fig. 6 the lines represent the contribution from the 20 per cent of haloes with the earliest (red) and latest (blue) formation times, while the right panels show the contribution from the haloes with the 20 per cent highest and lowest concentrations. We remind the reader that these halo subsamples are constructed by selecting the haloes in narrow bins of halo mass. At z = 0, the predictions for the high (low) concentration samples are similar to those with early (late) formation times. This is consistent with what we found in Section 3, that the behaviour of these samples in terms of clustering is similar at z = 0, but now extended to the halo occupation with galaxies. This similarity is no longer present at z = 3. At this redshift, the occupancy variation for haloes selected by their age is qualitatively similar to that at z = 0 (and at z = 1; see Fig. 5). For haloes selected by concentration, the occupancy variation decreases somewhat for the satellite galaxies and it almost disappears for the central galaxies as we go to z = 3. These trends also hold for other number density samples. We repeat the above analysis for galaxies selected by SFR in Fig. 7. The overall shape of the HOD at z = 0 for SFR-selected galaxies is different than for galaxies selected by stellar mass due to the tendency of high-mass halos to host non-star-forming (red) central galaxies, as discussed in Section 2.4. Interestingly, the "dip" feature diminishes as one goes toward higher redshifts, possibly due to having less time for quenching mechanisms to occur. By z = 3 the overall shape of the HOD, and in particular the central galaxies contribution, is very similar for the SFR-selected galaxy samples and the stellar mass selected samples (see also Orsi et al. 2008). We have verified that the transition in the shape of the HOD between z = 0 and z = 3 is smooth with increasing redshift. A large set of HOD measurements for different redshifts and number densities is being released with this paper (see Appendix A for more details). At z = 0, for the SFR-selected galaxy samples, early forming and high concentration haloes have a lower number of satellite galaxies compared to haloes with late formation times or low concentrations (same as for galaxies selected by stellar mass). For the central galaxies, at low halo masses, early forming and high concentration haloes have a larger number of central galaxies, while for higher halo masses they have a lower number of central galaxies compared to haloes with late formation times or low concentrations.
The latter trend perhaps arises since the central galaxies in the early-formed high-mass halos have more time to be impacted by star formation quenching. At z = 3, the HODs for galaxies selected by SFR display the same trends as those for galaxies selected by stellar mass. The occupancy variation (i.e. the difference between the red and blue lines) stays roughly constant for haloes selected by age. The occupancy variation with halo concentration decreases with redshift for the HOD of the satellites and nearly diminishes for that of the central galaxies.
The full redshift evolution of the occupancy variation is captured in Fig. 8 and Fig. 9, where we show the ratios of the HODs (as in the bottom subpanels of Figs 5-7) for haloes selected by age and concentration at all redshifts explored, for galaxies selected by stellar mass and SFR, respectively. Here we corroborate that, for galaxies selected by stellar mass, the magnitude of the central galaxies occupancy variation is constant with redshift for haloes selected by age, and it significantly decreases with increasing redshift (nearly vanishing by z = 3) for haloes selected by concentration. The occupancy variation for the satellites part of the HOD progressively decreases for both age and concentration. The overall shift of the ratios toward lower halo mass with increasing redshift reflects the expected redshift evolution of the HOD (as studied for example by Contreras et al. 2017; see their Fig. 5). For SFR selected galaxies (Fig. 9), the occupancy variations decrease for both age and concentration, with a more pronounced trend for the latter.
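The occupancy-variation ratios plotted in Figs 8 and 9 can be estimated with only a few lines of code. The Python sketch below assumes a halo catalogue with arrays of halo masses, a secondary halo property (formation redshift or concentration) and the number of sample galaxies hosted by each halo; these array names are placeholders for illustration and are not part of the released data products.

```python
import numpy as np

def occupancy_variation(halo_mass, secondary, n_gal, n_bins=30, frac=0.2):
    """Mean occupation <N|M> for all haloes and for the top/bottom 20 per cent
    of a secondary halo property, measured in narrow bins of halo mass."""
    log_m = np.log10(halo_mass)
    edges = np.linspace(log_m.min(), log_m.max(), n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    full = np.full(n_bins, np.nan)
    high = np.full(n_bins, np.nan)
    low = np.full(n_bins, np.nan)
    for i in range(n_bins):
        sel = (log_m >= edges[i]) & (log_m < edges[i + 1])
        if sel.sum() < 25:                 # skip poorly sampled mass bins
            continue
        s, n = secondary[sel], n_gal[sel]
        lo_cut, hi_cut = np.quantile(s, [frac, 1.0 - frac])
        full[i] = n.mean()
        high[i] = n[s >= hi_cut].mean()    # e.g. earliest-formed / most concentrated
        low[i] = n[s <= lo_cut].mean()     # e.g. latest-formed / least concentrated
    # ratios relative to the full HOD, as in the bottom sub-panels of Figs 5-7
    return centres, full, high / full, low / full
```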
The different evolution of the occupancy variation with age and concentration indicates a different origin for these two effects. Even though they appear similar at z = 0, they evolve differently. We will further investigate their nature and origin in future work (Zehavi et al., in prep.). It is interesting to note that the evolution of the occupancy variation shows different trends compared to the evolution of the halo assembly bias found in Section 3: there, the signal decreased with redshift for the halo samples selected by age but not for those selected by concentration, whereas for the occupancy variation it is the concentration-selected signal that weakens. Both effects will influence the evolution of the galaxy assembly bias signal, as we will now show.
THE EVOLUTION OF GALAXY ASSEMBLY BIAS
In this section we show the effect of assembly bias on the galaxy correlation function at different redshifts. As we did in Section 3, we measure the auto CF for the full galaxy sample as well as the cross CF of the full sample with the given subsample (e.g. early/late formed haloes).
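For reference, the correlation functions used throughout this section can be estimated from a simple pair count in the periodic simulation box. The sketch below (Python, assuming positions are N x 3 arrays in box units) uses the natural estimator with an analytic random-pair term; it is an illustrative brute-force version, not the production code used for the measurements in this paper.

```python
import numpy as np

def cross_cf_periodic(pos_a, pos_b, box, r_edges):
    """Cross-correlation of sample b with sample a in a periodic box,
    xi(r) = D_a D_b / RR - 1, with RR computed analytically."""
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                          # minimum-image convention
    r = np.sqrt((d ** 2).sum(axis=-1)).ravel()
    dd, _ = np.histogram(r, bins=r_edges)
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    rr = len(pos_a) * len(pos_b) * shell_vol / box ** 3   # expected random pairs
    return dd / rr - 1.0

# the auto CF of the full sample is the cross CF of the sample with itself;
# self-pairs at r = 0 are excluded as long as r_edges[0] > 0
```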
To study the impact of assembly bias on the CF we shuffle galaxies among haloes of the same mass, following the approach of Croton, Gao & White (2007) and Z18. This consists of taking all haloes in a given bin of halo mass (0.1 dex wide in our case; we also tested using a bin width of 0.05 and 0.2 dex and found no major difference in our results) and randomly reassigning the galaxy population between these haloes. Central galaxies are located at the position of the central galaxies they replace (except if there is no galaxy in a halo; in which case the new galaxy is located at the potential minimum of the halo). The satellite galaxies are moved together with their original central galaxy and retain the same relative positions to it in their new halo. The shuffling removes any potential connection to the assembly history of the halos, and effectively transforms the HOD of any halo subsample (e.g., for a range of halo formation times or concentrations) to be the same as the total HOD (making, e.g., the red and blue lines of Fig. 5 be the same as the black line). This new galaxy sample will have, by construction, no occupancy variation.
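A minimal version of this shuffling step is sketched below in Python. It only handles the bookkeeping of which halo hosts which galaxy population, mapping each halo to a random halo of nearly equal mass; repositioning the centrals at the new halo centre and carrying the satellites along with their relative offsets, as described above, is left out. The array names are placeholders.

```python
import numpy as np

def shuffle_hosts(halo_logmass, gal_host_index, dlogm=0.1, seed=0):
    """Randomly permute the galaxy content of haloes within 0.1 dex mass bins.
    Returns, for each galaxy, the index of its new host halo; the whole galaxy
    population of a halo moves together, so centrals keep their satellites."""
    rng = np.random.default_rng(seed)
    bins = np.floor((halo_logmass - halo_logmass.min()) / dlogm).astype(int)
    new_host_of_halo = np.arange(len(halo_logmass))
    for b in np.unique(bins):
        members = np.where(bins == b)[0]
        new_host_of_halo[members] = rng.permutation(members)
    return new_host_of_halo[gal_host_index]
```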
The CFs for a stellar mass selected sample with number density n = 0.01 h^3 Mpc^-3 at z = 1 are shown in the top panel of Fig. 10. The auto CF of the full sample is shown in black, and the red and blue lines are the cross CF for the 20 per cent earliest and latest forming haloes, respectively. The dashed lines show the CF for the shuffled samples. The shaded region and errorbars represent the jackknife errors calculated using 10 realisations for the real and the shuffled samples. The middle panel shows the ratios between the cross CF of the subsamples and the auto CF of the full sample, for both the original (solid) and shuffled (dashed) galaxy samples. The bottom panel shows the ratios between the different CFs measured for the original (unshuffled) galaxy samples and the corresponding shuffled ones.
A value above unity for the black line in the bottom panel of Fig. 10 indicates that the original sample has a larger CF than that measured for the shuffled sample. These differences are the manifestation of galaxy assembly bias (Croton, Gao & White 2007). As explained in Z18, this arises from the combined effect of the occupancy variation and halo assembly bias. The central galaxies occupancy variation indicates a preferential occupancy of early-formed halos. These halos are more clustered, thus leading to a stronger clustering signal on large scales. The significant clustering differences on small scales come about from the satellites occupancy variation, where the increased number of satellites in late-forming halos gives rise to a stronger clustering in the 1-halo regime.
It is interesting to note that the clustering of galaxies in the late-forming haloes is stronger on large scales than that for galaxies in early-formed haloes, as can be seen in the middle panel of Fig. 10. This is the opposite of the results found by Z18 at z = 0, implying that this trend evolves with redshift. This again arises from the inter-related effects of halo assembly bias and the occupancy variation. The dashed lines in the middle panel correspond to the shuffled galaxy samples, and reflect the same halo assembly bias trends seen in Fig. 4. The central occupancy variation acts to slightly increase this ratio for the galaxies in late-forming haloes and decrease it for the galaxies in early-formed haloes (see the corresponding discussion in Z18, specifically their §5.3), thus likely resulting in the reversed clustering trend seen.
The evolution of this ratio is presented in Fig. 11. The left panels show the ratio between the CF measured for galaxies in haloes with the 20 per cent earliest/latest formation times and the CF of the full sample, for z = 0, 1, 2 and 3. The panels on the right show the same but for haloes selected by concentration. Solid lines show the original SAM galaxies and dashed lines show the shuffled sample results (i.e., with no occupancy variation). For the shuffled samples, we can see that the difference between the CFs of galaxies in the earliest/latest-forming haloes decreases with increasing redshift, while the difference for haloes with the highest/lowest concentrations is reversed, with increasingly stronger clustering found for the haloes with a lower concentration. These trends are consistent with the evolution of the halo assembly bias signal shown in Section 3, where we found a decrease (flip) of the difference in clustering on large scales for haloes selected by age (concentration). At high redshift, the differences in the clustering of the SAM galaxies in early and late formed haloes come mostly from the occupancy variation. This is opposite to the situation of galaxies that live in high and low concentration haloes, where the differences in their clustering come mostly from halo assembly bias.
For the galaxy population predicted by the SAM, galaxies in haloes with late formation times or low concentrations show stronger large-scale clustering at higher redshifts. For the concentration case, the clustering signal becomes identical to that measured for the shuffled galaxies. This is expected since, as shown in Section 4, the occupancy variation of haloes with concentration decreases strongly with increasing redshift. For galaxies in haloes selected by formation time, the clustering of the galaxies in the latest-forming haloes is stronger than for the shuffled galaxies, while for the galaxies in the earliest-forming haloes it is lower than for the shuffled case. This is again consistent with what we found in Section 4, with the occupancy variation with age persisting to higher redshifts, with (at each redshift) late (early) formation effectively shifting the occupation toward higher (lower) halo masses, thus changing the clustering.

Figure 8. Same as the bottom sub-panels of Fig. 5, but now combined for z = 0, 0.5, 1, 1.5, 2, 2.5 and 3. The top panels present the central galaxies contribution and the bottom panels the satellites contribution. The two left panels show, as solid (dotted) lines, the ratio between the HOD of the 20 per cent earliest (latest) formation time haloes and the full HOD, and the right panels show the ratio of the HOD of the 20 per cent highest (lowest) concentration haloes and the full HOD, for galaxies selected by their stellar mass.
On small scales, the dashed lines in Fig. 11 are identical for both age and concentration, since there is no halo assembly bias in that regime (1-halo scales). The SAM galaxies in late-forming haloes are more correlated than those in early-forming haloes on small scales, at all redshifts. This is due to the increased number of satellites in late- versus early-forming haloes, which persists at all redshifts (as seen in Fig. 8). Galaxies selected by halo concentration exhibit a similar behaviour at small-to-intermediate scales: galaxies in low concentration haloes are more clustered than those in high concentration ones. On very small scales (below ∼ 0.1 h^-1 Mpc), though, this trend flips. One might have expected the same small-scale behaviour with concentration due to the similar satellite occupancy variation. However, the concentration differences impact the clustering as well: for the low-concentration sample, even though it contains more satellite galaxies, these satellites are likely less centrally concentrated (since they trace the dark matter distribution) and, as a consequence, less clustered on very small scales.

In Fig. 12 we show, for completeness, the corresponding evolution of the CFs for galaxies selected by their SFR. We obtain the same trends found for galaxies selected by stellar mass. We also analysed other number density samples and found similar results for the evolution of the galaxy CFs.
Finally, we consider the evolution of the galaxy assembly bias signal. As previously mentioned, galaxy assembly bias is quantified in terms of the ratio between the CF of a galaxy sample and that of a shuffled sample, where the relation to halo assembly has been erased, as shown by the black line in the bottom panel of Fig. 10. Fig. 13 presents our measurements for three different number densities (n = 0.01, 0.00316 and 0.001 h^3 Mpc^-3) over a range of redshifts, for galaxies selected by stellar mass and SFR. We find that this clustering ratio generally decreases for higher redshifts, and for lower number densities. Interestingly, this decrease can be large enough in some of these cases that the original sample becomes less clustered than the corresponding shuffled sample, so that the clustering difference changes sign and continues growing in magnitude. This typically occurs for lower number densities and at high redshifts. This ratio is overall lower for the galaxy samples selected by their SFR rather than stellar mass. Nonetheless, the trends with redshift and number density persist for these SFR selected samples. Our results are in qualitative agreement with those found by Hearin et al. (2016; their Fig. 8) over the limited redshift range they explore (0 < z < 1). However, we note that Hearin et al. compare samples with the same stellar mass thresholds at the different redshifts, not accounting for any stellar mass evolution. Effectively, this amounts to probing more massive galaxies (lower number densities) at higher redshifts, and thus it is impossible to separate the evolution they find from the expected number density dependence.
Again, the impact of assembly bias on galaxy clustering arises from the combined contributions of the occupancy variation and halo assembly bias. At relatively low redshift for the stellar mass selected samples, these typically combine to produce an increased clustering (see the discussion following Fig. 11 and in Z18). For example, using our results in the previous sections for n = 0.01h 3 Mpc −3 , we can explain the behaviour exhibited in the top-left panel of Fig. 13. As we saw earlier for the halo age case, the level of halo assembly bias decreases while the level of the central galaxies occupancy variation remains similar, leading to a diminishing galaxy assembly bias effect. For lower number densities at high redshifts, halo assembly bias reverses sense (in a similar manner to concentration) such that early-formed halos become less clustered than the late-formed ones, and this gives rise to the reversed sense of galaxy assembly bias in those cases.
One could have a priori envisioned a scenario in which the stochasticity involved in the galaxy formation processes would serve to weaken galaxy assembly bias over time. Alternatively, one might have expected the signature to grow with time (i.e., diminish as one goes to higher redshift), due to the hierarchical growth of structure. However, it seems that the evolution of assembly bias is far more intricate. The overall trend we find is that the CFs ratio monotonically increases with time (or decreases with increasing redshift; Fig. 13). This leads to a change in the sign of the effect, i.e., whether the clustering of the galaxy sample is stronger or weaker due to assembly bias effects, as well as a shift in whether the magnitude of this clustering difference decreases or increases with time. This gets more complex as the amplitude of the clustering ratio varies with the specifics of the galaxy selection (e.g., stellar mass or SFR) and number density, and thus it is non-trivial to predict which galaxy sample would show negligible or extreme assembly bias properties and at which redshift.
SUMMARY & CONCLUSIONS
We use a state-of-the-art semi-analytic model of galaxy formation, the G13 SAM, to study the origin and evolution of assembly bias in the galaxy distribution. We identify two separate contributions to this effect: halo assembly bias, which refers to the different clustering of haloes with different values of a secondary property, and occupancy variation, the dependence of the number of galaxies in haloes of the same mass on a second property of the haloes. We isolate the evolution of these two effects for haloes selected by their concentration and formation redshift, two of the most common secondary properties used to measure assembly bias. The galaxy samples correspond to different number densities based on either ranked stellar mass or SFR. Our key results are shown in Figures 4, 8 and 13. We now summarise our main findings:

• At z = 0 the concentration of dark matter haloes correlates with formation time. This correlation weakens at higher redshifts.
• Haloes at z = 0 with high concentrations or early formation times are more clustered than those with low concentrations or late formation times. At high redshift, there are no differences in the CF measured for haloes with different formation times, but low concentration haloes are more correlated than high concentration ones.

Figure 12. The same as Fig. 11, but now for galaxies selected by their SFR instead of stellar mass.
• Haloes ranked to have an extreme concentration or formation time at a given redshift do not necessarily have the same ranking at other redshifts. We found that the main progenitors of z = 0 haloes display clustering similar to that measured for their descendants. This means that the evolution of the halo assembly bias signal is not caused because a set of haloes (e.g., high concentration haloes) change their clustering over time, but because haloes change their ranking in terms of a secondary property.
• At z = 0, haloes with early formation times or high concentrations are populated by galaxies starting at lower halo masses (for a fixed cut in stellar mass) but they have fewer satellite galaxies for a fixed mass compared to haloes with late formation times or low concentrations.
• For galaxies selected by SFR we generally find similar occupancy variation trends to those found for galaxies selected by stellar mass (though with a different shape of the HOD). Haloes with early formation times or high concentrations are first populated by galaxies at a lower mass and have fewer satellite galaxies at a given mass compared to haloes with late formation times or low concentrations. The one difference is that at higher halo masses, where the central galaxies occupation drops, there are fewer centrals in haloes with early formation times or high concentrations than in those with either late formation times or low concentrations.
• The occupancy variation for central galaxies in haloes with different formation times stays roughly constant as a function of redshift for a fixed galaxy number density and for galaxies selected by either stellar mass or SFR. The corresponding satellite galaxies occupancy variation decreases somewhat with increasing redshift.
• The occupancy variation for galaxies in haloes with different concentrations diminishes for the central galaxies and satellites with increasing redshift, for both stellar mass or SFR selected galaxy samples.
• The evolution of the CF of galaxy samples without occupancy variation (i.e., the shuffled samples) reflects the same trends on large scales as the evolution of halo assembly bias for haloes selected by age or concentration; the CF differences for galaxies in haloes with early and late formation times decreases with look back time, while the CF of galaxies in low-concentration haloes increases relative to the CF of galaxies in high-concentration haloes when going to higher redshifts.
• The CF of galaxies hosted by haloes with late formation times or low concentration increases relative to the CF of galaxies in haloes with early formation times or high concentrations, respectively, with increasing redshift.
• The occupancy variation tends to increase the amplitude of the CF of galaxies that live in haloes with either late formation times or low concentrations, and decrease it for galaxies that live in haloes with early formation times or high concentrations.
• Galaxy assembly bias as measured by the ratio between the CF of the model galaxies and that of the shuffled galaxies decreases with redshift, going below 1 in some cases. This CFs ratio is generally smaller for lower number densities and for SFR-selected samples.
The different evolution of halo assembly bias and the occupancy variation with age and concentration likely points to a different origin for the dependence on these two secondary parameters. This is further corroborated by their lack of correlation at high redshift. In general, we find similar trends in the evolution of assembly bias, for both the occupancy variation and galaxy assembly bias, for galaxies selected by SFR versus stellar mass. This is quite impressive considering that galaxy samples selected by stellar mass and by SFR exhibit quite different behaviours in the SAMs (Contreras et al. 2013, 2015), and may be relevant for upcoming surveys. The results shown here will help to inform theoretical models of assembly bias and the development of observational tests to detect its existence (or absence) in the Universe. They can also be used to construct improved mock galaxy catalogues incorporating assembly bias (as standard HOD mocks do not include this effect). For these purposes we are releasing all the HODs and occupancy variation measures obtained in this work, as well as parametrised fits for them (see Appendix A for more details).

ACKNOWLEDGEMENTS

award (ST/M007995/1 - DPI20140114) and Anillo ACT-1417. SC is also supported by the European Research Council through grant ERC-StG/716151. IZ acknowledges support by NSF grant AST-1612085 and by a CWRU Faculty Seed Grant. NP & EJ are further supported by "Centro de Astronomía y Tecnologías Afines" BASAL PFB-06 and by Fondecyt Regular 1150300. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 734374. The calculations for this paper were performed on the ICC Cosmology Machine, which is part of the DiRAC-2 Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and Durham University, and on the Geryon computer at the Center for Astro-Engineering UC, part of the BASAL PFB-06, which received additional funding from QUIMAL 130008 and Fondequip AIC-57 for upgrades.
|
2019-01-06T16:59:29.031Z
|
2018-08-08T00:00:00.000
|
{
"year": 2019,
"sha1": "85cff9237a42cb51f0926ba5aa7ecc3c00aa521c",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/484/1/1133/27577405/stz018.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6fab78d71889438d0a3d1ec2c2cdc409027e94f4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
199673596
|
pes2o/s2orc
|
v3-fos-license
|
Distributionally robust chance constrained problems under general moments information
In this paper, we focus on distributionally robust chance constrained problems (DRCCPs) under general moments information sets. By convex analysis, we obtain an equivalent convex programming form for DRCCP under assumptions that the first and second order moments belong to corresponding convex and compact sets respectively. We give some examples of support functions about matrix sets to show the tractability of the equivalent convex programming and obtain the closed form solution for the worst case VaR optimization problem. Then, we present an equivalent convex programming form for DRCCP under assumptions that the first order moment set and the support subsets are convex and compact. We also give an equivalent form for distributionally robust nonlinear chance constrained problem under assumptions that the first order moment set and the support set are convex and compact. Moreover, we provide illustrative examples to show our results.
Distributionally robust chance constrained programming, which can be viewed as a combination of chance constrained programming and distributionally robust optimization, has become a significant and effective approach to address practical optimization problems involving uncertainty, and it has developed considerably in both theory and applications [5,13,14,27,31,35,36,37]. In this paradigm, the distribution of the uncertain parameter is not precisely known and belongs to a given information set, and the optimal solution is required to be feasible for all realizations of the uncertain distribution. DRCCP provides a computationally viable methodology for immunizing chance constrained programming against distribution uncertainty.
Most distributionally robust chance constrained methods are developed with the purpose of achieving a computationally tractable model for the corresponding DRCCPs, in which how to describe the distribution set is an important issue. Recently, Calafiore and El Ghaoui [5] showed that a distributionally robust individual chance constrained problem could be rewritten as a second-order cone problem under exact second order moment information. Zymler et al. [36] obtained a semidefinite programming formulation for a distributionally robust joint chance constraint under exact second order moment information. Yang and Xu [34] proved that DRCCP for nonlinear uncertainties under exact second order moment information is equivalent to a robust optimization problem. Yet, most previous results on the tractability of DRCCPs are restricted to the case where the distribution information is described by exact second order moments. As pointed out by Delage and Ye [6], it is not reliable to treat empirical moments as exact moments. To guard against this, they replaced the exact moments by an ambiguous moment information set and discussed the distributionally robust optimization problem (DROP) under ellipsoidal moment information. Moreover, Natarajan et al. [17] considered a distributionally robust expected utility problem involving an interval moment uncertainty set. Wiesemann et al. [30] discussed DROPs by describing the distribution information set with a family of support subsets given by conic representable confidence sets and a first order moment residing on an affine manifold.
On the other hand, Ding et al. [7] discussed DRCCP under interval moments uncertainty set and obtained its equivalent convex programming. Zhang et al. [35] considered a distributionally robust chance constrained appointment scheduling problem under ellipsoid moments uncertainty set and derived an approximate semidefinite programming model. Wang et al. [28] showed a distributionally robust chance constrained support vector machines problem could be transformed into a second order cone problem under the exact second order moment information, and discussed the problem under interval first order moment uncertainty, ellipsoid first order moment uncertainty and second order moment uncertainty controlled by F -norm, respectively.
We note that, in the research works mentioned above, DROPs and DRCCPs have been discussed under various uncertainty moments sets such as ellipsoidal moments sets, interval moments sets and so on. Clearly, the ellipsoidal moments sets and the interval moments sets are all convex and compact. Therefore, one natural question is: can we get the equivalent form for DRCCPs involving some convex and compact information sets?
The main purpose of this paper is to make a new attempt to study DRCCPs. The rest of this paper is organized as follows. In Section 2, we obtain the convex programming form for DRCCP under the assumptions that the first and second order moments belong to corresponding convex and compact sets, respectively. We also present some computationally tractable examples for the support functions of matrix sets and for the equivalent convex programming. Moreover, we apply one result to the worst case VaR optimization problem and obtain its closed form solution. To avoid the computation of support functions of matrix sets, in Section 3 we show the convex programming form for DRCCP under the assumptions that the information set of the first order moment and the support subsets are convex and compact. In Section 4, for the distributionally robust nonlinear chance constrained problem in which the first order moment set and the support set are convex and compact, we obtain the equivalent form and show an example in which the equivalent form is convex. Finally, we summarize the results in Section 5.
2. DRCCP under the general first and second order moment information.
In this section, we consider the following DRCCP (1), where x is the decision vector, χ ⊂ R^n is a convex and compact set, c ∈ R^n is a deterministic cost vector, ξ ∈ R^k is a random vector, and w_0(x), w(x) are two affine functions. Let P denote the distribution information set of the involved probability distributions, which includes the true distribution P. By employing model (1), Ghaoui et al. [8] discussed the worst case VaR problem and presented several convex reformulations, Zymler et al. [37] considered two types of worst case VaR of nonlinear portfolios and developed tractable representations, and Wang et al. [28] showed that a distributionally robust chance constrained support vector machines problem could be transformed into a second order cone problem.

Based on limited historical data, it is hard to obtain exact distribution information. In this section, we suppose the ambiguity information set has the form P_1, where P denotes the set of all probability measures on the measurable space (R^k, B), with B being the Borel σ-algebra on R^k. Denote by S^k the space of symmetric matrices of dimension k. Generally speaking, the exact first (respectively, second) order moment μ (respectively, Σ) is not far away from the empirical first (respectively, second) order moment μ_0 (respectively, Σ_0). Thus, we can assume that the deviations from the empirical moments lie in the sets given in (2), where A, Ξ ∈ S^k, ζ ∈ R^k, and, for i = 1, 2, U_i is a given nonempty, convex and compact set with 0 ∈ U_i. The magnitude of the sets U_1 and U_2 reflects the confidence of the decision maker in μ_0 and Σ_0, respectively. Some examples of U_i are as follows:

(i) Choices of U_1 and U_2 that correspond exactly to the ellipsoidal set (μ − μ_0)^T Σ_0^{-1} (μ − μ_0) ≤ λ_1 and the LMI Σ ⪯ λ_2 Σ_0 mentioned in [6], respectively;

(ii) Let U_1 = {ζ | μ̲ − μ_0 ≤ ζ ≤ μ̄ − μ_0} with A being the unit matrix, together with a corresponding choice of U_2. These are exactly the interval sets μ̲ ≤ μ ≤ μ̄ and 0 ⪯ Σ̲ ⪯ Σ ⪯ Σ̄ mentioned in [7,17], respectively.

It is worth mentioning that the distributionally robust individual chance constrained problem discussed in [7] is a special case of the DRCCP in this paper. Next, we derive a tractable reformulation of DRCCP (1) under the distribution information set P_1. Denote the support function of a set U ⊂ R^k by δ*(x | U) = sup_{u ∈ U} u^T x. Similarly, denote the support function of a set S ⊂ S^k by δ*(X | S) = sup_{Y ∈ S} ⟨Y, X⟩, where ⟨Y, X⟩ = tr(Y X) is the trace scalar product.
Theorem 2.2. DRCCP (1) under the information set P 1 is the following convex programming Proof. Firstly, we rewrite problem (1) as the following equivalent form By constructing the information sets U = {(µ, Σ) : µ ∈ U µ , Σ ∈ U Σ } and the chance constraint in (4) can be reformulated as follows: Let 1 S denote the indicator function of the set Letting y = w(x) T ξ, similar to the discussion in the proof of Theorem 2.2 in [7], by employing the strong duality theorem of Ihii [10] and Lemma 2.1, we can reformulate the problem as the following form where λ 0 , λ 1 , λ 2 ∈ R are the dual variables for the constraints. When λ 2 ≤ 0, the feasible region is empty. Thus we get λ 2 > 0. Similar to the proof of Theorem 2.1 in [17], by taking y * = −λ1 2λ2 in the first constraint and y * = η−λ1 2λ2 in the second constraint and using the following change of variables Then, problem (6) can be rewritten as follows: In what follows, we prove that problem (8) is equivalent to the following one: For fixed x, suppose that (µ * , Σ * , p * , t * , z * , s * , η * ) is the optimal solution of (8). We also assume that, for some (µ 1 , Σ 1 ) ∈ U, at least one of the constraints is violated for (p * , t * , z * , s * , η * ), which means one of the following inequalities holds Suppose the first inequality holds. Then we can obtain which is a contradiction with the fact that (µ * , Σ * , p * , t * , z * , s * , h * ) is the optimal solution of (8). Thus, we have Similarly, we can prove In conclusion, we obtain that problem (8) is equivalent to problem (9). Then, problem (4) can be reformulated as By the definition of support function, the above problem can be reformulated as Clearly, the third constraint of the above system does not hold when η = 0, which implies that η > 0. From the first and second inequalities of the above problem, we have Then, (5) can be simplified as For investigating the convexity of problem (4), we introduce a new variable v and so (10) can be simplified as follows: It is well known that x T Σx is convex in x when Σ 0. This fact shows that, for any θ ∈ [0, 1] and x 1 , x 2 ∈ R n , we have . From the above discussion, we get the conclusion.
2.1. Examples. Theorem 2.2 shows that DRCCP (1) under the information set P_1 can be reformulated as a convex programming. It follows from (3) that the key issue for the computational tractability of the convex programming is the computation of the support functions of the vector set and the matrix set. Concerning computationally tractable support functions of vector sets, we refer the reader to [3]. For the matrix Ξ, if we define U_2 through a vector set U(vec(Ξ)), then we can deal with the support function of the matrix set by employing the method of [3]. Nevertheless, it is necessary to consider the support function of the matrix set directly, since the structure of the matrix is destroyed if we transform the matrix into vector form. Therefore, we next show some examples for the support functions of matrix sets, in which the term δ*(w(x)w(x)^T | U_2) can be reformulated in computationally tractable forms.
Example 2.1. Let U_2 = {Ξ ∈ S^k | Θ_j ⪯ Ξ ⪯ Ξ_j, j = 1, 2, · · · , J}, where Θ_j ⪰ 0 and Ξ_j ⪰ 0 are given matrices. By the Schur complement, the support function of U_2 is given as follows:

Example 2.2. Let U_2 be defined in terms of parameters τ > 0 and D ⪰ 0. By using the inequality, the support function of U_2 can be represented as follows: If D is the unit matrix, then U_2 = {Ξ ∈ S^k | ‖Ξ‖_F ≤ √τ} and the corresponding support function reduces to √τ ‖w(x)‖_2^2.

Example 2.3. Let U_2 = {Ξ ∈ S^k | ‖Ξ‖_σp ≤ τ} with τ > 0, where the Schatten norm ‖·‖_σp is defined in [4]. Here A ∈ S^k and σ_i(A) is the absolute value of the i-th largest eigenvalue of A, which has real eigenvalues. Let A, B ∈ S^k and p, q ∈ [1, +∞] satisfy 1/p + 1/q = 1. Then it is easy to see that tr(A^T B) ≤ ‖A‖_σp ‖B‖_σq. Thus, the support function can be given as follows:

Example 2.4. Let U_2 = {Ξ ∈ S^k | tr(C_j Ξ) ≤ b_j, Ξ ⪰ 0, j = 1, 2, · · · , J}, where C_j ∈ S^k and b_j ∈ R are given for j = 1, 2, · · · , J. For the support function of U_2, we have the following, where β_j ∈ R, D_j ∈ S^k, for j = 1, 2, · · · , J, and Ξ_0 ⪰ 0 are given. Then the support function of U_2 can be given as follows:

In the above examples, we derive some computationally tractable support functions of matrix sets. Next, we show two computationally tractable examples for problem (3).
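As a quick numerical check of the simplest of these formulas (the D = I case of Example 2.2, where the support function of the Frobenius ball evaluated at w(x)w(x)^T equals √τ ‖w(x)‖_2^2), the following Python sketch compares the closed form against the value attained by the maximising matrix and against random feasible matrices. The numbers are arbitrary illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
k, tau = 5, 2.0
w = rng.normal(size=k)
W = np.outer(w, w)                                   # the matrix w(x) w(x)^T

closed_form = np.sqrt(tau) * np.dot(w, w)            # sqrt(tau) * ||w||_2^2

# maximiser of tr(Xi W) over {||Xi||_F <= sqrt(tau)} is sqrt(tau) * W / ||W||_F
xi_star = np.sqrt(tau) * W / np.linalg.norm(W, "fro")
attained = np.trace(xi_star @ W)

# no random feasible symmetric matrix should exceed the closed form
vals = []
for _ in range(1000):
    A = rng.normal(size=(k, k))
    A = 0.5 * (A + A.T)
    A *= np.sqrt(tau) / np.linalg.norm(A, "fro")
    vals.append(np.trace(A @ W))

assert np.isclose(attained, closed_form)
assert max(vals) <= closed_form + 1e-9
```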
2.2. An application to risk management.
Recently, Value-at-Risk (VaR) has become one of the most popular risk measures in risk management. The VaR can be defined as in (11), where x denotes the vector of asset weights and ξ denotes the random vector of relative asset returns. When the distribution of returns is the normal distribution N(μ, Σ), the VaR can be expressed as in (12), where κ(ε) = −Φ^{-1}(ε) and Φ^{-1}(·) is the inverse cumulative standard normal distribution. In practice, the exact distribution is not easy to acquire from limited historical data. In 2003, Ghaoui et al. [8] presented the exact formulation for (11) by setting κ(ε) = √((1 − ε)/ε) under first and second order moment information.
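To make the two multipliers concrete, the short Python sketch below evaluates the normal-distribution VaR and the distribution-free (worst case) VaR of a toy two-asset portfolio, using κ(ε) = −Φ^{-1}(ε) in the Gaussian case and κ(ε) = √((1 − ε)/ε) in the moment-based worst case. The portfolio data are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([0.05, 0.08])            # illustrative mean returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])       # illustrative covariance matrix
x = np.array([0.6, 0.4])               # portfolio weights
eps = 0.05

sig = np.sqrt(x @ Sigma @ x)
var_normal = -norm.ppf(eps) * sig - mu @ x              # kappa ~= 1.645
var_worst = np.sqrt((1 - eps) / eps) * sig - mu @ x     # kappa ~= 4.359

print(var_normal, var_worst)   # the worst-case multiplier is far more conservative
```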
But the optimal portfolio strategy may become infeasible when we treat the empirical moments as the exact moments. In this subsection, we investigate the VaR (11) under the information set P_1. Let w_0(x) = −γ and w(x) = −x. From Theorem 2.2, we obtain (13). It is worth mentioning that (13) degenerates to (12) with κ(ε) = √((1 − ε)/ε) when we set U_1 = 0 and U_2 = 0 (no uncertainty about μ and Σ). Next we give two examples to show how to use (13) to compute the VaR.
where Σ_2 ⪰ 0 and C ∈ R^{k×k} with c ∈ R^k. Then, the support function can be rewritten as follows: Thus, we have

3. DRCCP under the general first order moment and support subsets information.

In Section 2, we obtained the equivalent convex programming for DRCCP under the information set P_1. We notice that the term δ*(w(x)w(x)^T | U_2) in the equivalent convex programming has only a few tractable forms, which limits the practical applications of this model. In 2014, Wiesemann et al. [30] discussed distributionally robust programming under conic representable distribution sets with support and first order moment information. In the WKS-type ambiguity set, the authors used interval probabilities of support subsets, instead of the variance, to describe the deviation. Many stochastic optimization problems have been studied under the WKS-type ambiguity set [11,12]. In this section, we consider the following information set P_2, where X_0, X_j are convex and compact sets, U_μ is defined in (2), and J = {1, 2, · · · , J}. Suppose that p̲_j, p̄_j ∈ [0, 1] and p̲_j ≤ p̄_j for all j ∈ J. We remark that the information set P_2, in which the first order moment set and the support subsets are assumed to be convex and compact, is a generalization of the conic representable information set considered by Wiesemann et al. [30]. We also suppose that P_2 satisfies the following conditions:

(C1) There exists a distribution P ∈ P_2 such that P(ξ ∈ X_j) ∈ (p̲_j, p̄_j) whenever p̲_j < p̄_j, for j ∈ J.

(C2) X_J is bounded and contains almost all realizations of the random variable, i.e., P(ξ ∈ X_J) = 1.

(C3) For all j, j′ ∈ J with j ≠ j′, either X_j ⊆ X_{j′}, X_{j′} ⊆ X_j, or X_j ∩ X_{j′} = ∅.
Condition (C1) ensures that we can apply the strong duality theorem in [21] to reformulate DRCCP (1). Condition (C2) states that X_J contains the support information of the random variable ξ. Condition (C3) imposes a strict partial order on the confidence sets X_j with respect to the ⊆-relation and requires that incomparable sets be disjoint. Next, we derive the equivalent form for DRCCP (1) under the information set P_2 which satisfies conditions (C1)-(C3).
Theorem 3.1. DRCCP (1) under the information set P 2 can be rewritten as the following convex programming where x ∈ R n , t ∈ R and β, λ j , γ j ∈ R k are decision variables.
Proof. By constructing the information set the chance constraint in (1) can be reformulated as follows where 1 S represents the indicator function of the set Then, the inner subordinate problem sup P∈P21 E P [1 S ] can be unfolded as where β ∈ R k , γ j , λ j ∈ R are the dual variables for the constraints. Obviously, the strong dual theorem holds by condition (C1). By Lemma 2.1, the problem (17) is equivalent to By condition (C3), we divide X J into nonempty and disjoint sets where D(j) denotes the index set of strict subsets of X j . Then, the left hand term of (16) can be rewritten as Similar to the proof of Theorem 2.1, the above problem can be reformulated as inf β,γj ,λj ,η Next we claim that η > 0. Suppose η = 0, then the constraints which contain η become Taking integral operation on both sides of the above constraints, we have where ζ(·) is the exact nonnegative distribution measure. Then, we have From p j ≤ p j ≤p j , one has which is a contradiction with Thus, we have η > 0. Divided by η in all constraints of problem (19) and substituted ( β η , The robust counterparts of constraints (21) are Since the left side of the above inequality is linear in ξ, we obtain that the optimal solution must be taken on the boundary ofX j . According to the same boundary of X j andX j , the above problem can be reformulated as By similar discussion for constraint (22) and the definition of support function, problem (20) can be reformulated as follows Thus, constraint (16) can be rewritten as This completes the proof.
Theorem 3.1 shows that DRCCP (1) under the information set P_2, which satisfies conditions (C1)-(C3), can be rewritten as an equivalent convex programming. The equivalent tractable form of DRCCP (1) is difficult to obtain when P_2 does not satisfy condition (C3). By choosing a partition of the sets X_j in which each element satisfies condition (C3), we can instead obtain a convex approximation for DRCCP (1). Compared to the equivalent form (3) of DRCCP under the information set P_1, we only need to compute support functions of vector sets in problem (15). We remark that the method proposed in [30] for solving distributionally robust programming under conic representable distribution sets is not suitable for solving problem (15). Now we give the following examples to illustrate the efficiency of our results.
4. Distributionally robust nonlinear chance constrained problem under the general first order moment and support information.

In the previous sections, the constraint function in DRCCP (1) is bilinear in x and ξ. In many practical problems the constraint function may have a more complicated and flexible structure. In this section, we consider the distributionally robust nonlinear chance constrained problem in which g(·, x) is concave for all x ∈ R^n. We notice that the nonlinear constraint function may result in unnecessary conservatism in DRCCP (1). Here we consider the following distribution set under support and first order moment information, where U_ξ is a convex and compact set and U_μ is defined in (2). Let f : R^k → (−∞, +∞]. We denote the convex conjugate and the concave conjugate of f in the usual way, where Dom(f) is the effective domain of f. We need the following lemma to show our main result.
Lemma ([1]). Let f, −g : R^k → (−∞, +∞] be two proper, convex and lower semicontinuous functionals. If there exists an element x_0 ∈ Dom(f) ∩ Dom(g) such that either f or g is continuous at x_0, then the following equality holds:

Now we give our main result in this section.
5. Conclusions.

In this paper, we mainly focus on equivalent deterministic forms for DRCCPs under several general convex moment information sets. We demonstrate some examples of support functions of matrix sets and of the equivalent convex programming. We also show the closed form solution for the worst case VaR optimization problem. Compared with the results of [28], we directly obtain the equivalent convex programming form for DRCCP and establish a uniform framework for DRCCP under general second order moment information.
How to obtain more computationally tractable examples of support functions of matrix sets is still an open problem. The equivalent deterministic forms for the distributionally robust nonlinear chance constrained problem under the information sets P_1 and P_2 also need further discussion. As pointed out by [2], the larger the uncertainty set P is, the worse the optimal solution's performance level will be under P for distributionally robust optimization problems. Clearly, DRCCPs will be more conservative as the magnitude of the uncertainty set P grows. Therefore, how to reduce the conservatism of DRCCP deserves further consideration.
We note that, in some real applications, the sets U µ and U Σ might be unknown and need to be estimated. Sun and Xu [26] presented a comprehensive convergence analysis of distributionally robust optimization problem where the ambiguity sets were described by a sequential increasing distribution information. Thus, it is important and necessary to consider the perturbation analysis of DRCCPs, which requires our further investigation.
|
2019-08-16T16:06:00.899Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "1bafffcfecff4fb3f1de0ebc9137d5bcd9e71319",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=565eb6ce-3267-4d8e-9af5-e4e347e6ce6e",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9ed14d275904401314b15f25ac291d1bc14c102d",
"s2fieldsofstudy": [
"Mathematics",
"Economics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
219730334
|
pes2o/s2orc
|
v3-fos-license
|
Influence of Cow’s Milk and Esomeprazole on the Absorption of Erlotinib: A Randomized, Crossover Pharmacokinetic Study in Lung Cancer Patients
Introduction Erlotinib’s gastrointestinal solubility and absorption are decreased by proton pump inhibitors (PPIs). Since erlotinib is a lipophilic drug, we hypothesized that concomitant intake with the fatty beverage milk may be a feasible way to increase erlotinib uptake. We performed a two-period, randomized, crossover study to investigate the influence of cow’s milk with 3.9% fat on the exposure of erlotinib with and without the PPI esomeprazole in patients with non-small cell lung cancer (NSCLC). The effect of esomeprazole was studied in an additional intrapatient comparison. Method Pharmacokinetic sampling was performed on days 7 and 14 during 24 consecutive hours. During the 7 days prior to pharmacokinetic sampling, erlotinib was taken daily with 250 mL of either water or milk. In the PPI arm, esomeprazole (40 mg once daily 3 h prior to erlotinib) was taken for 3 days. Results Erlotinib area under the curve from time zero to 24 h (AUC24) did not significantly change when administered with milk, compared with water, in both non-PPI users (n = 14; − 3%; 95% confidence interval [CI] − 12 to 8%; p = 0.57) and patients who used esomeprazole (n = 15; 0%; 95% CI − 15 to 17%; p = 0.95). Esomeprazole decreased erlotinib AUC24 by 47% (n = 9; 95% CI − 57 to − 34%; p < 0.001) and Cmax by 56% (95% CI − 64 to − 46%; p < 0.001). No differences in toxicities were observed between milk and water. Conclusion Milk with 3.9% fat has no effect on the exposure to erlotinib in NSCLC patients, independent of PPI use. The combination with milk is safe and well tolerated. Concomitant esomeprazole treatment strongly decreased both erlotinib AUC24 and Cmax and should be avoided if possible.
Introduction
Erlotinib is a tyrosine kinase inhibitor (TKI) registered for the treatment of epidermal growth factor receptor (EGFR)-mutated metastatic non-small cell lung cancer (NSCLC) [1,2]. It is indicated in combination with gemcitabine as first-line therapy for unresectable or metastatic pancreatic cancer [1]. Erlotinib is administered orally at a dose of 150 and 100 mg once daily for NSCLC and pancreatic cancer, respectively. Intra- and interpatient variability differs significantly due to interactions with food [3], concomitant medication [4], and lifestyle factors (i.e. smoking) [5,6].
The bioavailability of erlotinib largely depends on its solubility in the stomach and on passive diffusion and probable active cellular transport in the gastrointestinal tract [7]. Optimal drug absorption is reached at a physiologically low intragastric pH (i.e. a pH value of 1), since erlotinib is then protonized and thus better soluble [8]. However, various acid-reducing drugs, including histamine-2 receptor antagonists (e.g. ranitidine) and proton pump inhibitors (PPIs; e.g. omeprazole), may lead to a 40-50% decrease in erlotinib absorption due to an increase in intragastric pH [9]. It has been previously demonstrated that this impaired systemic exposure to erlotinib can be corrected when administered in combination with the acidic beverage cola [10]. However, daily intake of acidic and highly caloric beverages such as cola or orange juice has disadvantages, such as dental problems, disrupted bone mineral composition, and weight gain [11]. We hypothesized that a healthier way to enhance erlotinib bioavailability could be by making use of the effects of other food components. The exposure of erlotinib is increased 33-66% when administered concomitantly with a high-fat meal [3]. We explored this potentially positive food effect as a proof-of-principle by optimizing erlotinib absorption in the presence of a beverage containing fat. In the past, milk-based drug formulations have been shown to be equally effective compared with standard formulations in terms of solubility and dispersion [12]. Milk is consumed worldwide by billions of people. It is a healthy beverage that contains essential proteins, vitamins and minerals (e.g. calcium and phosphorus). Cow's milk accounts for more than 80% of global milk production [13]. This is the first study to investigate the effects of erlotinib administered concomitantly with high-fat whole cow's milk compared with water. In addition, a direct intrapatient comparison to study the effects of esomeprazole on the systemic exposure of erlotinib has never been made. Therefore, we also explored the potential drug-drug interaction of esomeprazole use on the absorption of erlotinib.
Key Points
Cow's milk did not significantly alter the exposure of erlotinib, while esomeprazole decreased both the area under the curve from time zero to 24 h and maximum concentration of erlotinib by 47% and 56%, respectively.
Patient-reported toxicity was equal between the milk and water groups, with and without proton pump inhibitors. Hence, cow's milk can be used as a safe alternative to water for the administration of erlotinib. Concomitant treatment with esomeprazole should be avoided if possible.
Patient Eligibility

Adult NSCLC patients were eligible for inclusion in this study if they had an Eastern Cooperative Oncology Group (ECOG) performance status [14] of ≤ 1, were treated with a stable dose of erlotinib for at least 2 weeks (to guarantee steady-state plasma concentrations) and did not use any other (complementary or alternative) medicine or compounds with the potential to interact with either erlotinib or esomeprazole. Patients who concomitantly used any prescribed PPI could only participate in the PPI arm of this study when willing to switch to esomeprazole. It was possible for patients to participate in both study arms if PPI use was discontinued or if they were willing to take esomeprazole as required for this study. All participating patients were asked to sign a written informed consent form. The study was approved by the local Ethics Committee (Erasmus University Medical Center Rotterdam; MEC) and was registered in the Dutch Trial Registry (number NL5984; NTR6148) [15].
Study Design
This was a single center, randomized, two-period, crossover pharmacokinetic study with two study arms. Figure 1 shows the study flow chart. After signing informed consent and after screening, patients were allocated to the non-PPI (arm A) or PPI (arm B) study arms. Hereafter, they were randomized to start with erlotinib with 250 mL of water (period 1) or cow's milk containing 3.9% fat (period 2) for 7 consecutive days (days 1-7 or 8-14). The 7-day period was chosen to ensure that erlotinib concentrations reached steady state. At days 7 and 14, patients were electively admitted for 24-h pharmacokinetic blood sampling. During each admission, 13 blood samples were collected; < 5 min before erlotinib intake (t = 0 h) and at time points (0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, 8, 12 and 24 h) after erlotinib intake. Patients had to take erlotinib according to its label, i.e. fasted for at least 2 h prior to and 1 h after administration. Additionally, on the day of hospital admission, food intake was prohibited between 4 h prior to and 1 h after erlotinib administration. Consumption of beverages was restricted for 1 h before and after erlotinib intake. In the PPI arm, patients were required to take esomeprazole (40 mg once daily) 3 h prior to erlotinib intake on days 5, 6 and 7 and days 12, 13 and 14 after the start of the study. The timing of esomeprazole intake was chosen to ensure maximal inhibition of gastric acid secretion at the time of erlotinib intake [16]. All samples were analyzed by a validated liquid chromatography-tandem mass spectrometric assay for precise quantification of erlotinib plasma concentrations [17].
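For readers who want to reproduce the exposure metrics from such a sampling schedule, the sketch below shows a minimal non-compartmental calculation of AUC24, Cmax and Tmax with the linear trapezoidal rule. It is written in Python; the concentration values are invented for illustration and are not study data.

```python
import numpy as np

# protocol sampling times in hours after erlotinib intake
t = np.array([0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, 8, 12, 24])
# hypothetical steady-state plasma concentrations in ng/mL
c = np.array([850, 900, 1100, 1300, 1450, 1500, 1480, 1430,
              1380, 1250, 1120, 950, 820])

auc24 = np.trapz(c, t)        # linear trapezoidal AUC from 0 to 24 h (ng*h/mL)
cmax = c.max()                # maximum observed concentration
tmax = t[c.argmax()]          # time of the maximum concentration
print(auc24, cmax, tmax)
```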
Study Objectives
The primary objective was the difference in geometric mean of the area under the curve from time zero to 24 h (AUC24) between periods with concomitant cow's milk compared with water, both with and without esomeprazole. Secondary objectives were the effects of esomeprazole intake in patients who were included in both arms, other pharmacokinetic outcomes (i.e. clearance, maximum concentration [Cmax] and time to Cmax [Tmax]), and comparison of (the incidence and severity of) the adverse effects of treatment with erlotinib between periods and study arms.
Adverse Event Monitoring
Toxicity was scored by the investigator at baseline and during hospital admission in accordance with the US National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE) grades, version 4.03 [18]. Patients were provided with a diary to report any (ongoing) adverse events during the study.
Statistical Analyses
Given a clinically relevant difference of 30% in AUC, a within-patient standard deviation of 25%, 80% power and a two-sided significance level of 5%, 14 evaluable patients were required per study group (i.e. with or without esomeprazole) [19]; hence, a total of 28 patients had to be included. Analyses of AUC24 and Cmax were performed on log-transformed values, since these parameters were assumed to follow a log-normal distribution [20]. Estimates for the mean differences in (log) AUC24 and Cmax between milk and water were obtained for both study arms separately (with or without esomeprazole) using a linear mixed-effect model with treatment (water or milk), sequence and period as fixed effects, and subject-within-sequence as a random effect [21]. Variance components were estimated based on restricted maximum likelihood (REML) methods, and the Kenward-Roger method of computing the denominator degrees of freedom was used. The mean differences and their 95% confidence intervals (CIs) were exponentiated to provide point estimates of the ratio of geometric means and 95% CIs for these ratios, which can be interpreted as relative differences in percentages. Tmax was analyzed using the nonparametric Wilcoxon signed-rank test. Analyses to study the effect of esomeprazole were performed in a similar way, although they also included the effect of water versus milk as a fixed effect and only included patients who participated in both study arms. Toxicity was described as the incidence of toxicity per period. This was taken into account in case of an increase in CTCAE grade per cycle. Since the design of this study was not appropriate to detect a significant difference in toxicity, these results had a descriptive character. All statistical analyses were performed using Stata (StataCorp. 2017. Stata: Release 15.1. Statistical Software. College Station, TX, USA: StataCorp LP).

Fig. 1 Study flowchart. After screening, patients were allocated to the non-PPI (arm A) or PPI (arm B) arms. Hereafter, they were randomized to start with administration of either concomitant water (period 1) or cow's milk (period 2). Subsequent participation in both arms was allowed and is illustrated with the arrows between arms A and B. Hospital admissions for pharmacokinetic blood sampling took place at days 7 and 14. Esomeprazole 40 mg once daily was administered in arm B at days 5, 6 and 7, and days 12, 13 and 14. PPI proton pump inhibitor
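As a rough illustration of the mixed-model analysis described above, the following Python sketch fits a random-intercept model to log-transformed AUC24 values from a hypothetical long-format dataset and back-transforms the treatment effect into a geometric-mean ratio. The file name, column names and categorical coding are assumptions, and statsmodels estimates the model by REML but does not offer the Kenward-Roger degrees-of-freedom correction used in the original Stata analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical data: one row per patient per period, with columns
# subject, sequence, period, treatment ('water'/'milk') and auc24
df = pd.read_csv("erlotinib_pk.csv")
df["log_auc"] = np.log(df["auc24"])

# treatment, period and sequence as fixed effects; subject as random intercept
model = smf.mixedlm("log_auc ~ treatment + period + sequence",
                    data=df, groups=df["subject"])
fit = model.fit(reml=True)

term = "treatment[T.water]"            # exact label depends on category coding
gmr = np.exp(fit.params[term])         # geometric-mean ratio, water vs. milk
ci = np.exp(fit.conf_int().loc[term])  # back-transformed 95% CI
print(gmr, ci.values)
```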
Patients
A total of 21 unique patients were included between February 2017 and November 2019. The patient demographics are presented in Table 1. For personal reasons, one patient withdrew informed consent after completion of the first period. Nine patients were included in both the non-PPI and PPI arms; hence, 29 pairs of study periods were completed-14 in the non-PPI arm and 15 in the PPI arm (Fig. 1).
Pharmacokinetic Effects of Milk
The pharmacokinetics of erlotinib when taken with milk or water are presented in Table 2. Erlotinib AUC24 decreased non-significantly by 3% (95% CI − 12 to 8%; p = 0.567) when administered with milk, compared with water, in the non-PPI patients. In addition, in those patients who used esomeprazole, erlotinib exposure did not significantly differ as a result of intake with either water or milk (0%; 95% CI − 15 to 17%; p = 0.953). Figures 2a and b show the absence of an effect of milk in both study arms. Cmax did not differ in non-PPI or PPI users, with relative differences of a 6% decrease and a 1% increase, respectively (95% CI − 21 to 11%, p = 0.409; and 95% CI − 12 to 17%, p = 0.831, respectively). In both study arms, Tmax increased non-significantly by 0.5 h; in the non-PPI arm from 2.0 to 2.5 h (p = 0.729) and in the PPI arm from 2.5 to 3.0 h (p = 0.306). Interpatient variability, measured by the coefficient of variation (CV), was lower with milk compared with water in both study periods and for both AUC24 and Cmax. This lower variability in AUC24 with milk intake was most pronounced in the PPI arm (CV 38% vs. 61%) [Table 2].
Effects of Esomeprazole on Erlotinib Pharmacokinetics
Based on data from the nine patients who participated in both study arms, esomeprazole decreased erlotinib AUC24 by an average of 47% (95% CI − 58 to − 34%; p < 0.001) and Cmax by 56% (95% CI − 64 to − 46%; p < 0.001) compared with the period in which esomeprazole was not used. These results are displayed in Fig. 3 and Table 3. Tmax seemed longer for both the milk and water periods, especially in the PPI arm (Table 2). In the setting of administration with water, the interpatient variability in AUC24 increased from 37 to 61% due to esomeprazole co-treatment. When erlotinib was taken with milk, the interpatient variability in AUC24 was not affected by esomeprazole co-treatment (CV 38% vs. 35%) [Table 2].

Table 4 presents all adverse events experienced. Overall, patient-reported adverse events during this study did not increase compared with baseline. Independent of study arm, no differences in toxicities were observed between study periods. Furthermore, patients reported almost equal adverse event grades in both the non-PPI and PPI arms (data not shown). Two grade 3 adverse events occurred: one period of nausea that fluctuated for several weeks, and one increase in skin rash during concomitant nadroparine treatment. Both patients used erlotinib for more than 3 months prior to this increase in toxicity. For the first patient, erlotinib was temporarily discontinued several weeks after study completion and restarted at a reduced dosage. For the second patient, erlotinib was temporarily discontinued and its dosage reduced. These dose reductions were effective in reducing toxicity in both cases. There was one serious adverse event (SAE) in this study, namely a CTCAE grade 3 malignant spinal fracture, which occurred after randomization and before the first study period. This SAE required hospital admission and was considered to be not related to study procedures, and erlotinib treatment was continued. No eminent study intervention-related toxicity occurred.
Discussion
This study reports the absence of a pharmacokinetic effect of cow's milk with 3.9% fat on exposure to erlotinib in NSCLC patients, independent of PPI use. Additionally, this study showed a decrease in erlotinib AUC 24 of almost 50% and a decrease in C max of more than 50% when erlotinib was administered 3 h after esomeprazole intake.
A possible explanation for the lack of effect of milk on erlotinib exposure is that the 3.9% fat content of cow's milk is not high enough to affect absorption. In absolute values, patients were administered 9.75 g (250 mL × 3.9%) of fat from milk. This is relatively low in comparison with a high-fat meal, which consists of 500-600 kilocalories of pure fat [3] (i.e., 56-67 g). The effect of a high-fat meal on erlotinib disposition ranges from a 33% AUC increase when taken 2 h after erlotinib administration [22], to a 66% increase in erlotinib AUC when food and drug are taken concomitantly [23]. In theory, the almost 50% decrease in AUC 24 caused by esomeprazole could be overcome by coadministration of a high-fat meal.
An additional reported effect of increasing the bioavailability of erlotinib by coadministration of a high-fat meal was a decrease in interpatient variability [3]. The benefit of lower interpatient variability is more predictable efficacy and toxicity on a large scale, since more patients will be dosed within the therapeutic window. Our data show that milk also reduced interpatient variability, especially in the PPI arm (Table 2). Although, on average, bioavailability did not change, the lower interpatient variability would be an argument in favor of erlotinib administration with milk instead of water. Another reason why erlotinib absorption was not affected by milk could be that the strong pH-buffering capacity of milk [12] prevents the intragastric pH from decreasing. Hence, the beneficial effect of the milk's fat is counteracted by switching erlotinib to its less soluble, non-ionized form, which is not an optimal condition for transluminal transport across gastrointestinal cells. Furthermore, there is no evidence of milk interacting with drug transporters or hepatic cytochrome P450 isoenzymes.
Average milk consists of 3-4% fat [13]. Since we used cow's milk with the highest fat content (3.9%) commercially available, it is unlikely that lighter variants of cow's milk would have a larger effect on the bioavailability of erlotinib. Nevertheless, cow's milk may be of interest for increasing the systemic exposure of TKIs with more pronounced food effects, i.e. lapatinib (up to 325% and 200% AUC increase with a high- and low-fat meal, respectively) [3]. In line with milk, yoghurt (0.4% fat [24]) is not expected to interact with erlotinib absorption and could also be considered safe. Coadministration with yoghurt was previously studied and was considered safe for the TKI nilotinib [3,25].
Moreover, for the first time, we conducted an intrapatient comparison of the effects of esomeprazole on the AUC 24 and C max of erlotinib, which is in line with previous research on erlotinib and omeprazole [9]. We hence warn patients and prescribers of this possibly harmful interaction, which could lead to therapy ineffectiveness. Potential solutions for patients who are dependent on PPI use may be delaying PPI intake until erlotinib is fully absorbed or taking erlotinib concomitantly with cola [10]. Albeit practical, the most feasible solution is a critical reconsideration of the need to prescribe a PPI and discontinuation of the PPI where possible.
Another way to increase the aqueous solubility, and thereby the bioavailability, of erlotinib could be to improve its formulation [26]. A phospholipid formulation showed an improved pharmacokinetic profile in rats [27]. Before this new formulation could be considered for implementation in clinical practice, further research should first be conducted to determine its possible benefits and drawbacks.
Furthermore, the absence of a milk effect on erlotinib exposure is probably also the reason why this study found no differences in patient-reported toxicity. This is not surprising as, for erlotinib, the plasma concentration is correlated with the occurrence of the most prevalent adverse effects of skin rash and diarrhea [28]. Erlotinib intake with milk is just as safe as intake with water, and could thus be advised to patients as an alternative to administration with water, for example to mitigate mild gastrointestinal reflux complaints or simply according to patient preference.
Interestingly, although esomeprazole reduced erlotinib exposure by half, patients did not report less toxicity; however, the 3-day period during which patients had to take esomeprazole was most likely too short to have a noticeable effect on toxicity. When esomeprazole is taken for a longer period of time, the chronic decrease in erlotinib exposure could have a more distinct effect of reducing toxicity.
Conclusions
Whole cow's milk with 3.9% fat has no clinically relevant effects on the exposure of erlotinib in NSCLC patients, independent of PPI use. The combination with milk instead of water is safe and well tolerated, and may be a good alternative for some patients. Meanwhile, the use of esomeprazole 3 h prior to erlotinib intake strongly decreased both erlotinib AUC 24 and C max , and should be avoided if possible.
Fast Modeling of Binding Affinities by Means of Superposing Significant Interaction Rules (SSIR) Method
The Superposing Significant Interaction Rules (SSIR) method is described. It is a general combinatorial and symbolic procedure able to rank compounds belonging to combinatorial analogue series. The procedure generates structure-activity relationship (SAR) models and also serves as an inverse SAR tool. The method is fast and can deal with large databases. SSIR operates from statistical significances calculated from the available library of compounds and according to the previously attached molecular labels of interest or non-interest. The required symbolic codification allows dealing with almost any combinatorial data set, even in a confidential manner, if desired. The application example categorizes molecules as binding or non-binding, and consensus ranking SAR models are generated from training and two distinct cross-validation methods: leave-one-out and balanced leave-two-out (BL2O), the latter being suited for the treatment of binary properties.
Introduction
Methods exist to mine data of analogue series or combinatorial data sets, for instance, those based on SAR maps [1,2] or R-group polymorphisms [3], among others [4,5]. However, none is as simple as the Superposing Significant Interaction Rules (SSIR) method, a new systematic procedure able to rank analogue series that, in turn, constitutes an inverse SAR tool. The SSIR idea originated from recent experience in a Design of Experiments (DoE) context treating molecular families sharing a common scaffold [6][7][8].
The SSIR method conceptualizes a combinatorial family as a series of sites (factors in DoE vocabulary [9]), each one having the ability to accommodate one of a set of various residues (levels). From this knowledge, combination rules of presence/absence of certain residues in sites are categorized as being significant or not. The SAR model consists of all the rules being categorized as significant. Each rule grants an additional positive or negative vote to each molecule that matches it. Hence, each analogue collects a series of signed votes that, once added up, establish a molecular ranking scale. It is expected that the ranked molecular series will correlate with the interest/non-interest molecular tags attached to the molecules according to the values of the analyzed property.
Results and Discussion
As an application, an example of binding activity modeling is presented here. In reference [5] a method is presented for obtaining SAR relationships of analogue series based on the analysis of dual-activity difference maps. The authors present as an example the application over a set of 106 pyrrolidine bis-diketopiperazines tested against two formylpeptide receptors (FPR). Table 1 presents the substitution codifications along the four available molecular scaffold sites.
The details of both experimental endpoints can be found in the aforementioned reference. Following the notation of the original work, the binding activities (Ki) here will be denoted as FPR1 (related to antibacterial inflammation and malignant glioma cell metastasis) and FPR2 (associated with chronic inflammation in systemic amyloidosis, Alzheimer's disease and prion diseases). The goal of Medina-Franco et al. [5] was to compare both properties by working with differences arising from molecular pairs and looking for activity switches (i.e., specific substitutions that have opposite effects on the activity of the compounds against two biological targets) and selectivity switches (minor structural modifications that drastically invert the selectivity pattern of two compounds). Here, each single property will be modeled. Table 2 shows the codified set of 106 compounds. The original compound formulations can be found in Table S1 of the original article's [5] supporting information. It is worth noting that the SSIR method can be applied systematically without the need for special preparatory operations (other methods require molecular minimizations, alignments, descriptor calculations and so on). The method's symbolic nature means it is only necessary to arbitrarily codify the molecular substituents and decide which analogues are declared as being of interest (see materials and methods section below). These characteristics allow the SSIR method to model sets confidentially by masking the original information or molecule codification.
Table 1. Molecular substitution codifications. Note that each letter represents a distinct substituent depending on the substitution site. The library has four diversity points and the expanded set covers M = 5 × 8 × 9 × 17 = 6120 compounds.
In the reference, a = 106 analogues are reported. In this set, the analogues of interest have been defined as those presenting low values of Ki expressed in terms of concentration in nM units. In both cases the b = 32 compounds (ca. 30%) presenting the lowest values were chosen as being of interest (property values less than or equal to 411 and 410 nM for FPR1 and FPR2, respectively, and marked in Table 2 with asterisks in columns pKi1 and pKi2). The number of rules of order 1 (negations not allowed, see materials and methods section below) is 5 + 8 + 9 + 17 = 39. For orders 2-4 the numbers of rules are 531, 3029 and 6120, respectively. If negations are allowed, the numbers of possible rules increase to 78, 2124, 24232 and 97920, respectively. Figure 1 shows the distribution of p-values attached to the rules of order 4 (negation terms allowed) for both properties. It is noteworthy that property FPR2 reaches rules having much lower p-values. This behavior is also found for other rule orders. The presence of more significant rules suggests that FPR2 could be better modeled.
Table 3 shows the area under the receiver operating characteristic (AU-ROC) values [10][11][12][13] attached to the obtained ranking classification. Exploring the generation of rules of order 1, 2 and 3, immediate results were obtained for fitting and leave-one-out (L1O) tests. For all cases, the cutoff p-value was set to p c = 0.005. The total number of significant rules entering each calculation is given between brackets. Along the L1O or balanced leave-two-out (BL2O) cycles (see Section 3), certain rules present in the fit are sometimes automatically discarded or some new significant rules appear as a result of the extraction and replacement steps. Hence, the total number of significant rules found along the cycles usually increases with respect to the single training calculation. Each BL2O calculation required 2368 cycles. In Table 3, the number of well classified pairs, ties and bad pair rankings encountered along the BL2O loops are explicitly indicated. For instance, regarding the FPR1 property, the BL2O involving rules of order 3 leads to 1909 well internally classified pairs, 2 ties and 457 incorrect pair rankings.
For FPR2, the counts were 2253, 0 and 115, respectively. Those counts are related to AU-ROC values because it is well-known that, for a single fitting calculation, given a couple of molecules (one of interest and the other of non-interest), the AU-ROC corresponds to the a posteriori probability that the classifier correctly sorts the pair [13].
Training and Cross-Validation
In all cases, the second property is clearly modeled better by SSIR. As mentioned, this may be because the rules for FPR2 reach more significant (i.e., small) p-values (see Figure 1). In other words, the analogues defined as being of interest for FPR2 seem to be much more related to particular substituent combinations.
Table 3. Area under the receiver operating characteristic (AU-ROC) values for several calculations for properties FPR1 and FPR2. The threshold p c value was set to 0.005 and negation terms were allowed in rules. The number of accepted rules along the loops is given in brackets. For the balanced leave-two-out (BL2O) cross-validation process, the number of well classified pairs, ties and bad pair rankings encountered along the cycles are indicated between slashes. See text for more details.
For this library, rules of order 2 are well suited to reveal general patterns attached to activity values of interest. Table 4 lists the most significant rules of order 2 found for the FPR1 property. The systematic presence of the G substituent (S-benzyl) at position 2 becomes evident in rules carrying a positive vote. Moreover, the negation of the G substituent at this position is systematically accompanied by a negative rule vote. The remaining rules mainly ask to avoid residue B (R-2-naphthylmethyl) at the same position. Other diverse combinations complete the full set of 117 selected rules having p ≤ 0.005. Inspection of the whole set of significant rules reveals that position 2 is the most relevant one when modeling the FPR1 property. This kind of information can be useful for some applications, for instance when a compound must be optimized in order to refine other molecular properties.
Table 5 lists the most relevant rules of order 2 when modeling the FPR2 property. The pattern found in this list is the presence of residue C (S-isopropyl) at the first substitution site. Again, the negation of residue C at this specific site is attached to a negative vote for the rules. Despite this particular rule behavior, molecular position 1 is not the most relevant one, as seen from an inspection of the full set of 447 selected rules. In this group of significant rules, one encounters a diversity of residues to be placed or avoided at specific positions.
Hence, the FPR2 property is modeled from several "points of view" regarding combinations of substituents. This variety of choices confers better modeling options to the property and, in this case, a more robust final SSIR model. The results in Table 3 have been checked by means of randomization tests. These tests consist of randomly scrambling all the molecules' interest/non-interest labels and redoing the modeling calculations from scratch 1000 times, i.e., generating all the rules again from the beginning and recalculating all the probabilistic p-values. Figure 3 shows the fake AU-ROC values obtained for the FPR1 (Figure 3a) and FPR2 (Figure 3b) properties through L1O predictions. The calculation involves the rules of order 2 (p c = 0.005). During the cycles, an SSIR model could only be reproduced 428 (Figure 3a) or 409 (Figure 3b) times; for the other cases, all the rules' significances were greater than the threshold p c. The "randomized" models are represented by a point in the graph whereas the correct model is represented by a cross. The points always present lower AU-ROC values (vertical axis) than the correct model (except for one model for the FPR1 property). The graph also shows how the number of rules found per randomized test is lower than 117 (Figure 3a) and 447 (Figure 3b), the numbers of rules defining the correct models. For FPR1 a fake model consisted of 102 rules and, for FPR2, of 105. All these data confirm again that the FPR2 property is much better modeled than FPR1, as the corresponding cross in Figure 3b is clearly further away from the cloud of randomized points.
Possibly, the FPR1 property is being overparametrized, as several L1O AU-ROC values fall near the unscrambled test. It has to be noted that, in some cases, the random scrambling of molecular interest/non-interest tags leads to situations that can be partially modeled by SSIR. For instance, if the original tags pointing to molecules having low property values (analogues of interest) are mainly placed on molecules having higher property values during scrambling, then the situation becomes a sort of complementary version of the original one and SSIR is able to model it. Unfortunately, another undesirable possibility is left to chance: in some cases the random placement of the tags can set the analogues of interest in a partially correlated way with respect to one or more substituents. This situation will also be modeled by SSIR, generating fake rules. Therefore, the combination of both methodologies, cross-validation and randomization tests, is to be taken into account when modeling with SSIR in order to detect spurious models.
Inverse Structure-Activity Relationships (SAR): New Analogue Proposals
The SSIR method is an inverse SAR tool [14] because of its ability to suggest new compounds. For the model of rules of order 3 (p c = 0.005, negations allowed), the SSIR program was asked for predictions of new combinatorial analogues having a high number of positive votes. The SSIR program generated all the remaining 6014 items that were not training compounds. For this set of new analogues, the number of votes coming from the rules ranged from −376 to +397 for FPR1 and from −1101 to +1283 for FPR2. Each list provides a ranking attached to either property. Another calculation was conducted to generate external analogues. To prevent extrapolations outside the training set chemical space, consideration was only given to compounds having at most one single substitution difference (in any site) with respect to at least three training compounds (of course, other choices are possible). A total of 511 analogues fulfill this condition. For this set of new analogues, the number of collected votes ranged from −396 to +397 for FPR1 and from −1101 to +1278 for FPR2. As presented, the lists of proposed molecules ranked by SSIR help to prioritize structures for synthesis, screening, database pruning or selection. As shown, the rules can be combined to detect common prioritized structures. This optimization task need not be immediate or easy in general, especially in those cases where the multiple objectives are contradictory (negatively correlated). It is not the goal of this article to deal with the topic of multiobjective optimization [15] and the related issues will be published elsewhere.
Advantages and Drawbacks of the Method
One of the advantages of the SSIR method (described in Section 3) is that the input data can be prepared fast because pre-process tasks are minimal. Starting the process is immediate because conformational analysis, molecular superpositions and index calculations are not needed. The symbolic treatment can be interpreted as a sort of encryption. Hence, the modeling procedure can be offered to a third party in a confidential manner, i.e., without revealing the molecular database being studied.
Of course, the method has drawbacks. Apart from those cited above, the molecular space cannot be explored beyond the substituents codified in the training database. This constrains the structure of any eventual test or validation molecular set (other methods, such as Inductive Logic Programming [16], allow the application of generated rules to molecules presenting new substituents). It is also worth mentioning that the results depend on the balancing of the database and on the degree of library dilution with respect to the full definable set of molecules. The method cannot deal with libraries of analogues presenting only one single substitution site. Our current research focuses on obtaining selected rules of higher order, as exhaustive generation is not possible in many cases due to combinatorial explosion. Work is also underway to apply the method to continuous property values.
SSIR constitutes a rule search engine that has been presented here in its bare form. The inner procedure can be improved, and additional benefits are attainable with the help of other techniques mainly devoted to rule management.
Libraries, Sublibraries, Rules and Negation Terms
A congeneric molecular series sharing a common scaffold with n anchorage sites is visualized by the SSIR method as a structure having n factors. In turn, each substitution point i is able to accommodate mi residues or building blocks (relevant sites are only those for which mi > 1). In this manner the total number of analogues definable in the library is M = m1 × m2 × ... × mn, and each analogue is identified by the list of sorted residues, for instance A1B2B3...Cn or, simply, ABB...C, as the position of each letter specifies the substitution point.
The SSIR procedure assumes that the molecular property values depend on the effect of some relevant substituents placed at some relevant sites, but also that these non-linear effects can be expanded by superposing rules involving only a few sites.
For illustrative purposes, let us consider a toy library obtained by a combination of residues in 3 sites (see Figure 4). The full set of molecules belonging to the library is given by the Cartesian product R = R1 × R2 × R3 where the levels for each site are represented in turn by the site sets R1 = {A,B}, R2 = {A,B,C} and R3 = {A,B,C,D}. The symbols associated with each residue, despite being repeated, generally stand for distinct codified entities along the sites. If some residues are identical for two or more sites, the repeated notation always remains unambiguous because the corresponding influence over the molecular properties is distinct due to the positional effects. For this example, m1 = 2, m2 = 3 and m3 = 4. Hence, the complete library R contains M = 2 × 3 × 4 = 24 analogues collected in the universal set R = {AAA, AAB, AAC, ..., BCC, BCD}. It should be noted here that the entire library is generally not at the disposal of the researcher; usually only a fraction of it is known (i.e., already synthesized and with the molecular property evaluated). Here, the wildcard notation X will stand for any of the residues belonging to a particular anchorage point. Hence, the full database is also denoted by R = {X}1 × {X}2 × {X}3 = {XXX} or, simply, XXX. When applying the SSIR method, it is very important to define partial subsets of analogues taken from the library. For instance, the rule XAX stands for all the analogues presenting residue A at the second site. This is the same as the molecular set XAX = {AAA, AAB, AAC, AAD, BAA, BAB, BAC, BAD}. It is said that this rule embraces or condenses m1 × m3 = 8 analogues. Rule XAX is of order 1 because it establishes substitution restrictions in one site. This virtual library also admits rules of order 2 (the ones setting substitution restrictions in two sites, as for the case of XAD, standing for the two analogues simultaneously presenting residues A and D at positions 2 and 3, respectively) or order 3 (such as the rule identified with the analogue BBC). The maximum rule order definable in a library is n, the number of substitution slots.
The total number of rules (assuming that there are not redundant molecular symmetry issues) is

V = \sum_{k=1}^{n} \; \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} \; \prod_{j=1}^{k} m_{i_j}    (2)

In Equation (2) the leftmost summation defines the rule orders, the inner k summation symbols constitute a Nested Summation Symbol (NSS) [17][18][19][20] and generate the combinations of k elements taken from the pool of n (the selection of sites involved in each rule), and the rightmost product counts how many rules are being generated from the previously selected k substitution sites. This last term corresponds to the combinatorial object of variations with repetition. Given the identification of the sites being combined (values of i1, i2, ..., ik), the number of generated rules of order k arising from it is

\prod_{j=1}^{k} m_{i_j}    (3)

The number of compounds being condensed by each rule of k-th order is

\prod_{i \notin \{i_1,\dots,i_k\}} m_i \;=\; M \Big/ \prod_{j=1}^{k} m_{i_j}    (4)

Regarding the toy example above, Equation (2) generates a total of 9 rules of order 1 (2 + 3 + 4). The rules of order 2 arise from the combinations of sites 1-2, 1-3 and 2-3, and a total of 6 (2 × 3), 8 (2 × 4) and 12 (3 × 4) rules are generated, respectively. There are a total of 2 × 3 × 4 = 24 rules of order 3 (the maximum order definable in this example). Hence, the complete sum (2) is V = 59.
Equations (2)-(4) involve rules made with "positive terms" for which the presence of a residue is required. The universe of rules increases if rules involving the concept of 'non-presence of a specific residue' are also taken into account. These are termed negation terms or negation operators. The negation operator of a certain residue is denoted by a bar over its symbol (written here with a preceding ¬), as in rule X¬AX, defining a complementary term, which here stands for the set {XBX, XCX} condensing the 2 × 1 × 4 + 2 × 1 × 4 = 16 analogues not having residue A at the second position. The combination of two complementary variables can lead to other specific molecular sets. Take as an example the rule ¬A¬BX, which condenses the 8 analogues not having residue A at the first site and simultaneously not having residue B at the second. That is the same as the set {B} × {A,C} × {A,B,C,D} = {B} × {A,C} × {X} = {BAX, BCX}. Although this algebra opens the possibility to define many combinations of negation terms along the rules (see Appendix), here all the rules will only involve juxtaposed positive (A, B, ...) or negative (¬A, ¬B, ...) terms. A word of caution must be given here: it is not always necessary to attach negation operators at every site. It will be redundant to apply negation operators to binary sites (those presenting only two residues) because each level is the natural negation of the other. Specifically, for a binary site set {A,B} the complementary rule ¬A is the same as the rule B. Conversely, the rule ¬B is also equivalent to rule A.
All in all, systematic rule generation by computation is performed by nesting three combinatorial entities: the combinations among k sites, defining the rule order; the generation of variations among the residues attached to the previously selected sites; and another variational generator taking into account the presence or absence of individual negation terms (adding at most a 2^k expanding factor). References [17][18][19][20] provide information about how to implement these discrete algorithms.
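To make the nesting of these three combinatorial generators concrete, the following Python sketch enumerates rules for the toy library of Figure 4. It is an illustrative reimplementation written for this description (all names and the data layout are our own assumptions), not the authors' SSIR program; without negation terms it reproduces the count V = 59 of Equation (2).

```python
from itertools import combinations, product

# Toy library from the text: 3 sites with 2, 3 and 4 residues, respectively.
SITES = [["A", "B"], ["A", "B", "C"], ["A", "B", "C", "D"]]

def enumerate_rules(sites, max_order, allow_negation=True):
    """Yield rules as tuples of (site_index, residue, negated) terms.

    A rule of order k fixes (or negates) one residue at each of k selected
    sites; the remaining sites stay as wildcards. Negation terms are skipped
    for binary sites, where they would only duplicate positive rules."""
    n = len(sites)
    for k in range(1, max_order + 1):
        for chosen in combinations(range(n), k):                    # combinations of k sites
            for residues in product(*(sites[i] for i in chosen)):   # variations with repetition
                neg_choices = [(False, True) if allow_negation and len(sites[i]) > 2
                               else (False,) for i in chosen]
                for flags in product(*neg_choices):                 # presence/absence of negations
                    yield tuple(zip(chosen, residues, flags))

def rule_matches(rule, analogue):
    """True if the codified analogue (e.g. 'BAD') satisfies every rule term."""
    return all((analogue[i] != res) if neg else (analogue[i] == res)
               for i, res, neg in rule)

# Without negation terms the count reproduces V = 9 + 26 + 24 = 59 of Equation (2).
print(sum(1 for _ in enumerate_rules(SITES, max_order=3, allow_negation=False)))  # 59
```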
Rules Significance and Votes
The relative relevance of a rule arises after a preliminary dichotomization of the library. The investigated molecular property must be codified in a binary fashion (in general tagging each analogue as being or not being of interest, i.e., being or not being active, a drug, desirable, and so on). Many times the original property is not binary, and in these cases the SSIR method requires an arbitrary threshold frontier. Once the available molecular set presents only two classes, associating a significance p-value to each rule is immediate [21][22][23]. The calculation involves the number of molecules of interest which are present (known) in the library and are condensed by the rule: if the known sublibrary consists of a ≤ M analogues, being b of them declared of interest, a particular rule condensing c known molecules has the following hypergeometric probability to condense d out of c being also of interest:

P(d, c; b, a) = \binom{b}{d} \binom{a-b}{c-d} \Big/ \binom{a}{c}    (6)

The cumulated probabilities that the rule condenses d or more (d+) structures of interest define the significance level or p-value:

p(d+, c; b, a) = p(d : \min(b,c), c; b, a) = \sum_{j=d}^{\min(b,c)} P(j, c; b, a)    (7)

For instance, in our example above, let us assume that, from the M = 24 analogues, only a = 15 are at our disposal with known activity and that, from these, b = 5 are declared of interest. Then, if a rule such as XAX condenses c = 6 known molecules (the remaining 2 are not yet synthesized) and d = 4 of them are found to be active, Equation (7) indicates that the probability of randomly collecting 4 or more active molecules is p(4+, 6; 5, 15) = 0.047. If this value is (arbitrarily) considered significant, the SSIR method will assume that rule XAX is a promising one, and will include it in the SAR model. Then, the assumption is that the 2 analogues that are not yet synthesized are expected to have promising chances of being active. The SSIR method assumes that the superposition of many significant rules provides a ranking method that increases the chances of pointing out new active analogues not present in the known sublibrary. This assumption requires an important aspect: establishing an arbitrary p-value cutoff (p c) defining "significant" rules (in our example we have assumed that p c = 0.05).
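The cumulative probability in Equation (7) is simply the right tail of a hypergeometric distribution, so it can be computed (and the worked example above checked) with SciPy. The helper functions below are an illustration written for this description, not the original SSIR code:

```python
from scipy.stats import hypergeom

def p_value_at_least(d, c, b, a):
    """p(d+, c; b, a): probability that a rule condensing c of the a known
    analogues contains d or more of the b analogues tagged as of interest
    (right tail of the hypergeometric distribution of Equation (6))."""
    return hypergeom(a, b, c).sf(d - 1)      # P(X >= d)

def p_value_at_most(d, c, b, a):
    """p(d-, c; b, a): left-tail counterpart, used later for negative votes."""
    return hypergeom(a, b, c).cdf(d)         # P(X <= d)

# Worked example from the text: a = 15 known analogues, b = 5 of interest,
# a rule condensing c = 6 of them, d = 4 of which are of interest.
print(round(p_value_at_least(4, 6, 5, 15), 3))   # 0.047
```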
The SSIR SAR model is additive in the sense that each compound (being present or not in the known training library) is assigned a number of votes coming from the significant rules that condense it. Besides, some rules entering the model have a negative vote. For instance, if the rule BCX significance is p(d+, c; b, a) > p c, then the SSIR protocol focuses the attention on the complementary probability counterpart and evaluates the event consisting of collecting d or fewer active compounds, p(d−, c; b, a). It is easy to demonstrate that

p(d−, c; b, a) = 1 − p(d+, c; b, a) + P(d, c; b, a) = p([b−d]+, a−c; b, a)

If the condition p(d−, c; b, a) ≤ p c is satisfied, the rule BCX receives a negative vote, and this vote will be inherited by all the structures condensed by the rule. In fact, this procedure is equivalent to granting a positive vote to the complementary rule (the full negation of BCX).
It is assumed that the higher the number of net positive votes a structure collects, the higher the probability of it being of interest. Conversely, analogues attached to net negative votes are presumably of non-interest. The SSIR goal is to efficiently impute votes not only to training compounds but also to new test or validation analogues not present in the available sublibrary, which serves as a training set.
Once a training set library is given and the compounds of interest are defined, our SSIR code generates rules, keeps the significant ones according to the pre-established cutoff p-value, and assigns positive or negative votes to them. The SSIR SAR model consists of the whole set of selected rules and attached votes. The model is then ready to be applied to a test or external validation set. Future investigations are to be conducted in order to establish the efficiency of SSIR relative to the known (synthesized) percentage of the library and also to the effect of the actual distribution of substituent combinations.
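Chaining the previous helpers together gives a minimal sketch of the training step described here: select the significant rules, attach signed votes, and score any analogue by summing the votes of the rules that condense it. Function names and the data layout (a list of codified strings plus an interest dictionary) are illustrative assumptions, not the published implementation.

```python
def build_ssir_model(train, interest, sites, max_order, p_cut=0.005):
    """Select significant rules and attach votes.

    `train` is a list of codified analogues (strings such as 'ABBC') and
    `interest` maps each of them to 1 (of interest) or 0 (not of interest).
    Returns a list of (rule, vote) pairs with vote = +1 or -1."""
    a = len(train)
    b = sum(interest[m] for m in train)
    model = []
    for rule in enumerate_rules(sites, max_order):
        condensed = [m for m in train if rule_matches(rule, m)]
        c = len(condensed)
        if c == 0:
            continue
        d = sum(interest[m] for m in condensed)
        if p_value_at_least(d, c, b, a) <= p_cut:
            model.append((rule, +1))     # enriched in analogues of interest
        elif p_value_at_most(d, c, b, a) <= p_cut:
            model.append((rule, -1))     # depleted in analogues of interest
    return model

def score(model, analogue):
    """Net number of signed votes collected by a (training or new) analogue."""
    return sum(vote for rule, vote in model if rule_matches(rule, analogue))
```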
Cross-Validation
As a way to test the model's predictive capabilities, several cross-validation procedures are implemented in our SSIR program code. Here the leave-one-out (L1O) and the balanced leave-two-out (BL2O) tests have been considered. Both tests are iterative and require the generation of predictions either for single analogues (L1O) or pairs of them (BL2O). All the L1O and BL2O cross-validation cycles have been designed according to the Internal Test Sets (ITS) method [24][25][26], consisting of generating all the models from the beginning, i.e., selecting all the rules from scratch as if the left-out cross-validated analogue(s) were not present in the original library. This constitutes a realistic cross-validation simulation, which helps to detect overparameterization effects. Despite this being a time-consuming task, fortunately the accumulative nature of SSIR rules allows the final results to be obtained after fast reckoning procedures. For instance, in order to implement an L1O procedure it is only necessary to perform the first full training involving all the analogues and keep on disk or in computer memory the number of relevant compounds condensed by each rule and the number of these which are of interest. Then, during the cross-validation cycles, for each virtually left-out structure it is only necessary to do the simple re-count of condensed and active structures that would be obtained if the model were rebuilt from scratch without the cross-validated analogue. In essence, this is a way to see what effect the momentary absence of each cross-validated analogue produces on each rule. This allows immediate recalculation of the rule significances and reconsideration of the vote assignation.
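As a rough illustration of this fast re-count (an assumed implementation detail, not the published code), the per-rule counts (c, d) stored after the full training run only need to be corrected for the left-out analogue before the significance test is repeated:

```python
def l1o_prediction(left_out, counts, interest, a, b, p_cut=0.005):
    """L1O vote total for `left_out`, using only the stored per-rule counts.

    `counts[rule]` holds (c, d) from the full training run: the analogues
    condensed by the rule and, of those, the number tagged as of interest."""
    score_cv = 0
    a_cv, b_cv = a - 1, b - interest[left_out]
    for rule, (c, d) in counts.items():
        if not rule_matches(rule, left_out):
            continue                               # this rule cannot vote for the analogue
        c, d = c - 1, d - interest[left_out]       # momentary absence of the analogue
        if c == 0:
            continue
        if p_value_at_least(d, c, b_cv, a_cv) <= p_cut:
            score_cv += 1
        elif p_value_at_most(d, c, b_cv, a_cv) <= p_cut:
            score_cv -= 1
    return score_cv
```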
The same applies for the BL2O cross-validation procedure. The method consists of looping over all the pairs of active and non-active analogues and cross-validating them. This avoids generating all possible combinations of molecular pairs: only nact × nnact of them are needed, the product of the number of active (nact) analogues and the number of non-active (nnact) ones. For every left-out pair of molecules, two prediction votes are given and are added up. At the end of the full procedure, the sum of votes of the active and non-active compounds is divided by nnact and nact, respectively, giving a set of homogenized ranking votes. As for the L1O case, a fast version of the BL2O procedure has also been implemented, obtaining for each rule the new attached vote (if any) after carefully managing the respective condensation counts.
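A compact sketch of this BL2O loop, under the same assumptions and reusing the helpers from the previous snippets, might look as follows; it is only an assumed outline of the procedure described above:

```python
def bl2o_scores(train, interest, counts, a, b, p_cut=0.005):
    """Balanced leave-two-out votes, normalised as described in the text.

    `counts` maps each candidate rule to its (c, d) pair from the full
    training run; both members of each left-out pair are removed before
    the per-rule significance is re-tested."""
    actives = [m for m in train if interest[m]]
    inactives = [m for m in train if not interest[m]]
    total = {m: 0.0 for m in train}
    for act in actives:
        for inact in inactives:                          # n_act x n_nact pairs
            for rule, (c, d) in counts.items():
                hit_a = rule_matches(rule, act)
                hit_i = rule_matches(rule, inact)
                c -= hit_a + hit_i                       # remove the left-out pair
                d -= hit_a                               # only the active one is of interest
                if c == 0:
                    continue
                vote = 0
                if p_value_at_least(d, c, b - 1, a - 2) <= p_cut:
                    vote = 1
                elif p_value_at_most(d, c, b - 1, a - 2) <= p_cut:
                    vote = -1
                if vote:
                    total[act] += vote * hit_a
                    total[inact] += vote * hit_i
    # each active collects len(inactives) predictions and vice versa
    return {m: total[m] / (len(inactives) if interest[m] else len(actives))
            for m in train}
```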
Conclusions
SSIR, a systematic procedure used to rank series of combinatorial analogues, has been described. In addition to this general description of the method, an illustrative application example has been provided. The method has been shown to be fast and systematic, leading to good predictions in some cases. Some overparametrization features can be easily detected relying on cross-validation or randomization test procedures. It has also been shown that the SSIR method constitutes an inverse (Q)SAR tool engine. The balanced leave-two-out (BL2O) cross-validation procedure has been also described.
Evaluation of the residual stresses in 95wt%Al2O3-5wt% SiC wear protection coating using X-Ray diffraction technique
This paper aims to measure the residual stresses in wear protection coatings using the sin2ψ method based on the X-ray diffraction technique. The wear protection coating used in this study was a 95 wt% Al2O3-5 wt% SiC composite coating, while the bond coat was an AlNi alloy, produced by the flame spraying technique on a mild steel substrate. The diffraction angle, 2θ, is measured experimentally, and the lattice spacing is then calculated from the diffraction angle and the known X-ray wavelength using Bragg's law. Once the d-spacing values are known, they can be plotted versus sin2ψ (ψ is the tilt angle). This paper includes stress measurement of samples that exhibit a linear behavior, as in the case of a homogeneous isotropic sample in a biaxial stress state. The plot of d-spacing versus sin2ψ is then a straight line whose slope is proportional to the stress. On the other hand, a second set of samples showed oscillatory d-spacing versus sin2ψ behaviour. The oscillatory behaviour indicates the presence of an inhomogeneous stress distribution. In this case the X-ray elastic constants must be used instead of the Young's modulus (E) and Poisson's ratio (ν) values. These constants can be obtained from the literature for a given material and reflection combination. The residual stress calculated for the present coating was compressive (−325.6758 MPa).
Introduction
The growing use of thermal spray coatings, especially for high-temperature environmental resistance, requires assurance of coating durability, i.e. resistance to debonding, cracking and spallation, both during application and in service. Residual stresses are acknowledged to play important roles in coating durability; tensile residual stresses, in particular, commonly increase the susceptibility to cracking and debonding. Various studies have been concerned with the measurement of residual stresses in coatings [1][2][3][4]. Residual stresses develop during cooling of a thermal spray coating because of the dissimilarity of the thermal expansion coefficients of the coating and substrate. Depending on the relative values of the thermal expansion coefficients of the coating and substrate [5][6][7], the residual stress can be either tensile or compressive. Parameters that heavily affect the value of the residual stresses are the coating and substrate temperature during spray deposition and coating properties such as thickness, roughness and porosity. Experiments have demonstrated that residual stresses increase with coating thickness and deposition temperature [8]. Thermal barrier coatings (TBC) are the best way to protect components of gas turbine engines, and the demand for these coatings is becoming more pressing as higher-temperature engines are being developed [9][10][11][12][13][14]. Normally, the residual stresses of thermally sprayed coatings are produced by diverse mechanisms and sources [15][16][17]. In a thermal spray process with a high flame temperature, such as plasma spray, flame spray or arc spray, fully and partially molten particles striking the surface of the substrate flatten, solidify and cool down in a very short period of time (a few microseconds). After their solidification and adhesion onto the surface of the substrate, the shrinkage of the splats is constrained by the substrate material or the underlying solidified coating material, which results in tensile stresses called intrinsic, deposition or quenching stresses. Because of the very high temperature difference, a high theoretical residual stress of up to about 1 GPa can arise. However, because of the many relaxation mechanisms, such as microcracks, the sliding of the splats, plastic deformations and material creep, the experimentally measured values are much lower (<100 MPa) [18]. X-ray diffraction was used as a complementary technique; it can measure stress only in a thin surface layer, whereas the penetrating power of neutrons allows through-thickness stress profiling without any material removal [19]. This work aims to evaluate the residual stresses in a 95 wt% Al2O3-5 wt% SiC wear protection coating using the X-ray diffraction (XRD) technique.
Materials and parameters of the spraying processes
The coatings were applied by a thermal spraying method (flame spraying) in air on plain-carbon steel (AISI 1050) cylindrical substrates measuring 15 mm in diameter and 10 mm in height. The flame spraying system was designed and implemented in the welding laboratory of the Mechanical Engineering Department, College of Engineering, University of Diyala, Iraq. In the spray gun, the heating flame is produced by the burning of oxygen and acetylene; the molten powder is carried in the gas mixture and attaches to the surface to be coated owing to the high temperature of the torch, which can reach 3000 °C. The gas pressures must be controlled to match the flame to the powder feed. The oxygen pressure should be adjusted, according to the spray gun used, to no more than 4 bar, and the acetylene pressure to no more than 0.7 bar, before the spraying process. Two coating layers were used in this work: a bond coat of AlNi alloy, applied to reduce the mismatch in thermal expansion coefficient between the substrate and the composite ceramic coating, and the composite ceramic top coating layer. The conditions of the deposition process are listed in Table 1.
X-ray stress evaluation. XRD-based residual stress measurements were made using standard d-spacing vs. sin2ψ techniques with a Shimadzu X-Ray Diffractometer type XRD-6000 and CrKα radiation. The sin2ψ method [20,21] was used to determine the residual stresses in this work: the change of the lattice plane distance (d-spacing) of a phase, i.e., the peak shift of the corresponding reflection, was measured for a set of tilt ψ-angles. The residual stresses were then calculated from the linear regression of the d-spacing versus sin2ψ plot and the X-ray elastic constants. The coating and substrate physical properties (elastic modulus, Poisson's ratio, and coefficient of thermal expansion) and the thicknesses of the top coating, bond coat and substrate are shown in Table 2. The same deposition temperature was used in the present work during the coating process and in the modeling for the top coating, bond coat and substrate.
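For reference, the lattice spacing follows from the measured diffraction angle through Bragg's law; the relation below is the standard textbook form, stated here as background rather than quoted from the source:

```latex
n\lambda = 2 d \sin\theta
\quad\Longrightarrow\quad
d = \frac{n\lambda}{2\sin\theta}
```

where λ is the CrKα wavelength used in this work, θ is half of the measured diffraction angle 2θ, and n is the reflection order (usually taken as 1).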
3. Results and discussion
From the Shimadzu XRD-6000 X-ray diffractometer charts, the values shown in Table 3 were obtained. The stress can then be obtained from the following equation:
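The equation itself is not reproduced legibly in the source; a standard form of the sin2ψ relation for a biaxial stress state, given here as an assumption of what was intended, is:

```latex
\sigma_{\phi} \;=\; \frac{E}{(1+\nu)\, d_{0}}
\left( \frac{\partial d_{\phi\psi}}{\partial \sin^{2}\psi} \right)
```

where d0 is the stress-free (ψ = 0) lattice spacing and the bracketed term is the slope of the measured d-spacing versus sin2ψ line; when X-ray elastic constants are used, the factor E/(1+ν) is replaced by the reciprocal of the constant ½S2.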
3.3.1. Quenching stresses. Quenching stresses are of a level that corresponds to the values determined at the top surface of free-standing coatings (Fig. 2a). They are always tensile and roughly constant through the deposit thickness. XRD data of quenching stresses, determined on the surface of different free-standing samples, are shown in Table 3.
Cooling stresses.
The cooling stresses can be related to the effect of the coating-substrate connection by studying the residual stress relief in free-standing coatings. For instance, the XRD data for the 95 wt% Al2O3-5 wt% SiC flame-sprayed coating are given in Table 3. The composite coating has a different behavior and the values remain constant because of its brittleness, which induces mechanisms of strain accommodation inside the deposit. The value of the residual stress is σ = −325.6758 MPa, calculated using equation (1). These cooling stresses are due to the mismatch in thermal contraction between coating and substrate. They contribute to a stress gradient in dense coatings with a relatively high stiffness, because they are not affected by the release of quenching stress. In that case the cooling component of the residual stress state increases with the coating thickness. The spalling of some thick coatings after spraying and during cooling to room temperature can be explained by the effects of the cooling stresses, which act at the interface with the substrate as a consequence of thermal mismatch strains. In fact, the difference of the stresses determined in the surface layer of these samples is opposite in sign to the cooling stresses calculated for the coating at the interface with the substrate. They are representative of the maximum bending stress at the surface side of the coatings while they are connected to the substrates. Cooling stresses, being responsible for a stress gradient through the coating thickness, explain the spalling of thick coatings affected by their action. Industrial practice shows that such a phenomenon occurs not during the spraying process, but after some time has elapsed, during the cooling to room temperature. This is explained by the bending moment due to the stress gradient, which would reach considerable magnitude in the thickest coatings. The residual stresses within the flame-sprayed composite ceramic top coating are not distributed uniformly through the thickness, but change gradually, with the maximum in-plane compressive stress at the interface, and diminish away from the interface, presenting a very small value of residual stress at the free surface; this is confirmed by the study in [27].
Conclusions
The X-ray diffraction technique has been demonstrated to be a powerful tool for evaluating and analyzing residual stress distributions in composite wear protection coating systems. In contrast to rather 'simple' single-film-on-substrate systems, which are usually analyzed by means of the sin2ψ method or related techniques, stress evaluation in complex coating systems requires the application of advanced methods that are different for the coating and the substrate. From the present work the following points may be drawn:
1. The analysis of the in-plane stress was carried out in the individual layer of the 95 wt% Al2O3-5 wt% SiC flame-sprayed composite coating.
2. The evaluated residual stress is σ = −325.6758 MPa, calculated by least-squares fitting of the corresponding function to the experimental sin2ψ data. These cooling stresses are due to the mismatch in thermal contraction between coating and substrate.
3. The residual stresses within the flame-sprayed composite ceramic top coating are not distributed uniformly through the thickness, but change gradually, with the maximum in-plane compressive stress at the interface, and diminish away from the interface, presenting a very small value of residual stress at the free surface.
Whey-Adapted versus Natural Cow’s Milk Formulation: Distinctive Feeding Responses and Post-Ingestive c-Fos Expression in Laboratory Mice
The natural 20:80 whey:casein ratio in cow’s milk (CM) for adults and infants is adjusted to reflect the 60:40 ratio of human milk, but the feeding and metabolic consequences of this adjustment have been understudied. In adult human subjects, the 60:40 CM differently affects glucose metabolism and hormone release than the 20:80 CM. In laboratory animals, whey-adapted goat’s milk is consumed in larger quantities. It is unknown whether whey enhancement of CM would have similar consequences on appetite and whether it would affect feeding-relevant brain regulatory mechanisms. In this set of studies utilizing laboratory mice, we found that the 60:40 CM was consumed more avidly than the 20:80 control formulation by animals motivated to eat by energy deprivation and by palatability (in the absence of hunger) and that this hyperphagia stemmed from prolongation of the meal. Furthermore, in two-bottle choice paradigms, whey-adapted CM was preferred against the natural 20:80 milk. The intake of the whey-adapted CM induced neuronal activation (assessed through analysis of c-Fos expression in neurons) in brain sites promoting satiation, but importantly, this activation was less pronounced than after ingestion of the natural 20:80 whey:casein CM. Activation of hypothalamic neurons synthesizing anorexigenic neuropeptide oxytocin (OT) was also less robust after the 60:40 CM intake than after the 20:80 CM. Pharmacological blockade of the OT receptor in mice led to an increase in the consumption only of the 20:80 CM, thus, of the milk that induced greater activation of OT neurons. We conclude that the whey-adapted CM is overconsumed compared to the natural 20:80 CM and that this overconsumption is associated with weakened responsiveness of central networks involved in satiety signalling, including OT.
Introduction
Nutritional benefits of cow's milk (CM) consumption, one of the most common elements of daily diets on a global scale, are to a large extent associated with its protein composition [1][2][3][4]. Whey and casein are the predominant components of the protein fraction [5]. However, the protein fraction of CM and other kinds of animal milk (e.g., goat's milk (GM)) contains the natural whey:casein ratio of about 20:80, which is in contrast to the 60:40 ratio of human milk [6]. Many milk products for human nutrition-most commonly infant formulas-have therefore whey added in order to parallel the 60:40 ratio of human milk [7][8][9].
The aforementioned reports focus on the effects of whey or casein alone. One should not apply the oversimplified assumption that whey and casein combined, as part of a complex milk-based diet, produce appetite and metabolic effects that are either negligible or merely 'proportional' to the whey:casein content. Also, the animal source of a given milk formulation is extremely likely to be a modifying factor, as milk derived from different species possesses unique characteristics [28].
Yet despite this frequent modification to the protein fraction, surprisingly little is known about how the shift from the natural to the whey-enhanced ratio in commonly consumed kinds of milk (importantly, in CM), affects appetite and post-ingestive processes. El Khoury et al., who gave adults a 20:80 or whey-adapted 60:40 milk beverage, found that the whey-adapted milk consumed with cereal reduced postprandial glycemia independently from insulin, mainly via delayed gastric emptying [22]. The preprandial glucose was lower and GLP-1 release was elevated after the 60:40 milk. In animal studies, obese rats displayed better glucose tolerance when eating whey instead of whey plus casein [29,30].
In our recent mouse study, which was the first to directly examine how the 20:80 versus 60:40 whey:casein milk intake affects feeding and feeding-related neural mechanisms, the whey-adapted formulation was found to be overconsumed, and this avid consumption was associated with differences in brain activation in regions controlling appetite and in melanocortin mRNA levels [31]. Importantly, this initial study was carried out using goat's milk, whose effects on numerous physiological parameters, from gastrointestinal to sensory to immune, are different from those of cow's milk. Thus, though the above-mentioned murine study was informative in determining that whey-adapted GM produces different appetite-related effects from the natural control (20:80) formulation, it is unknown whether the same whey:casein ratio enhancement in CM, the most commonly used milk worldwide, would have similar consequences for appetite.
Thus, the current study, designed to bridge this gap, investigated whether the whey-adapted (60:40) CM formulation was consumed in different quantities than the control natural 20:80 CM by laboratory mice. In the single tastant (no diet choice) scenarios, we studied natural control vs. whey-adapted CM intake in animals motivated to eat mainly by calorie needs (after energy deprivation) or by palatability (in non-deprived individuals). In two-bottle choice tests, relative preference for the 60:40 vs. natural control CM formulation was assessed.
Taking into account the earlier report showing that the 60:40 GM is preferred over the 20:80 GM and that the relative preference for GM is higher than for CM, we examined whether whey-adapted CM is not only preferred over the natural control CM, but also over the natural 20:80 GM formulation. We hypothesized that appetite responses to the 20:80 vs. 60:40 CM stem from the central mechanisms being distinctively affected by whey:casein ratio enhancement. Therefore, activation of feeding-related brain circuitry (determined through analysis of c-Fos expression) was analysed in animals just after the completion of a 1-h meal in which similar amounts of the control 20:80 vs. 60:40 CM were ingested.
In order to further substantiate the hypothesis that greater consumption of the 60:40 CM stems-at least to some extent-from suppressed satiety signalling in the brain, we examined whether the percentage of c-Fos-positive (activated) neurons synthesizing a key satiety neuropeptide, oxytocin (OT) is lower after intake of the 60:40 than 20:80 whey:casein CM. Typically, the level of activation of hypothalamic OT neurons parallels the magnitude of a satiety response [32]. We subsequently tested whether pharmacological blockade of the receptor that binds OT elevates consumption of the 20:80 CM but not 60:40 CM, thus of the formulation which activates the anorexigenic OT system in a more robust fashion. Pharmacological blockade of the OT receptors was done with the antagonist that penetrates the blood-brain barrier, L-368,899 [33,34].
Animals
The studies were performed on adult male C57Bl mice. The species/strain/age combination was chosen as one of the most frequently used food intake laboratory animal models that had been studied also in relation to goat milk formulations differing in whey:casein ratios (feeding behaviour and gene expression), OT receptor blockade and diet preference, and c-Fos and feeding [33][34][35]; thus, it can be treated as a reliable reference model for the data obtained here. The animals were housed individually in a temperature- and humidity-controlled (22 °C; relative humidity 40-45%) facility with a 12:12 h light:dark schedule (lights on at 09:00). The mice had unlimited access to standard laboratory chow (Diet 86, Sharpes Stock Feed, Wairarapa, New Zealand) and water unless stated otherwise. Chow was presented in the overhead hopper, whereas water, in a 150-millilitre bottle with a 10-cm nozzle, was placed in the conventional overhead lid bottle holder. Milk formulations (DGC, Ltd., Hamilton, New Zealand) were available periodically as per the detailed description below. A different cohort of animals was used in each study unless noted otherwise. The procedures had received prior approval by the University of Waikato's institutional animal ethics committee (approval #1057).
Milk Formulations Used in the Studies
The control CM formula contained the natural whey:casein protein ratio of 20:80 (Control 20:80), whereas the whey-adapted CM test formula had 60% whey and 40% casein (60:40). The composition of the prepared solutions can be found in Table 1. In addition to CM formulations, in Experiment Section 2.3.4, we used equivalent whey:casein ratio GM-based formulations that had been previously tested in laboratory animal feeding trials [31]. All formula compositions follow Codex standard (CODEX STAN 72-1981; based on nutrients/energy value), which requires lactose to be within the range of 5.4-9.8 g/100 mL. The solutions were prepared just before use by being reconstituted in tap water. In order to avoid neophobia, all mice were pre-exposed to the milk on at least two occasions (1 h each) within two weeks prior to the trials. Milk was given to the animals in 150-millilitre bottles equipped with 10-cm-long metal nozzles. The bottles were placed manually at the scheduled time of fluid presentation in the standard cage lid bottle holder at a 45-degree angle. The positioning allowed animals to have unobstructed access to the nozzles.
Energy Deprivation-Induced Intake of Whey-Adapted vs. Control CM Formulation
Before gaining episodic access to a single bottle of milk, mice (n = 10/group; individually housed) were deprived overnight of standard chow (water was available during the time of energy deprivation). At 09:00 a bottle containing either the Control 20:80 whey:casein CM formulation or the whey-adapted 60:40 CM was placed in the cage for 3 h. Water was removed for the 3-h meal. Milk consumption was measured in grams at 1 and 3 h.
Intake of Whey-Adapted vs. Control CM Formulation in Non-Deprived Animals
The same cohort of animals as in Section 2.3.1 was used here (7 days of no treatment elapsed between studies). On the day of the study, standard chow and water were removed from cages (at 09:00) and a bottle containing either the Control 20:80 whey:casein CM formulation or the whey-adapted 60:40 CM (n = 10/group; individually housed) was placed in the cage for 3 h. Milk consumption was measured in grams at 1 and 3 h.
Preference for the Simultaneously Presented Whey-Adapted vs. Control CM Formulation (Two-Bottle Choice)
Mice (n = 7-8/group; individually housed) were acclimatized to the two-bottle presentation of the Control 20:80 whey:casein CM formulation and the whey-adapted 60:40 CM on two separate occasions one week prior to the study. On the experimental day at 10:00, standard chow and water were removed and animals were simultaneously presented with two bottles, one containing the Control 20:80 CM formulation and the other, the 60:40 CM. Intake was measured after 2 h by weighing the bottles and the data were expressed in grams.
Preference for the Simultaneously Presented Whey-Adapted vs. Control CM and GM Formulations (Two-Bottle Choice)
In earlier laboratory animal studies [35], we have found that GM is more preferred to CM. We have also reported that whey-adapted GM is consumed more avidly than the natural 20:80 GM formulation [31].
Here, by using two-bottle tests (the same as described above in Section 2.3.3; 2 h, water and chow removed during the test) in separate cohorts of mice (n = 7/group; individually housed), we investigated whether the whey-adapted formulations are preferred over the natural (20:80) whey:casein milks regardless of which species the milk was derived from. In other words, we evaluated whether the enhanced whey:casein ratio in CM supersedes the gustatory effect of milk derived from the previously evaluated GM.
We first confirmed that the whey-adapted GM is indeed more preferred than the Control 20:80 GM. Then the two-bottle choice test was done in mice that received access to the control vs. whey-adapted milk on the experimental day (Control 20:80 CM vs. whey-adapted 60:40 GM; Control 20:80 GM vs. whey-adapted 60:40 CM). Finally, we examined preference when formulations containing the same whey:casein ratio were presented simultaneously (Control 20:80 CM vs. Control 20:80 GM; whey-adapted 60:40 CM vs. whey-adapted 60:40 GM). Formulation intake after 2 h was measured in grams.
c-Fos Expression in the Feeding-Related Brain Circuit after Consumption of the Same Amount of the Control 20:80 versus Whey-Adapted CM Formulation
As whey-adapted CM promoted more avid consumption (elevated preference compared to the natural 20:80 control and in the no-choice scenarios, a prolonged meal) we wished to examine whether consumption of the same amount of the Control 20:80 CM vs. the whey-adapted 60:40 CM formulation led to a different level of neuronal activation of brain areas that control eating behaviour. For the analysis, we chose regions that participate in the regulation of feeding for energy (hunger, satiation) and pleasure [36]: the medial preoptic area, paraventricular nucleus, supraoptic nucleus, arcuate nucleus, ventromedial hypothalamic nucleus, dorsomedial hypothalamic nucleus, lateral hypothalamic area, dorsal motor nucleus of the vagus, nucleus of the solitary tract, area postrema, bed nucleus of the stria terminalis, nucleus accumbens shell, nucleus accumbens core, and central nucleus of the amygdala. A different pattern of neuronal activation after consumption of a similar amount of milk would shed more light on whether the readiness of animals to consume more food given the whey-adapted formulation stems from dysregulation of hunger/satiety signalling and/or different reward processing.
Neuronal activation was determined by immunohistochemical detection of an immediate-early gene product, c-Fos. Maximum c-Fos immunoreactivity (IR) occurs about 60-90 min after the stimulus (see, e.g., [37,38]). In our study, we examined c-Fos IR corresponding to the beginning of the meal (to establish baseline activity without food consumption) and after 1-h exposure to the Control 20:80 vs. whey-adapted 60:40 CM.
On the experimental day, chow and water were removed from the cages at 10:00. The animals (n = 7-8/group) were divided into three groups: Group 1 did not receive any milk and was perfused an hour later in order to visualize baseline c-Fos IR corresponding to the beginning of the meal. Group 2 was given a bottle containing the Control 20:80 CM for 1 h. Group 3 received the whey-adapted 60:40 CM formulation for 1 h. Bottles were removed after 1 h, but no food or water was returned to the cages until the animals from Groups 2 and 3 were perfused an hour later (i.e., at 12:00). Since the intake of the Control vs. whey-adapted CM formulations does not differ during the first hour of consumption, but instead, the intake of the whey-adapted CM is prolonged, the mice from both groups consumed similar amounts of the milk during the hour (2.4 ± 0.15 g).
Activation of Oxytocin (OT) Neurons after Consumption of the Same Amount of the Control 20:80 versus Whey-Adapted CM Formulation
Having noted differences in c-Fos immunoreactivity in the hypothalamus, we examined whether the percentage of activated (i.e., c-Fos-positive) neurons synthesizing an anorexigen, oxytocin (OT), differs after consumption of the same amount of the Control 20:80 versus whey-adapted milk. This was done in order to assess whether OT, one of the key neuropeptidergic satiety systems, is engaged more after consumption of one diet versus the other.
In order to visualize c-Fos expressing OT cells, after the completion of the c-Fos staining, hypothalamic sections containing the paraventricular (PVN) and supraoptic (SON) nuclei (thus, the two regions where OT cells are amassed [32]) were further stained for visualization of OT. The immunohistochemical procedure was similar to the one described above. For the primary antibody incubation, rabbit anti-OT was used (1:25,000; Millipore, Temecula, CA, USA), and nickel sulphate was not added to the DAB reagent. Consequently, OT neurons were stained brown instead of the black colour obtained for c-Fos.
In the c-Fos+OT staining, we determined the total number of OT neurons (this total number of OT neurons is typically unaffected by food intake) and the number of OT neurons positive for c-Fos (in earlier studies, this has been found to be very low at the beginning of a meal and it gets elevated as the amount of ingested food increases [39]). Cells were counted bilaterally in the PVN and SON, and the percentage of OT neurons containing Fos-positive nuclei was calculated.
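As a concrete illustration of this tabulation, a minimal sketch follows; the cell counts are hypothetical and serve only to show how the percentage of c-Fos-positive OT neurons per region would be derived from bilateral counts.

```python
# Minimal sketch of the percentage calculation for c-Fos-positive OT neurons.
# All counts below are hypothetical placeholders, not data from the study.
from statistics import mean

# Per-animal counts (left + right hemispheres summed):
# (total OT neurons, OT neurons with c-Fos-positive nuclei) in the PVN
pvn_counts = [(212, 58), (198, 41), (225, 66)]

percent_activated = [100.0 * fos / total for total, fos in pvn_counts]
print(f"PVN: {mean(percent_activated):.1f}% of OT neurons c-Fos-positive")
```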
Effect of Pharmacological Blockade of the OT Receptor on Consumption of Whey-Adapted vs. Control CM Formulation in Non-Deprived Animals
It has been previously reported that peripheral administration of the blood-brain barrier-penetrant OT receptor antagonist, L-368,899, in nondeprived mice elevates consumption of diets whose intake produces a particularly robust activation of the hypothalamic OT system [34]. Thus, considering that the Control 20:80 formulation generated a greater percentage of c-Fos-positive OT neurons in the PVN (in the c-Fos+OT experiment above), we sought to investigate whether consumption of this milk is differently affected by L-368,899 than intake of the whey-adapted formulation.
On the day of the study, chow and water were removed from cages (at 09:00) and a bottle containing either the Control 20:80 whey:casein CM formulation or the whey-adapted 60:40 CM was placed in the cage for 3 h. The formulations were the only source of calories and fluid for the 3-h meal. Ten minutes prior to the presentation of either diet, the mice were injected intraperitoneally (IP) with isotonic saline (vehicle) or 1 mg/kg b. wt. L-368,899 (the dose known to elevate intake of diets that induce the most robust response of the central OT system [33,34]). The amount of consumed formulations (in grams) was assessed at 3 h for saline vs. L-368,899-treated groups (n = 9-11 per group) separately in the cohort given the Control 20:80 CM and in the cohort given the whey-adapted 60:40 milk.
Statistical Analyses
Feeding was standardized per gram of body weight. Two-group comparisons of food intake and immunohistochemistry data were analysed with Student's t-test. In the feeding studies where three groups were compared with each other, a one-way ANOVA followed by Tukey's post-hoc test with a correction for multiple comparisons was used. Differences were considered significant for p < 0.05.
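A minimal sketch of this analysis pipeline is given below; the intake values and body weights are hypothetical, and the statistical software actually used by the authors is not specified in this section, so SciPy and statsmodels are assumed here purely for illustration.

```python
# Sketch of the described analysis: intake standardized per gram of body
# weight, Student's t-test for two groups, one-way ANOVA + Tukey for three.
# All data below are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def standardize(intake_g, body_weight_g):
    # Intake (g) divided by body weight (g), as in the paper's normalization
    return np.asarray(intake_g) / np.asarray(body_weight_g)

control = standardize([2.1, 2.4, 2.0, 2.3], [24.0, 25.5, 23.8, 26.1])
whey    = standardize([3.0, 3.3, 2.9, 3.1], [24.9, 25.0, 24.2, 25.7])

# Two-group comparison (significant if p < 0.05)
t, p = stats.ttest_ind(control, whey)
print(f"t = {t:.2f}, p = {p:.4f}")

# Three-group comparison: one-way ANOVA followed by Tukey's post-hoc test
baseline = standardize([0.0, 0.1, 0.0, 0.1], [24.5, 25.1, 23.9, 25.8])
f, p_anova = stats.f_oneway(baseline, control, whey)
values = np.concatenate([baseline, control, whey])
groups = ["baseline"] * 4 + ["control_20_80"] * 4 + ["whey_60_40"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```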
Results
In the first hour of a meal in the no-choice feeding scenarios, both overnight-deprived animals and non-deprived mice drank similar amounts of the Control 20:80 and the whey-adapted 60:40 CM (Figure 1A,B). The majority of feeding activity occurred during this first hour. However, in the next 2 h of milk availability, animals given the whey-adapted CM formulation continued consumption at a level higher than those given the Control 20:80 milk. This was the case for the hungry mice (p = 0.007) eating primarily for energy as well as the non-deprived ones (p = 0.029) that were eating mainly for palatability.
In the two-bottle choice test, the animals drank approximately four times more of the whey-adapted CM than its natural 20:80 CM equivalent (p < 0.001; Figure 1C). This higher preference for the CM was paralleled in the preference experiment utilizing GM-based formulations: the mice chose the 60:40 whey:casein ratio (p < 0.001; Figure 2A). The whey-adapted CM was also preferred over the natural Control 20:80 GM (p < 0.001; Figure 2B), and the whey-adapted GM was chosen over the natural Control 20:80 CM (p < 0.001; Figure 2C). Finally, when whey:casein content-matched formulations were presented simultaneously, mice showed preference for the GM-based ones: significant for the 60:40 GM vs. 60:40 CM (p = 0.0105; Figure 2D) and a trend toward significance for the natural Control 20:80 GM vs. 20:80 CM (p = 0.109; Figure 2E). Compared to the baseline (at meal onset), CM formulation consumption led to changes in the activity of several brain areas related to feeding (Figure 3A). An increase in the number of c-Fos-positive nuclear profiles regardless of the whey:casein ratio was found in the hypothalamic paraventricular (PVN; for the Control, p = 0.001; for the whey-adjusted, p = 0.0468) and ventromedial nuclei (VMH; for the Control, p = 0.0032; for the whey-adjusted, p = 0.0003) and in the central nucleus of the amygdala (CEA; for the Control, p = 0.0006; for the whey-adjusted, p = 0.0143). However, in the PVN, the level of c-Fos immunoreactivity was significantly higher (two-fold) in the group consuming the Control 20:80 than the whey-adapted diet (p = 0.0312). In the dorsomedial hypothalamic nucleus (DMH), only the intake of the natural Control 20:80 CM led to a significant increase (p = 0.0211), whereas the whey-adapted CM effect did not reach significance. In the lateral hypothalamus (LH), neither of the groups was significantly different from the baseline, but the effect on c-Fos of the Control 20:80 CM was significantly higher than that of the 60:40 formulation (p = 0.0157). Finally, in the brainstem nucleus of the solitary tract (NTS), a significant above-baseline increase in c-Fos IR was detected after the Control 20:80 CM intake (p = 0.005), whereas the higher mean after the 60:40 CM consumption did not reach the threshold of significance. It should be noted, though, that the difference in meal-end NTS activity was detected also between the natural Control 20:80 CM vs. the 60:40 CM formulation (p = 0.0396; Figure 3A).
Double immunohistochemistry for c-Fos and the anorexigen, OT, revealed that while consumption of either milk type led to an increase in the percentage of activated OT neurons in the SON (20:80 CM p = 0.0071; 60:40 CM p = 0.0477) and PVN (20:80 CM p < 0.001; 60:40 CM p = 0.0458), in the PVN consumption of the 20:80 CM produced a significantly higher percentage than the intake of the whey-adapted milk (p = 0.0147; Figure 3B). Photomicrographs of sites in which c-Fos levels differed between the 20:80 CM and the whey-adapted formula as well as of the c-Fos-positive OT neurons in the PVN are shown in Figure 4.
Finally, pharmacological blockade of the OT receptor with L-368,899 in nondeprived animals given 3-h access to the Control 20:80 CM significantly increased consumption compared to the saline-injected mice (p = 0.0315). On the other hand, the drug treatment had no effect on the intake of the whey-adapted formulation: saline controls and L-368,899-injected animals ingested the same amounts of the 60:40 whey:casein CM (Figure 5).
Figure 5. Intake of the individually presented natural Control (20:80 whey:casein) CM and the whey-adapted (60:40 whey:casein) CM in non-deprived mice during 3-h access to the diets. Ten minutes before the meal, the animals were injected with isotonic saline or a blood-brain barrier-penetrant OT receptor antagonist, L-368,899 (1 mg/kg). Intake was measured at 3 h. * p ≤ 0.05.
Discussion
Establishing a proper level of protein in a diet, including in milk formulations, is driven by the need for adequate delivery of amino acids that facilitate structural integrity and biochemical and signalling processes of the organism. It must balance the avoidance of undesirable effects of excess protein (low acceptability of high-protein foods, hypercalciuria due to protein overload, and in infant nutrition, undesirable effects on kidneys and on weight gain). While one obvious approach is to change the overall dietary protein content, another strategy to improve nutritional and metabolic effects of protein that has been particularly used in cow milk formulations is to adjust the whey:casein ratio. Thus, CM and milk derived from other non-bovine species are oftentimes whey-adapted for human consumption, i.e., the natural whey:casein ratio of 20:80 is adjusted to 60:40. On the one hand, this strategy appears sound as the 60:40 ratio reflects that of human milk. On the other hand, our understanding of the appetitive and metabolic consequences of this switch from the 20:80 to the whey-enhanced content is fragmented: it frequently relies on data that pertain to either whey or casein alone, and not on the 20:80 vs. 60:40 specific ratios [16,19,26,40]. To further complicate matters, CM and other milk (including the second most commonly consumed worldwide, i.e., GM) regardless of being matched for calorie density and macronutrient composition, differ in physical and chemical characteristics.
The current set of studies shows for the first time that whey-adapted CM is consumed more avidly than the natural control 20:80 whey:casein CM. Notably, we observed that a higher intake of the whey-adapted CM occurred regardless of the energy status of the animal, i.e., both calorie-deprived and non-deprived animals given the 60:40 CM as a single-tastant meal ingested more of this milk. The fact that animals readily ingested CM even when they were not hungry indicates that CM at both whey:casein ratios is palatable. It should be noted, however, that in those two feeding scenarios, during the first hour of the meal, consumption of the control 20:80 CM was the same as of the 60:40 CM. It suggests that the motivation to eat was initially similar in both cases. Importantly, it was during the later phase of the meal when differences in ingestive behaviour became apparent. During that time the 20:80 CM controls virtually ceased consumption, but the mice given the whey-adapted CM continued their feeding activity.
While the effect of whey:casein adjustment on single-tastant intake bears some similarities between CM and GM, as previously reported for the latter [31], the two cannot be directly compared. That is because the earlier report used shorter mealtimes and, in the post-deprivation feeding, utilized simultaneous presentation of GM and chow. It seems, though, that the enhanced intake of the whey-adapted formulation was achieved in that GM study despite a shorter timeframe of milk presentation (2 rather than 3 h), which may potentially indicate a more robust and immediate effect.
That palatability-driven processes contribute to the intake of the whey-adapted CM is evident through the preference experiments in which animals were given a choice between two kinds of milk. Mice consumed approximately four times more of the 60:40 CM than of the control 20:80 formulation. This four-fold change is a stark difference in preference: as a comparison, earlier studies employing short-term, simultaneous presentations of isocaloric and palatable liquids have shown differences within 10% of the consumed volume [33,34]. Importantly, the 60:40 whey:casein CM formulation was more preferred than the natural 20:80 GM, even though the formulations were similar in energy content and macronutrient composition, and that 60:40 GM had been reported to be more preferred than 20:80 GM in laboratory animals [35]. Our additional preference studies showed that, indeed, when matched for the whey:casein ratios, GM retained its status as the preferred milk. However, enhancing the proportion of whey content superseded the effect of the GM preference reported previously.
A prolonged meal is oftentimes associated with the combination of enhanced palatability of food and delayed satiation. In fact, one of the processes that facilitate enhanced consumption of tasty foods is weakened satiety. For example, at the brain level, it has been shown that agonists of opioid receptors, which mediate eating for palatability by suppressing neural and endocrine mechanisms that promote a feeling of fullness, delay termination of consummatory behaviour, whereas opioid receptor antagonists bring about early satiation, especially when administered in conjunction with the presentation of palatable diets [41][42][43]. This interaction between neural systems mediating reward and satiety occurs primarily at the brain level in the hypothalamus: neural mediators of palatability-driven feeding (e.g., opioids) tend to suppress activation of hypothalamic areas, such as the paraventricular nucleus, which synthesize a number of anorexigenic peptides (including OT) [41,44,45].
Interestingly, our c-Fos expression mapping throughout the brain circuit regulating various aspects of appetite supports the hypothesis of diminished satiety signalling in response to whey-adapted CM contrasted against the natural control 20:80 milk. Compared to the neuronal activation baseline (i.e., at the moment of the presentation of milk), an increase in the number of c-Fos-positive nuclear profiles after CM regardless of the whey:casein ratio was found in the hypothalamic paraventricular nucleus (PVN, the region whose lesion leads to overeating and obesity [46]), which is in line with previous reports examining c-Fos expression after food consumption [47]. Notably, the level of neuronal activation was significantly higher in the 20:80 CM controls than in the whey-adapted group. The dorsomedial hypothalamic c-Fos IR was significantly higher than baseline only in the 20:80 CM- but not in the 60:40 CM-fed mice, whereas in the lateral hypothalamus, the 60:40 CM mice showed lower activation than the natural whey:casein ratio CM-fed controls. A similar pattern of diminished post-meal activity after the 60:40 CM was found in the brainstem nucleus of the solitary tract, which serves as a 'relay area' for communication between the periphery (including the gut) and the hypothalamic and extrahypothalamic sites [48,49]. The nucleus of the solitary tract sends projections to, among others, the paraventricular, dorsomedial, and lateral hypothalamic nuclei, thus regions whose activation was affected differently by the natural versus whey-adapted CM [49]. Finally, a similar post-feeding c-Fos response was noted in the ventromedial hypothalamus, a glucose-sensing area [50].
What increases our confidence in stipulating that the satiating effects of the 60:40 CM are subpar compared to the 20:80 formulation is the fact that the percentage of c-Fos positive (activated) OT neurons in the PVN after consumption of the whey-adapted milk is lower than after the intake of the 20:80 CM. OT is known to decrease feeding [32,39]. The strength of an anorexigenic/satiating stimulus is proportional to the level of activation of PVN OT cells [32]. Hence, it is not surprising that prior to the beginning of the meal in this study, the level of OT neuronal activation was very low. It was also expected that regardless of a milk type, after consumption of milk, there would be an increase in OT activity. However, the fact that the ingestion of a similar amount of the 60:40 and 20:80 CM led to a greater increase in the percentage of c-Fos-positive OT neurons after the natural CM suggests that the whey-adapted formula generated a less robust response of the key neuropeptidergic satiety system, thereby facilitating prolonged consumption. This is particularly important considering that the differences in c-Fos expression between the two kinds of milk are seen in the PVN but not in the SON. This is because unlike the SON, which supplies OT to the neurohypophysis, PVN OT neurons send projections not only to the pituitary but also to many brain target areas, including those that regulate reward, gastrointestinal functions, and metabolic processing [36,39].
The involvement of OT in the differential appetitive response to the 20:80 vs. 60:40 CM is further substantiated by the injection study involving the blood-brain barrier-penetrant OT receptor antagonist: L-368,899 administration (limiting endogenous OT binding to its receptors) [33,34] increased the intake of the 20:80 CM, but not of the whey-adapted milk.
Among the forebrain sites that mediate primarily eating for pleasure and motivation to eat, both kinds of milk induced activation of the central nucleus of the amygdala, which is in concert with the findings that the control 20:80 and whey-adapted CM are palatable. We did not find differences in the magnitude of the amygdala activation though. Somewhat surprisingly we did not see above-baseline activity in the remaining regions that promote reward (nucleus accumbens core and shell, bed nucleus of the stria terminalis). However, one should note that it may stem from the fact that in order to ensure similar volumes of consumption of both kinds of milk, our paradigm was limited to only 1 h of a meal, which would have likely been extended to 2-3 h should the animals have been allowed to continue feeding. Furthermore, this was an episodic/ad hoc meal rather than a scheduled presentation of palatable food, and the latter would have been more effective in inducing pronounced c-Fos changes in reward areas [51,52].
It should be noted that while whey is often added to products based on ruminant kinds of milk to replicate the 60:40 ratio of human milk, this exacerbates differences in individual proteins, for example, by further increasing the concentration of beta-lactoglobulin, which is not present in human milk (see Table 1). More importantly, the present study demonstrates there are other neuronal consequences of adding whey from ruminant milk. Due to obvious ethical considerations, we were unable to compare these results with human milk. There is evidence of increased weight gain and risk of obesity in infants with a high protein intake associated with formulas with added whey compared to breastfed infants [53]. However, clinical studies have shown that infants fed formula based on GM without whey and at a low protein content have a weight gain [54] and body composition [55] comparable to those of breastfed infants.
In sum, we conclude that the 60:40 whey:casein CM formulation is consumed in greater quantities in prolonged meals by both energy-deprived and non-deprived mice. Mice show a heightened preference for whey-adapted CM in choice scenarios and this phenomenon expands upon GM. Though the intake of the whey-adapted CM induces neural mechanisms promoting satiation, this activation is less pronounced than the one induced by the natural 20:80 whey:casein CM.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the University of Waikato's institutional animal ethics committee (approval #1057 from 18 December 2018).
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
2022-01-09T16:03:35.994Z
|
2022-01-01T00:00:00.000
|
{
"year": 2022,
"sha1": "82741d63cca30c00ab53b75588cd2ec56ec98fa5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/2/141/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d85d0500f8515ce08cbce1d0de74d1cec9cf1268",
"s2fieldsofstudy": [
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
246703546
|
pes2o/s2orc
|
v3-fos-license
|
The Fight Against Severe COVID-19: Can Parasitic Worms Contribute?
As of 31 December 2021, COVID-19, caused by infection with SARS-CoV-2, had been confirmed in more than 285 million people worldwide, with more than 5.4 million dead resulting in a case fatality ratio of 1.89%. This figure is likely to be vastly underestimated, as a proportion was not registered officially as COVID-19-related/excess deaths. The United States recorded the highest number (54,656,866) of confirmed cases. In Africa, there are 47 countries affected, with 7,065,972 cumulative cases and 155,081 deaths were recorded by 31 Dec 2021 (WHO African Region numbers at a glance). To date, the currently approved vaccines have been effective in preventing COVID-19, particularly in regards to severe symptoms (1). However, several immune escape mechanisms of SARS-CoV-2 and the rapid emergence of mutated variants (2) pose a great challenge to the efficacy of these vaccines. Patients with severe COVID-19 tend to have a high concentration of pro-inflammatory cytokines (IL-2, IL-7, IL-10, G-CSF, TNF-a, CXCL10, MCP1, and MIP1a) (3), suggesting that a cytokine release syndrome (CRS) (4) (also loosely referred to as a cytokine storm), which is a form of life-threatening systemic inflammatory response syndrome (SIRS), can often feature in severe COVID-19 infections. Among the increased levels of inflammatory mediators in COVID-19 patients, the plasma levels of IL-6, an amplifier in the cytokine storm, are significantly elevated in non-survivors compared with survivors (5). The main cause of death of COVID-19 is due to severe acute respiratory distress syndrome (ARDS) with this high severity being dependent on the cytokine storm. Sepsis has been defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection (6). Endothelium damage, vascular permeability, microvascular dysfunction, coagulation pathway activation, and impaired tissue oxygenation occur during sepsis and can lead to multisystem organ dysfunction (MODS), organ failure and consequently a potentially lethal outcome. As many patients with severe COVID-19 show typical clinical manifestations of septic shock, with other symptoms meeting the diagnostic criteria for sepsis and septic shock according to the Sepsis-3 International Consensus (6), Li et al. hypothesized that viral sepsis is a crucial process in severe COVID-19 cases (7). Accumulating evidence further links the pathology of severe COVID19, such as acute kidney injury, to sepsis (8).
INTRODUCTION
As of 31 December 2021, COVID-19, caused by infection with SARS-CoV-2, had been confirmed in more than 285 million people worldwide, with more than 5.4 million dead resulting in a case fatality ratio of 1.89%. This figure is likely to be vastly underestimated, as a proportion was not registered officially as COVID-19-related/excess deaths. The United States recorded the highest number (54,656,866) of confirmed cases. In Africa, there are 47 countries affected, with 7,065,972 cumulative cases and 155,081 deaths were recorded by 31 Dec 2021 (WHO African Region numbers at a glance). To date, the currently approved vaccines have been effective in preventing COVID-19, particularly in regards to severe symptoms (1). However, several immune escape mechanisms of SARS-CoV-2 and the rapid emergence of mutated variants (2) pose a great challenge to the efficacy of these vaccines.
Patients with severe COVID-19 tend to have a high concentration of pro-inflammatory cytokines (IL-2, IL-7, IL-10, G-CSF, TNF-a, CXCL10, MCP1, and MIP1a) (3), suggesting that a cytokine release syndrome (CRS) (4) (also loosely referred to as a cytokine storm), which is a form of life-threatening systemic inflammatory response syndrome (SIRS), can often feature in severe COVID-19 infections. Among the increased levels of inflammatory mediators in COVID-19 patients, the plasma levels of IL-6, an amplifier in the cytokine storm, are significantly elevated in non-survivors compared with survivors (5). The main cause of death of COVID-19 is due to severe acute respiratory distress syndrome (ARDS) with this high severity being dependent on the cytokine storm.
Sepsis has been defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection (6). Endothelium damage, vascular permeability, microvascular dysfunction, coagulation pathway activation, and impaired tissue oxygenation occur during sepsis and can lead to multisystem organ dysfunction (MODS), organ failure and consequently a potentially lethal outcome. As many patients with severe COVID-19 show typical clinical manifestations of septic shock, with other symptoms meeting the diagnostic criteria for sepsis and septic shock according to the Sepsis-3 International Consensus (6), Li et al. hypothesized that viral sepsis is a crucial process in severe COVID-19 cases (7). Accumulating evidence further links the pathology of severe COVID-19, such as acute kidney injury, to sepsis (8).
IMMUNOMODULATORY THERAPY OF SEVERE COVID-19
In regards to potential immunomodulatory strategies for severe COVID-19, the IL-6-STAT3 signaling pathway has been considered a promising therapeutic target for the cytokine storm generated in the disease. Tocilizumab, a specific monoclonal antibody that blocks IL-6, has been recommended for use in critically ill COVID-19 patients with extensive bilateral pulmonary lesions and with elevated serum levels of IL-6. However, anticytokine therapy with Tocilizumab did not improve survival rates despite reducing the likelihood of progression to the composite outcome of mechanical ventilation or death (9). In addition, ulinastatin, a serine protease inhibitor with anti-inflammatory properties (including inhibition of IL-6), previously used in the treatment of acute pancreatitis and sepsis, has been suggested for severe COVID-19 treatment (10); yet its clinical performance and cost-effectiveness remain to be validated in large cohort studies.
The value of glucocorticoids in mitigating the inflammatory response due to COVID-19 has been widely scrutinized. Recent reliable evidence from large-scale randomized clinical trials (RCTs) revealed that the use of dexamethasone reduced 28-day mortality but only in patients requiring respiratory support (11), while another parallel, double-blind, placebo-controlled, randomized, Phase IIb clinical trial showed that the administration of methylprednisolone was able to reduce 28-day mortality in patients aged over 60 years (12). In addition, hydroxychloroquine, a disease-modifying antirheumatic drug (DMARD), used for the treatment of rheumatoid arthritis and lupus, has been studied for its potential as an immunomodulatory therapeutic for COVID-19 disease. Evidence from 12 RCTs indicated that hydroxychloroquine has little or no effect on the risk of death, probably has no effect on progression to mechanical ventilation, and that it is less likely that the drug is effective in protecting people from infection, although this was not excluded entirely (13). Other immunomodulatory agents that have been therapeutically tested in SARS-CoV-2 infection include the interleukin-1 receptor (IL-1R) antagonist anakinra, the Janus kinase inhibitors baricitinib and ruxolitinib, the anti-C5a antibody vilobelimab, the anti-gout agent colchicine, the antirheumatic drug leflunomide, convalescent plasma, interferon beta, interferon kappa and intravenous immunoglobulins (IVIg) (14). However, robust data from further RCTs are required to elucidate their potential for the treatment of severe COVID-19.
HELMINTH CO-INFECTION AND SEVERITY OF COVID-19
The "old friends" hypothesis argues that some co-evolved microbes and other pathogens, including helminths, could help to establish appropriate immunomodulatory function and thus protect the host against a large spectrum of immune-related disorders (15). Mammals infected with helminths typically elicit an anti-inflammatory Th2 immune response, including the activation of Th2 cells and the elevation of Th2-type cytokines such as IL-4, IL-5 and IL-13 (16). This host-helminth interaction could be beneficial in dampening inflammatory damage induced by the Th1/Th17 branches of the immune system, repairing injured tissue and restoring homeostasis (17). Chronic helminthic infection suppresses both Th1 and Th2 responses by actively inducing the expansion of FOXP 3+ regulatory T cells, IL-10 producing B cells and alternatively activated macrophages (AAMs), which together promote the release of regulatory cytokines such as TGF-b and IL-10 (18).
There is controversy regarding whether helminth co-infection leads to increased susceptibility and attenuated immunopathology of other pathogens (i.e., viruses, bacteria and protozoa) or, in some circumstances, exacerbated pathology due to higher infection burdens (19). This likely also applies to the interaction between helminths and SARS-CoV-2 (20,21). It has been suggested that the immunosuppressive and regulatory T-helper response stimulated by helminths may balance the inflammatory Th1/Th17 response triggered by SARS-CoV-2 infection, potentially restricting the severity of COVID-19 disease (22,23). In contrast, a recent viewpoint article argued that COVID-19 patients co-infected with helminths may be unable to mount a quick and efficient immune response against SARS-CoV-2 in the early phase of the infection, thereby leading to increased patient morbidity and mortality (24). However, other evidence indicates that COVID-19 lethality rates are significantly lower in Sub-Saharan Africa than in the industrialized world (25). Wolday et al. (26) carried out a prospective observational cohort study to investigate whether there was a potential correlation between co-infection with intestinal parasites and the severity of COVID-19 at two sites in an endemic area of Ethiopia in Sub-Saharan Africa. The study revealed that patients co-infected with parasites had lower odds of developing severe COVID-19, with an adjusted odds ratio (aOR) of 0.23 (p < 0.0001) for all parasites, an aOR of 0.37 (p < 0.0001) for protozoa, and an aOR of 0.26 (p < 0.0001) for helminths. The authors thus concluded that co-infection with the enteric parasites Hymenolepis nana, Schistosoma mansoni and Trichuris trichiura reduced the risk of severe COVID-19 occurrence in this cohort of African patients. When stratified by species, co-infection with T. trichiura showed the lowest probability of developing severe COVID-19. In addition, of 11 cohort patients who died, all were parasite-free (26). The results of this study thus suggested that parasites, particularly the chronic disease-associated parasitic helminths, induced a Th2-prone response in the host, which modulates COVID-19 severity by restricting the hyperinflammation associated with the viral infection. Further epidemiological studies on helminth-mediated COVID-19 alleviation are, however, required to support this argument (27,28).
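For orientation, the sketch below shows how an unadjusted odds ratio and its 95% confidence interval would be computed from a simple 2×2 table; the counts are invented for illustration only, and the adjusted odds ratios reported by Wolday et al. came from a covariate-adjusted model rather than this crude calculation.

```python
# Minimal sketch of an (unadjusted) odds ratio for severe COVID-19 given
# parasite co-infection. The counts are hypothetical placeholders; the aORs
# in the cited study were obtained from a covariate-adjusted model.
import math

# 2x2 table: rows = co-infected / not co-infected, columns = severe / non-severe
a, b = 12, 188   # co-infected:     severe, non-severe
c, d = 95, 405   # not co-infected: severe, non-severe

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```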
HELMINTH-DERIVED PRODUCTS CAN ATTENUATE THE SEVERITY OF SEPSIS
The "old friends" hypothesis, together with the inverse global distribution of allergy/autoimmune diseases and helminth infections, and the proclivity for helminths to orchestrate immunomodulatory effects (typically induction of a Th2 immune response) on the host immune system stimulated the concept of developing helminth-based therapies. Robust evidence from animal model studies showed that helminth infection and helminth-derived products were able to prevent/ alleviate a variety of autoimmune and inflammatory diseases/ disorders (i.e., sepsis, type 2 diabetes, allergic asthma, rheumatoid arthritis, inflammatory bowel disease, type 1 diabetes and multiple sclerosis) (29,30).
In regard to sepsis, epidemiological studies (over the period 2006-2015) indicated a rapid increase in hospitalization and mortality rates due to severe sepsis in high-income countries (31). This report added further support to the hypothesis that the lack of helminth infections may contribute to the aetiology of sepsis (32). To date, a number of helminth-derived molecular products have resulted in improved sepsis outcomes in animal models. Several studies have investigated the role of Schistosoma japonicum cystatin (rSj-Cys) in regulating the inflammatory response in the cecal ligation and puncture (CLP)-induced mouse sepsis model (33)(34)(35). Administration of rSj-Cys to mice provided significant therapeutic effects on CLP-induced sepsis characterized by increased survival rates, alleviated overall disease severity with reduced tissue injury in the kidney, lung and liver (33) and cardiomyopathy (34). These therapeutic effects were linked to the upregulation of regulatory cytokines (IL-10 and TGF-b1) and the downregulation of pro-inflammatory cytokines (IL-1b, IL-6 and TNF-a) as measured in serum. Similarly, treatment of mice with cyclophilin A (CsCyPA) from the liver fluke, Clonorchis sinensis, provided significant therapeutic effects on CLP-induced sepsis characterized by an improved survival rate (36). Furthermore, using a murine model of septic shock, Ramos-Benitez et al. demonstrated in vitro and in vivo that Fh15, a recombinant variant of the common liver fluke Fasciola hepatica fatty acid binding protein, suppressed the LPS-induced cytokine storm, working as an antagonist of Toll-like receptor 4 (TLR4) (37). In the gram-negative bacteria-induced sepsis rhesus macaque model, Fh15 effectively suppressed bacteremia, endotoxemia, and many other inflammatory markers, emphasizing its promise as a candidate for immunomodulatory therapy against sepsis (38). In addition, the excretory-secretory products of Trichinella spiralis adult worms were also shown to be beneficial to the outcome of CLP-induced sepsis by preventing exacerbated inflammation and severe pathology in treated mice (39). These effects were associated with reduced levels of pro-inflammatory cytokines (IL-1b, IL-6 and TNF-a), upregulated levels of IL-10 and TGF-b, and decreased expression of HMGB1, TLR2 and MyD88 in the lung tissues of the treated mice (39). Albeit informative, these observations were accrued from animal sepsis models and human clinical trials now need to be undertaken to validate the results obtained.
DISCUSSION
Rapid mutations in SARS-CoV-2 challenge the efficacy of the current COVID-19 vaccines, and concerns about their long-term safety create an urgent need to search for safe and cost-effective alternatives for preventing severe COVID-19 disease. Increased efforts are required as, to date, conclusive evidence of effective immunomodulatory therapies for severe COVID-19 is scarce (14). Due to similarities in the pathological process, sepsis animal models provide the opportunity to evaluate the efficacy of novel candidates for the immunomodulatory therapy of critically ill COVID-19 patients. We argue that helminth-derived products and molecules that can potentially induce a Th2-biased immune response may provide a contributory role in preventing severe COVID-19 by restricting the cytokine storm associated with ARDS. The aforementioned helminth-derived molecules (i.e., rSj-Cys, CsCyPA and Fh15) have been shown to increase survival rates in animal models of sepsis, thereby representing potential candidates for immunomodulatory treatment against severe COVID-19. Such components should be validated for efficacy, first in the K18-hACE2 transgenic murine model of SARS-CoV-2 infection, which shares many features of severe COVID-19 infection (40), and then in clinical cohorts. The time phase in sepsis progression is regarded as a key factor for successful immunomodulatory therapy. Due to immunosuppression and immune exhaustion, treatment with immunomodulators at the late stage of severe COVID-19 could be less effective or even deleterious; consequently, the helminth-derived molecular products should be administered as a prophylactic therapy against severe COVID-19. The suppression of the antiviral response due to excessive immunotherapy may encourage viral replication and result in a delay of clearance of SARS-CoV-2, so that administration of helminth-derived immunomodulators that elicit a mild Th2-skewed immune response could be a useful strategy to prevent severe COVID-19, while maintaining the patient's ability to kill cells infected with the virus. Severe COVID-19 has greater incidence in older individuals, due in part to an increased inflammatory response in these patients (41), begging the question of whether prophylactic therapy based on helminth-derived products should primarily target, and would be more effective in, older individuals. Another unanswered question is whether a well-controlled, low-level concomitant infection with a live helminth, such as hookworm (42), can achieve an equivalent or superior effect to an immunomodulator or vaccine in preventing serious outcomes of COVID-19. It would be valuable to test such helminthic-based therapies as these may represent a safe and cost-effective anti-inflammation approach to reducing COVID-19 severity.
|
2022-02-11T14:23:42.192Z
|
2022-02-11T00:00:00.000
|
{
"year": 2022,
"sha1": "44da3a9f2b03beb4f9492781869fc74a29f04a05",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "44da3a9f2b03beb4f9492781869fc74a29f04a05",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
202869146
|
pes2o/s2orc
|
v3-fos-license
|
Purification of Monomers Leads to High-Quality Lignin Macromonomers
Purification is critical in any chemical process: the removal of impurities yields a product of better quality and higher standard. In this study, a new type of monomer was prepared by condensation polymerization of alkali lignin (AL) and methacryloyl chloride (MAC). The effects of the AL/MAC ratio and of MAC purification were investigated. The physical and chemical properties of the product obtained, lignin methacrylate (LMA), were characterized by Fourier Transform Infrared spectroscopy (FT-IR) and Nuclear Magnetic Resonance (1H-NMR). The results reveal that purification successfully removed the hydroxyl groups originating from the stabilizer in MAC, so that all remaining hydroxyl groups come from AL. The single interaction between AL and MAC improved the product obtained, which shows no residual hydroxyl groups. The 1H-NMR data also showed that purification of MAC influences the outcome: noise and impurities were eliminated after MAC purification, and a purer product was obtained. This new monomer (LMA) can be further utilized for various applications.
Introduction
Paper and pulping industries produce a vast amount of by-products during production. Lignin, being one of these by-products, is often considered waste [1][2][3][4] and lacks proper utilization. An estimated 50 million tons of lignin are generated every year [5][6], with less than 10% being satisfactorily exploited [3,7]. Inappropriate disposal of lignin into water sources gives rise to an escalation in chemical oxygen demand (COD) [8][9][10][11] as well as contributing to water pollution. The exploration of lignin is still a main topic of research even though it reportedly lacks proper applications [12][13][14]. The majority of lignin is burned as fuel. Meanwhile, some industries utilize lignin to generate steam, electricity, and heat [15][16][17]. Even though the fuel produced provides more energy than cellulose, this activity exacerbates global warming, since carbon dioxide is released into the environment, and contributes to air pollution [13]. Recycling or modifying lignin into useful products is the best way to minimize this waste and foster revenue for the industries [1].
Lignin-based macro-monomers have been published in several articles. Such macro-monomers can improve the chemical and physical properties of lignin. Recently, many researchers have reported the synthesis of macro-monomers from different types of lignin such as biobutanol lignin [37], organosolv lignin [38], and hydroxypropyl lignin [39][40]. The synthesis of a macro-monomer from lignin with methacryloyl chloride remains an attractive topic among researchers, and the protocol to produce this specific type of monomer has been highlighted [41]. However, in those recent studies the chemicals used had not undergone further purification.
In the present study, we investigated the effect of chemical purification and of the ratio of lignin to methacryloyl chloride on the lignin segments. To the best of our knowledge, this goal has not been attained before. The monomer was synthesized similarly to the previous studies, with some minor modifications [41]. The monomer produced was then characterized by Fourier Transform Infrared spectroscopy (FT-IR) and Nuclear Magnetic Resonance (1H-NMR). The results show that noise and impurities were successfully removed from the monomer produced after purification of the chemicals.
Experimental
2.1. Materials. Alkali lignin (AL), triethylamine (TEA), tetrahydrofuran (THF) and methacryloyl chloride (MAC, 97%) were supplied by Sigma-Aldrich Korea Ltd. All chemicals were utilized as received, without further purification, except that MAC was used both with and without purification for the synthesis of lignin methacrylate.
Preparation of lignin methacrylate (LMA) monomer
First, 0.1 g of lignin was dissolved in 40 mL THF. Then, 100.0 g of TEA was added under continuous stirring. Next, 100.0 g of MAC solution was added dropwise into the solution. Upon completion, the flask was immediately sealed, and the esterification process was performed at room temperature for 24 h. Then, the resultant TEA salt precipitate was separated by filtration, and the remaining THF solution was concentrated by evaporation. The resultant residue was dissolved in chloroform and an aqueous solution of sodium hydrogen carbonate. After that, water was used to rinse the remaining salts and methacrylic acid from the synthesized product.
Finally, the product was dried over magnesium sulfate and evaporated under vacuum for 24 h. The resulting product was labeled as LMA monomer. Two different mass ratios of lignin and MAC were studied in this experiment, namely 1:100 and 1:200. In addition, the effect of MAC purification on the lignin framework was also investigated.
Physico-chemical characterisation
Fourier Transform Infrared (FT-IR) measurements for all samples were recorded in the range between 4000 and 400 cm-1 on a spectrometer (Spectrum GX, Perkin Elmer). During preparation, 1 mg of sample was mixed with 100 mg of KBr and pressed into a pellet. The FT-IR spectra were recorded in transmission mode, and the analysis was performed at a resolution of 2.0 cm-1. The nuclear magnetic resonance (1H-NMR) spectra for LMA were recorded on a nuclear magnetic resonance spectrometer (Varian NMR Systems 500 MHz) using DMSO-d6 as the solvent. For the 1H-NMR, samples of LMA (100 mg) were dissolved in 0.5 ml of DMSO-d6 for analysis purposes.
Results and discussion
3.1. Characteristics of purified and unpurified methacryloyl chloride (MAC) The interaction of MAC and lignin was investigated through studies of the purification effect of MAC. The reaction scheme of LMA synthesis is shown in Fig. 1 (reaction scheme for the synthesis of lignin methacrylate). Fig. 2 shows the FT-IR spectra of the standard and purified MAC. In the 3200-3500 cm-1 region, the spectrum of conventional MAC (a) has an absorption peak at 3460 cm-1, which is assigned to the OH stretching band. Meanwhile, the spectrum of purified MAC (b) shows that this hydroxyl stretching band has disappeared. New peaks appearing in this region at 3552 cm-1, 3479 cm-1 and 3420 cm-1 were assigned to the methyl group (-CH3) of the MAC segments. We also noted that the peak at 1617 cm-1, the bending mode of the carbonyl group of the MAC segments, became stronger and more apparent compared with the FT-IR spectrum in (a). This evidence implies that the OH group of the monomethyl ether hydroquinone stabilizer was successfully removed from MAC by the distillation process. After purification, all remaining hydroxyl groups therefore originate from AL and not from MAC. The 1H-NMR spectra for purified MAC in DMSO-d6 are shown in Fig. 3.
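To illustrate the kind of band-assignment comparison performed here, the following minimal Python sketch checks a list of measured FT-IR peak positions against the reference assignments discussed in the text within a tolerance. The assignment table follows the wavenumbers quoted above, but the example peak values, function and variable names are illustrative assumptions and not part of the original analysis.

# Illustrative sketch: assign measured FT-IR bands to functional groups
# using the band positions discussed in the text (wavenumbers in cm^-1).
ASSIGNMENTS = {
    3460: "O-H stretch (stabilizer in unpurified MAC)",
    3552: "assigned to -CH3 of MAC segments",
    3479: "assigned to -CH3 of MAC segments",
    3420: "assigned to -CH3 of MAC segments",
    1617: "C=O bending mode of MAC segments",
}

def assign_bands(measured_peaks, tolerance=10.0):
    """Match measured peak positions to the reference table within a tolerance."""
    results = []
    for peak in measured_peaks:
        match = None
        for ref, label in ASSIGNMENTS.items():
            if abs(peak - ref) <= tolerance:
                match = (ref, label)
                break
        results.append((peak, match))
    return results

# Example: peaks picked from a purified-MAC spectrum (no O-H band expected).
for peak, match in assign_bands([3553.2, 3478.5, 3419.0, 1616.4]):
    print(peak, "->", match[1] if match else "unassigned")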
Characteristics of lignin methacrylate (LMA) by FT-IR spectroscopy
FT-IR spectra of lignin methacrylate prepared at a mass ratio of 1:100 and 1:200 between lignin and MAC were compared and used to confirm the presence of the MAC segments in the lignin framework.
As shown in Figs. 4 and 5, IR bands appeared at 1725 cm-1 and 1637 cm-1. These bands are assigned to the C=O stretch originating from the MAC segments and to the -C=C- stretch of a vinyl group, respectively. The C=O group from MAC was thus found to be present in the LMA framework. The vinyl stretch and the ester group were also observed at 1637 cm-1 and 1298 cm-1, respectively. The intensity of the vinyl band at 3235 cm-1 increased, while the band at 3500 cm-1 for the hydroxyl group could not be detected in either LMA sample, which indicates the occurrence of interaction of MAC within the lignin structure. These results indicate that LMA was successfully synthesized by the esterification process. The LMA samples prepared at mass ratios of 1:100 and 1:200 with purified and unpurified MAC were compared by FT-IR analysis. Fig. 6 shows the peak at around 1290 cm-1 representing the ester group formed as a product of the reaction between MAC and lignin; the height of this peak becomes apparent in the FT-IR spectra after the purification process. The formation of the LMA monomer was verified by the absence of the characteristic hydroxyl signals and the appearance of double-bond signals in the IR spectra. In addition, two absorption bands were observed in the LMA monomer, one at 953 cm-1 (C-H bending of a vinyl group) and another at 1637 cm-1 (C=C stretching vibration). These two bands verify the formation of lignin methacrylate from the primary reaction between lignin and MAC.
Characteristics of lignin methacrylate by 1H-NMR spectroscopy
The interaction behavior between MAC and the lignin structure was analyzed using NMR spectroscopy. Fig. 7 presents a comparison of the 1H-NMR spectra for lignin methacrylate at the different MAC ratios, with and without purification. Fig. 7 a (ii) and b (ii) show that the signals at 6.24 and 6.02 ppm correspond to the vinyl protons in the lignin methacrylate. However, the intensity of the vinyl proton peak in spectra a (i) and b (i) is lower compared to that of lignin methacrylate prepared with purified MAC. The peak at 1.93 ppm for both ratios of lignin methacrylate shows the presence of protons in the aromatic group from the lignin segments. Meanwhile, for the lignin methacrylate prepared without purification of MAC, impurities can be detected in the region of 1.93 ppm, as shown in spectra a (i) and b (i). The signals present at 6.02 ppm (1H), 6.24 ppm (2H), 1.93 ppm (3H, -CH3), 2.51 ppm (2H, -CH2) and 3.59-3.58 ppm (1H, -C-O-CH) prove the formation of lignin methacrylate (LMA).
|
2019-09-17T01:08:06.138Z
|
2019-08-27T00:00:00.000
|
{
"year": 2019,
"sha1": "49e91ec709d7e8021de6268c2da4fe0277e57be6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/548/1/012021",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "39445c8626bff4e910c8df34f1a8c882815077a7",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
}
|
16459954
|
pes2o/s2orc
|
v3-fos-license
|
MicroRNA-21 regulates prostaglandin E2 signaling pathway by targeting 15-hydroxyprostaglandin dehydrogenase in tongue squamous cell carcinoma
Background Oral tongue squamous cell carcinoma (OTSCC) is one of the most aggressive forms of head and neck/oral cancer (HNOC), and is a complex disease with extensive genetic and epigenetic defects, including microRNA deregulation. Identifying the deregulation of microRNA-mRNA regulatory modules (MRMs) is crucial for understanding the role of microRNA in OTSCC. Methods A comprehensive bioinformatics analysis was performed to identify MRMs in HNOC by examining the correlation among differentially expressed microRNA and mRNA profiling datasets and integrating with 12 different sequence-based microRNA target prediction algorithms. Confirmation experiments were performed to further assess the correlation among MRMs using OTSCC patient samples and HNOC cell lines. Functional analyses were performed to validate one of the identified MRMs: the miR-21-15-Hydroxyprostaglandin Dehydrogenase (HPGD) regulatory module. Results Our bioinformatics analysis revealed 53 MRMs that are deregulated in HNOC. Four high confidence MRMs were further defined by confirmation experiments using OTSCC patient samples and HNOC cell lines, including the miR-21-HPGD regulatory module. HPGD is a known anti-tumorigenic effector, and it regulates the tumorigenic actions of Prostaglandin E2 (PGE2) by converting PGE2 to its biologically inactive metabolite. Ectopic transfection of miR-21 reduced the expression of HPGD in OTSCC cell lines, and the direct targeting of miR-21 to the HPGD mRNA was confirmed using a luciferase reporter gene assay. The PGE2-mediated upregulation of miR-21 was also confirmed, which suggested the existence of a positive feed-forward loop involving miR-21, HPGD and PGE2 in OTSCC cells that contributes to tumorigenesis. Conclusions We identified a number of high-confidence MRMs in OTSCC, including the miR-21-HPGD regulatory module, which may play an important role in the miR-21-HPGD-PGE2 feed-forward loop that contributes to tumorigenesis. Electronic supplementary material The online version of this article (doi:10.1186/s12885-016-2716-0) contains supplementary material, which is available to authorized users.
Background
Head and neck/oral cancer (HNOC) is a commonly encountered malignancy. Head and neck squamous cell carcinoma (HNSCC), which arises from the epithelium lining of this region, makes up the majority (over 90 %) of HNOC. Oral tongue squamous cell carcinoma (OTSCC) is one of the most aggressive forms of HNSCC; it exhibits a propensity for rapid local invasion and spread [1] and has a distinct nodal metastasis pattern [2,3]. OTSCC patients also suffer from a high recurrence rate [4]. OTSCC is a complex disease with extensive genetic and epigenetic defects, including microRNA deregulation. MicroRNAs are pivotal regulators of physiological and disease processes through their control of diverse cellular processes. Several microRNAs have been functionally classified as oncogenes or tumor suppressors, and the aberrant expression of microRNA has been observed in almost all cancer types including OTSCC [5][6][7][8]. Deregulation of these cancer-associated microRNAs can significantly impact tumor initiation and progression by activating pathways promoting uncontrolled proliferation, favoring survival, inhibiting differentiation, and promoting invasion [9,10]. MicroRNAs are not directly involved in protein coding, but are able to control the expression of their target genes at the post-transcriptional level by facilitating mRNA degradation and/or repressing translation. As such, the identification and detection of functional microRNA-mRNA regulatory modules (MRMs) are crucial for understanding microRNA functions.
MicroRNAs are a class of small non-coding RNAs of approximately 22 nucleotides in length that are endogenously expressed in mammalian cells. They are related to, but distinct from, siRNAs. A key difference between siRNA and microRNA is that siRNA requires almost complete complementarity to its targeting sequence to exert its silencing function, whereas microRNA usually binds to its target genes through partial complementarity. While numerous sequence-based bioinformatics methods for microRNA target prediction have been developed, these methods often lead to high false discovery rates [11]. In order to minimize false positives and to detect the functional microRNA targets under a specific biological condition, recent approaches often integrate microRNA and mRNA profiling analysis in conjunction with sequence-based target prediction. Two types of experiments are common: 1) a differential mRNA profiling experiment on a microRNA-transfected cell line and its negative control, and 2) simultaneous microRNA and mRNA profiling analysis on samples of different phenotypes (e.g., normal vs. tumor). The first approach has been used by many groups, including us, to define the functional microRNA targets when a specific microRNA is over- or under-expressed [12][13][14]. The second approach aims to discover microRNAs with altered expression related to different phenotypes and to uncover their target mRNAs. This approach is based on the simple principle that inverse relationships in their expression profiles should hold between a specific microRNA and its functional target genes. When integrated with sequence-based bioinformatics target prediction, this approach is believed to lead to the identification of high-confidence microRNA targets.
Our group and several others have recently undertaken extensive RNA-based surveys to identify gene expression and microRNA abnormalities in OTSCC. In this study, we utilized our existing transcription profiling dataset [15] and a meta-analysis of 13 published microRNA profiling studies [16], and integrated them with a collection of 12 sequence-based bioinformatics tools to define the deregulation of functional MRMs in OTSCC. We then evaluated these MRMs in 2 OTSCC patient cohorts and a panel of HNSCC cell lines. With our comprehensive approach, we identified a panel of high-confidence microRNA-mRNA regulatory modules in OTSCC, including the miR-21-15-Hydroxyprostaglandin Dehydrogenase (HPGD) regulatory module. We also confirmed the positive feed-forward loop involving miR-21, HPGD and Prostaglandin E2 (PGE2) in HNOC cells that contributes to tumorigenesis.
MicroRNA target prediction
The microRNA target prediction was performed using the comparative analysis function of miRWalk [17], which contains a collection of 10 bioinformatics tools, including DIANAmT, miRanda, miRDB, miRWalk, RNAhybrid, PicTar (4-way), PicTar (5-way), PITA, RNA22, and TargetScan5.1. In addition, MicroCosm 5.0 and TargetScanHuman 6.2 were also used for predicting the microRNA targets. For our study, genes that were predicted by at least one method were defined as candidate microRNA targets. The base-pairing and the minimum free energy (mfe) for the binding of microRNA to its targeting sequences were predicted using the RNAhybrid program [18].
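To make the multi-tool voting logic concrete, the short Python sketch below counts, for each candidate gene, how many prediction tools list it as a target of a given microRNA and applies the thresholds used in this study (at least 1 tool for candidate targets; at least 3 tools for the tables in the Results). The example entries and function names are illustrative assumptions and not the scripts actually used here.

# Illustrative sketch: combine target predictions from several tools by voting.
# 'predictions' maps each tool to the set of genes it predicts as targets of
# one microRNA (hypothetical example entries; remaining tools omitted).
predictions = {
    "DIANAmT": {"HPGD", "HLF", "PDCD4"},
    "miRanda": {"HPGD", "GPD1L"},
    "microCosm": {"HPGD"},
    "miRDB": {"HPGD", "HLF"},
    "TargetScan5.1": {"HPGD", "HLF"},
}

def vote_counts(predictions):
    """Return {gene: number of tools predicting it as a target}."""
    counts = {}
    for genes in predictions.values():
        for gene in genes:
            counts[gene] = counts.get(gene, 0) + 1
    return counts

counts = vote_counts(predictions)
candidate_targets = {g for g, n in counts.items() if n >= 1}
high_confidence = {g for g, n in counts.items() if n >= 3}  # threshold used for Tables 1 and 2
print(sorted(high_confidence))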
Clinical samples from OTSCC patients
We downloaded the RNASeq and miRNASeq profiling datasets on 12 OTSCC and paired normal tissue samples from The Cancer Genome Atlas (TCGA) Data Portal [tcga-data.nci.nih.gov]. The gene expression values were extracted as normalized counts, and the microRNA levels were extracted as reads per million miRNA mapped from the datasets. The demographics of the patients were as follows: 6 male, 6 female, average age = 62 (range: 36-88); 1 stage T1 case, 5 stage T2 cases, 3 stage T3 cases and 3 stage T4 cases. Oral cytology samples were obtained from 13 patients with pathologically characterized primary OSCC of the tongue before tumor resection (including 6 stage T1 cases, 6 stage T2 cases and 1 stage T3 case) as previously described [29,30]. These procedures are in compliance with the Helsinki Declaration and were approved by the Ethical Committee of the First Affiliated Hospital, Sun Yat-Sen University (reference number: 2014-C-001). Informed consent was obtained from participants. Patients were excluded if there was a history of lung carcinoma or of HNSCC elsewhere that might represent metastatic disease. The demographics of these patients were as follows: 8 male, 5 female, average age = 51.8 (range: 32-78). Total RNA was isolated using the miRNeasy Mini kit (Qiagen) and quantified by a spectrophotometer or the RiboGreen RNA Quantitation Reagent (Molecular Probes).
Quantitative RT-PCR Analysis
The relative microRNA levels were determined by TaqMan microRNA assays (Applied Biosystems) as previously described [16,31]. The relative mRNA levels were determined by a quantitative two-step RT-PCR assay with pre-designed gene-specific primer sets (Origene) as described before [16,31]. The relative microRNA and mRNA levels were computed using the 2^(-ΔΔCt) analysis method, where U6 and beta-actin were used as internal controls, respectively.
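For readers unfamiliar with this calculation, the following minimal Python sketch shows the arithmetic of the 2^(-ΔΔCt) method under the usual assumptions (an internal reference gene such as U6 or beta-actin and a calibrator/control sample). The Ct values and function names are hypothetical and are only meant to illustrate the formula.

# Illustrative 2^(-ΔΔCt) relative quantification (hypothetical Ct values).
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Return the fold change of the target relative to the calibrator,
    normalized to the internal reference gene (e.g. U6 or beta-actin)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Example: a microRNA in a tumor sample vs paired normal tissue, U6 as control.
fold_change = relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                                  ct_target_calibrator=27.3, ct_ref_calibrator=18.2)
print(round(fold_change, 2))  # values > 1 indicate up-regulation relative to the calibrator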
Western-blot analysis
Western blots were performed as described previously [16] using antibodies specific for HPGD (Cayman Chemical) and beta-actin (Sigma-Aldrich) and an immuno-star HRP substrate Kit (Bio-RAD).
Fluorescent immunocytochemical analysis
Immunofluorescence analysis was performed as previously described [16]. In brief, cells were cultured on 8 chamber polypropylene vessel tissue culture treated glass slides (Millipore) fixed with cold methanol, permeabilized with 0.5 % Triton X-100/PBS, and blocked with 1% BSA in PBS. The slides were incubated with primary antibodies against HPGD (1:500, Cayman Chemical). The slides were then incubated with a FITC-conjugated anti-rabbit IgG antibody (1:50, Santa Cruz). The slides were mounted with ProLong Gold antifade reagent containing DAPI (Invitrogen) following the manufacturer's protocol. The slides were then examined with a fluorescence microscope (Carl Zeiss).
Statistical analysis
Data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 17.0. Student's t-test was used to compare differences between groups. Pearson's correlation coefficient was computed to examine the relationship between the expression of microRNAs and their target genes. For all analyses, p < 0.05 was considered statistically significant.
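The core of the MRM screening is the test for an inverse microRNA-mRNA relationship. The snippet below sketches how such a check could be done with SciPy for one hypothetical pair of expression vectors, mirroring the Pearson correlation and t-test described above; the numbers are invented for illustration and SciPy stands in for the SPSS analysis actually used.

# Illustrative check of an inverse microRNA-mRNA relationship (hypothetical data).
from scipy import stats

# Expression of one microRNA and one candidate target gene across the same samples.
mirna_levels = [8.1, 7.9, 9.0, 8.7, 3.2, 2.9, 3.5, 3.1]   # e.g. tumor then normal samples
mrna_levels  = [2.0, 2.4, 1.8, 2.1, 6.5, 7.0, 6.2, 6.8]

r, p_corr = stats.pearsonr(mirna_levels, mrna_levels)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3g}")   # a negative r is expected for an MRM

# Two-sample t-test comparing, e.g., tumor vs normal expression of the mRNA.
tumor, normal = mrna_levels[:4], mrna_levels[4:]
t, p_ttest = stats.ttest_ind(tumor, normal)
print(f"t = {t:.2f}, p = {p_ttest:.3g}")  # p < 0.05 taken as statistically significant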
Results
We first developed a list of putative microRNA-mRNA regulatory modules (MRMs) based on the simple principle that inverse relationships should be anticipated in the expression of a specific microRNA and its functional target gene (mRNA). We used a total of 97 differentially expressed coding genes (44 up-regulated and 53 down-regulated mRNAs, see Additional file 1: Table S1A and S1B, respectively) and 9 differentially expressed microRNAs (5 up-regulated and 4 down-regulated microRNAs, see Additional file 1: Table S1C) from our previous genomic profiling studies on OTSCC [15,16] for the development of this putative MRM list. This putative MRM list consists of 265 putative MRMs defined by microRNA up-regulation and mRNA down-regulation, and 176 putative MRMs defined by microRNA down-regulation and mRNA up-regulation. We then tested these putative MRMs using a panel of 12 different sequence-based microRNA target prediction algorithms (DIANAmT, miRanda, microCosm, miRDB, miRWalk, RNAhybrid, PicTar (4-way), PicTar (5-way), PITA, RNA22, TargetScan5.1, and TargetScanHuman6.2) to refine our putative MRM list. A total of 132 candidate MRMs were identified (predicted as a microRNA target by at least 1 bioinformatics algorithm, see Additional file 2: Table S2A and Additional file 3: Table S2B). As shown in Table 1, 38 potential MRMs were predicted by at least 3 bioinformatics target prediction algorithms, where the up-regulation of the microRNA contributes to the down-regulation of the mRNA, and 15 potential MRMs were predicted by at least 3 bioinformatics target prediction algorithms (Table 2), where down-regulation of the microRNA contributes to the up-regulation of the mRNA.

[Footnotes to Tables 1 and 2: (a) the putative MRMs were constructed based on microRNA and mRNA expression profiles of OTSCC, as previously reported in [16] and [15], respectively; (b) differential expression of microRNAs and mRNAs was validated using the dataset on 12 OTSCC and paired normal tissue samples extracted from TCGA, with statistically significant genes identified in bold font; (c) candidate targets of each microRNA were predicted using the collection of 12 bioinformatics tools, and the number of tools predicting each gene is presented, with gene/microRNA pairs predicted by at least 3 tools listed; (d) correlations of microRNA and mRNA levels were assessed using the TCGA dataset of 12 paired OTSCC and normal controls, where an inverted correlation (negative Pearson r value) is expected for an MRM and the p value was calculated; (e) correlations of microRNA and mRNA levels were assessed by quantitative real-time PCR in 13 HNSCC cell lines; (f) correlations of 4 pairs of microRNA and mRNA levels were assessed by quantitative real-time PCR in 13 OTSCC patient oral cytology samples.]
The differential expression of microRNAs and coding genes (mRNAs) involved in these 53 potential MRMs (9 microRNAs and 34 mRNAs) was then validated using the dataset on 12 OTSCC and paired normal tissues (extracted from the TCGA Data Portal). As shown in Additional file 4: Table S3, statistically significant differential expression was observed for 8 out of 9 microRNAs and 23 out of 34 mRNAs tested in the validation OTSCC cohort.
To further evaluate these potential MRMs, we examined the correlative relationship between the microRNA levels and the expression of their target genes in these 12 OTSCC and 12 paired normal tissues (extracted from the TCGA Data Portal), as well as in 13 HNSCC cell lines (Table 1). The inverse correlations for miR-21-HLF and miR-21-HPGD were also statistically significant in the HNSCC cell lines.
We further explored the interaction of miR-21 and HPGD in our study. As shown in Fig. 2a, ectopic transfection of a miR-21 mimic into UM1, UM2, SCC9 and Tca8113 cells led to a statistically significant reduction in the HPGD mRNA level as compared to cells treated with a control mimic. miR-21 had no apparent effect on HPGD expression in HeLa cells. As shown in Fig. 2b and c, ectopic transfection of the miR-21 mimic into UM1 cells led to reduced HPGD expression at the protein level and reduced immunostaining of HPGD, respectively, as compared to cells treated with the control mimic. As shown in Fig. 2d, ectopic transfection of the miR-21 mimic also enhanced the proliferation of OTSCC cells, which is consistent with previous observations [10,32] and confirms the oncogenic effect of miR-21.
Bioinformatics analysis revealed that there are three miR-21 targeting sites located in the 3′-UTR of the HPGD mRNA (E1 at position 2652 to 2671, E2 at position 2880 to 2901, E3 at 2890 to 2911), and targeting sites E2 and E3 partially overlap (Fig. 3a). The predicted minimum free energies (mfe) for the binding of these sites to miR-21 are −17.6, −11.4 and −16.5 kcal/mol, respectively. To test whether miR-21 directly interacts with these predicted targeting sites in the HPGD mRNA, dual luciferase reporter assays were performed using constructs containing these targeting sites (Fig. 3b). When cells were transfected with miR-21, the luciferase activity of the construct containing targeting site E1 (pGL-E1) was significantly reduced as compared to cells transfected with the negative control. When the seed region of this targeting site was mutated (pGL-E1m), the effect of miR-21 on the luciferase activity was abolished. For sites E2 and E3, when cells were transfected with miR-21, the luciferase activity of the construct containing both targeting sites E2 and E3 (pGL-E2E3) was not changed as compared to cells transfected with the negative control. Interestingly, when the seed region of E2 was mutated (pGL-E2mE3), the miR-21-mediated down-regulation of the luciferase activity was observed. MiR-21 had no effect on constructs with the E3 mutation (pGL-E2E3m) or mutations of both E2 and E3 (pGL-E2mE3m). [Figure caption (correlation panels): the correlations of the miR-21 level with the expression of GPD1L (a, b, c), HLF (d, e, f) and HPGD (g, h, i), and the correlation of the miR-130b level with the expression of MGLL (j, k, l), were assessed, and the Pearson's correlation coefficient (r) was calculated.]
As shown in Fig. 4a and b, both siRNA-mediated knockdown of COX2 and treatment with the COX2 inhibitor celecoxib led to down-regulation of miR-21 in UM1 cells. As shown in Fig. 4c, directly applying PGE2 to UM1 cells led to up-regulation of miR-21, and knockdown of HPGD (Fig. 4d) also led to up-regulation of miR-21. As anticipated, treating cells with PGE2 and celecoxib led to up-regulation and down-regulation of cell proliferation, respectively, which is consistent with previous observations [33,34] (Fig. 4e). These results are in agreement with the observations made by Lu et al. in cholangiocarcinoma [35], which confirm the PGE2-mediated miR-21 up-regulation in OTSCC and suggest a PGE2-miR-21-HPGD positive feed-forward loop that contributes to tumorigenesis (Fig. 4f).
Discussion
Despite the significant increase in the number of experimentally validated microRNA-mRNA regulatory relationships, the majority of microRNA-targeted genes remain unknown. MicroRNA usually binds to its target genes through partial complementarity. While numerous sequence-based bioinformatics methods for microRNA target prediction have been developed, these methods often lead to high false discovery rates [11]. However, the integration of these bioinformatics tools with mRNA/microRNA differential expression profiles often leads to the identification of high-confidence microRNA-mRNA regulatory modules. In this study, we carried out this integrated analysis to identify MRMs in two steps. First, based on the simple principle that inverse relationships should be anticipated in the expression of a specific microRNA and its functional target genes, we developed a list of putative microRNA-mRNA regulatory modules by linking each microRNA with all inversely regulated mRNAs based on the results of our previous mRNA and microRNA profiling studies on OTSCC [15,16]. The second step was to test these putative MRMs bioinformatically using sequence-based microRNA target prediction algorithms. Since there are many available sequence-based microRNA target prediction tools, and each of these tools utilizes a different model to define targeting sequences that are associated with functionality, the predictions differ when applied to the same microRNAs, with each method having different levels of coverage and false positive prediction [11]. In order to reduce potential false positives, we used a voting scheme to combine the predictions from the 12 commonly used bioinformatics tools, including DIANAmT, miRanda, microCosm, miRDB, miRWalk, RNAhybrid, PicTar (4-way), PicTar (5-way), PITA, RNA22, TargetScan5.1, and TargetScanHuman6.2. With this integrated approach, we developed a list of 53 potential MRMs that are differentially expressed in OTSCC.
Since a microRNA regulates its target gene mainly at the post-transcriptional level, an inverse correlation between the levels of a microRNA and its target mRNA is expected. Deregulation of miR-21 and miR-130b, as well as deregulation of GPD1L, HLF, HPGD and MGLL, has been reported either in HNOC or in other cancer types [15,16,[36][37][38][39][40][41][42], and these MRMs are of significant functional relevance in OTSCC. GPD1L has glycerol-3-phosphate dehydrogenase enzyme activity and is a regulator of HIF-1α stability [40], and a recent study showed that GPD1L expression is a strong predictor of local recurrence and survival in HNSCC [39]. HLF belongs to the PAR (proline and acidic amino acid-rich) subfamily of bZIP transcription factors [43,44] and plays a role in development and circadian rhythm regulation in mammals. HLF fusion proteins resulting from chromosomal translocation (e.g., E2A-HLF) are often linked to leukemia. However, the role of HLF in OTSCC is not entirely clear. MGLL is involved in Prostaglandin E2 (PGE2) production in response to inflammation and infection, which leads to fever [45]. Arachidonic acid (AA), a precursor for PGE2, is typically liberated from AA-containing phospholipids by the action of phospholipases A2 (PLA2s). MGLL is a monoacylglycerol lipase which hydrolyzes 2-arachidonoylglycerol (2-AG), an endocannabinoid that functions in the central nervous system, to AA and glycerol, representing an alternative AA-producing pathway. MGLL may also play a role in certain types of cancer by regulating both endocannabinoid and fatty acid pathways, and by supporting protumorigenic metabolism [46]. This appears to contradict the apparent down-regulation of MGLL observed in OTSCC [15] and the miR-130b-MGLL regulatory module predicted here. Nonetheless, whether MGLL plays a role in OTSCC and, if so, by what mechanism are questions that remain unanswered. HPGD is a known anti-tumorigenic effector; it regulates the tumorigenic actions of Prostaglandin E2 (PGE2) by converting PGE2 to its biologically inactive metabolite, and down-regulation of HPGD has been observed in many human cancer types [47][48][49][50][51][52][53]. Since miR-21 is one of the most consistently observed up-regulated microRNAs in OTSCC [16,54], the miR-21-HPGD regulatory module may represent a critical mechanism of regulating PGE2 signaling. [Fig. 3 caption: (a) the base-pairing and the minimum free energy (mfe) for the binding of miR-21 to the targeting sequences were predicted using the RNAhybrid program [18]; (b) dual luciferase reporter assays were performed to test the interaction of miR-21 and its targeting sequences in the HPGD mRNA using constructs containing the predicted targeting sequences (pGL-E1 and pGL-E2E3) and mutated targeting sequences (pGL-E1m, pGL-E2mE3, pGL-E2E3m, pGL-E2mE3m) cloned into the 3′-UTR of the reporter gene; data represent at least 3 independent experiments with similar results; *: p < 0.05.]
Our functional study confirmed the effect of miR-21 on the HPGD expression level and the direct interaction of miR-21 with the HPGD mRNA in OTSCC cells. We identified three miR-21 targeting sites located in the 3′-UTR of the HPGD mRNA, including a previously reported site (E1) [35] and two partially overlapping sites (E2 and E3). While we confirmed the miR-21-mediated and E1 site-dependent target gene down-regulation, the E2 and E3 sites appear to have no effect. This may be because targeting sites E2 and E3 partially overlap, which may interfere with proper interaction with the RISC complex. The elimination of E2 may partially restore the capability of E3 (which has the stronger binding affinity of the two sites) to bind to the RISC complex. This is different from our previous observation where miR-138 was able to interact with multiple overlapping target sites on the FOSL1 mRNA [55]. Additional studies are needed to explore this mutually exclusive phenomenon among multiple targeting sites. The HPGD gene has 6 known transcript variants (NCBI accession: NM_000860, NM_001145816, NM_001256301, NM_001256305, NM_001256306, NM_001256307), and all 6 variants have the same 3′-UTR. As such, the interaction between miR-21 and the HPGD mRNA is not likely to be affected by alternative splicing. Interestingly, we did not observe any miR-21 effect on HPGD expression in HeLa cells (a cell line that originated from a cervical cancer case). It is possible that this apparent difference in the miR-21 effect on HPGD expression is due to differences in cancer types. It is worth noting that the effect of miR-21 on HPGD expression has also been observed in another cancer type [35]. Alternatively, this difference may be cell-line specific: HeLa cells (or the OTSCC cell lines used here) may have specific mutation(s) that dictate the miR-21 effects on HPGD. More in-depth functional analysis will be needed to fully evaluate the miR-21-HPGD regulatory module in different cancer types and in other biological systems.
The levels of COX2 and its catalytic product PGE2 are increased in a variety of malignancies, including HNOC [56][57][58][59]. The tumorigenic actions of PGE2 are attributable to its modulation of cell proliferation, survival, migration, and invasion. The level of PGE2 is controlled by the balance of PGE2 synthesis and degradation. Whereas the cyclooxygenases (COX1 and COX2) are rate-limiting key enzymes that control PGE2 biosynthesis, HPGD is a key enzyme that converts PGE2 to its biologically inactive metabolite, 13,14-dihydro-15-keto-PGE2, thus leading to PGE2 inactivation [60,61]. Consistent with the anti-tumorigenic effect of HPGD, the down-regulation of HPGD has been observed in many human cancer types [47][48][49][50][51][52][53]. Lu et al. first reported the PGE2-mediated up-regulation of miR-21 in cholangiocarcinoma and suggested a positive feed-forward loop that involves PGE2, miR-21 and HPGD [35]. Our results are consistent with these previous observations and confirm the existence of a PGE2-miR-21-HPGD positive feed-forward loop in OTSCC that contributes to tumorigenesis (Fig. 4f).
|
2017-08-03T01:45:57.271Z
|
2016-08-25T00:00:00.000
|
{
"year": 2016,
"sha1": "1e66f7044705b9aa5951ada8e035043b8ab427a5",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-016-2716-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f8118d13ebd4b577d6edac9bf69e4aa629630e1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
237385458
|
pes2o/s2orc
|
v3-fos-license
|
Knowledge and Attitude of Recent Dental Graduates towards Smart/Bioactive Dental Composites
Aims: The purpose of this study was to explore the knowledge and attitude of dental interns vs newly graduated dentists (NGD) towards recently introduced smart/bioactive dental resin composites (SDCs). Study Design: Cross-sectional study. Place and Duration of Study: Restorative Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia, from March 2018 to June 2018. Methodology: An online questionnaire was distributed among randomly chosen interns and NGDs in multiple countries (n=164). The questionnaire was formulated using Google Forms. Statistical analysis of the results was conducted using percentages and frequencies, compared using the Chi-square test (P<0.05). Results: 59.15% of all respondents expressed interest in pursuing restorative dentistry as a specialty. A statistically significantly lower proportion of NGDs compared to interns reported a good understanding of dental restorative materials (P=.022), hearing of SDCs (P=.033), or ever using SDCs (P=.006). The proportion of NGDs (50%) who thought that SDCs effectively regenerate dental hard tissue was significantly (P=.016) lower than that of interns (70.5%). 97.6% of NGDs and 96.7% of interns reported willingness to attend a training workshop and/or a continuing education course about SDCs. Conclusion: Interns showed a significantly more positive attitude towards SDC use. Knowledge and effective use of SDCs by interns and NGDs could be improved by more training such as that offered in workshops/courses.
INTRODUCTION
According to the WHO, dental caries remains one of the most prevalent diseases around the world [1]. The carious process is a continuous process of multiple demineralization and remineralization cycles, which lead to localized dissolution of the tooth structure's mineral phase when the equilibrium between the two processes is disturbed [2]. Dental resin composites are among the most commonly used materials to restore carious lesions in patients' mouths. This is due to both the enhanced esthetic and physical properties of dental resin composites [3]. Despite their popularity, most of the currently available dental resin composites simply restore anatomical form and function to the defects created in the teeth without any further therapeutic value [4]. An estimated 40% of dental resin composite fillings need to be replaced within approximately 10 years of placement [5]. In theory, the longevity of restorations would be improved if restorative materials could remineralize tooth structure. Using a restorative material with the ability to eliminate the risk of further demineralization would help prevent secondary decay in restorations, which remains one of the main causes of restorative failure [6,7].
Dentists globally are moving towards the utilization of remineralization techniques and conservative restorative treatments [8,9]. The remineralization process simply involves the dissemination of both calcium and phosphate from multiple sources (saliva as well as topical products), accelerated in the presence of fluoride, to create a hyper-mineralized, acid-resistant coating on existing crystal remnants [2]. Bioactive dental resin composite materials, which release ions to neutralize biofilm bacterial acids as well as to remineralize tooth structure, are a relatively recent addition to the dental marketplace [10]. These materials are also known as "smart materials" as they are responsive to their external environment.
According to McCabe et al. [11], smart materials can be defined as "materials with properties which may be altered in a controlled fashion by stimuli, such as stress, temperature, moisture, pH, electric or magnetic fields". Based on the above definitions, the terms 'smart' and 'bioactive' may be used interchangeably in relation to dental resin composites. Thus, these materials will be referred to as smart/bioactive dental resin composites (SDCs) in this work.
Several studies investigated the effectiveness of dental bioactive restorative materials and found them clinically effective [4,12,13]. They are used in conservative dentistry to regenerate, repair and reconstruct dental hard tissue by inducing the formation of hydroxyapatite, thus making these materials more therapeutic than their predecessors. There are several SDCs currently available in the market. Examples include: Cention N (Ivoclar Vivodent), Biodentine Ⓡ (Septodent), Beautifil (Shofu), and Activa TM (Pulpdent).
To optimize the effect of such materials and provide patients with a potentially better outcome, clinicians must use them effectively in their practices. Materials are only fully utilized in practice when clinicians understand their properties, advantages, and disadvantages, thus becoming comfortable using them. As SDCs are a fairly recently introduced material, there is limited information in the literature regarding its use and teaching in undergraduate dental programs worldwide [14]. This study aims to look at the attitude of newly graduated dentists (NGDs) about SDCs, thus giving an impression regarding the likelihood of their usage in clinical practice.
MATERIALS AND METHODS
Ethical approval was obtained from King Abdulaziz university's research ethics committee prior to commencement of the work. The ethical approval number is 060-02-19. This cross-sectional study was conducted using an online questionnaire that was slightly modified from a validated questionnaire used in a previously published study [15]. The questions were modified to target knowledge, attitude and opinion of dental practitioners regarding SDCs.
The questionnaire was formulated into a Google forms survey consisting of a total of 19 questions; 15 of which were in multiple choice question (MCQ) format, while the remaining 4 were formulated in a free response format (Fig. 1).
Fig. 1. Google Form survey questions used in the study
The survey questions were checked to establish face validity and then distributed amongst a random sample of dental students to ensure the practicability, reliability, validity, and analysis of responses. The Google Forms questionnaire was sent by email to 200 randomly selected newly graduated dentists and dental interns in multiple countries. The distributed survey ensured anonymity and confidentiality of the participants by requiring no identity-revealing information to be disclosed. Participation in the survey was completely voluntary, which was stated both in the invitation email and in the introductory text of the Google form. The survey questions addressed the demographic and biographic characteristics of the respondents, their role in dental practice, whether they had heard of SDCs and had used any of them previously, as well as their opinion regarding their use and effectiveness. It also included questions addressing the respondents' interest in restorative dentistry in general and in pursuing further information and attending continuing education courses regarding SDCs. The time needed to respond to the survey questions was estimated to be approximately 5 minutes. A reminder email was sent at the end of the second week of the study to non-respondents to encourage their participation in case they had missed the first email. The recorded responses were received within three weeks and the data collected were entered into a Microsoft Excel spreadsheet (n=164).
Statistical analysis of the results was conducted using SPSS computer software (Statistical Package for the Social Sciences, version 19.0, SPSS Inc., Chicago, IL, USA). Descriptive statistics were calculated for the received responses and crosstab tables were created. Responses were presented as frequencies and percentages. The Chi-square statistical test was performed to compare the interns' and newly graduated dentists' (NGDs') responses to the survey questions at a significance level of P < .05.
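As a sketch of the kind of comparison reported in the Results, the following Python snippet runs a chi-square test on a 2x2 contingency table of interns vs NGDs answering yes/no to one survey item. The counts are hypothetical, and SciPy is used here purely for illustration in place of the SPSS software actually used in the study.

# Illustrative chi-square test comparing interns and NGDs on one yes/no item
# (hypothetical counts chosen to match the group sizes of 122 and 42).
from scipy.stats import chi2_contingency

#                 yes   no
table = [[86, 36],    # interns (example item)
         [22, 20]]    # newly graduated dentists

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
if p < 0.05:
    print("Difference between interns and NGDs is statistically significant.")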
Results
A total of 164 responses were received during a 3-week period: 122 dental interns and 42 NGDs. This represented a total response rate of 82%. The results were analyzed statistically, and responses were presented as frequencies and percentages as shown in Table 1. The total responding participants were made up of 84 males and 80 females: 122 interns and 42 NGDs. The mean age of respondents was 25.5 years (Fig. 2).
The proportion of NGDs (71.4%) that reported having a good understanding of different restorative materials available in the market was statistically significantly lower than interns (P=.02) ( Table 3).
The proportion of NGDs (52.4%) who had heard about SDCs was statistically significantly lower than interns (P=0.03) ( Table 4). The difference between the proportion of NGDs and interns regarding the method they heard about the SDCs; articles, internet, or lectures was not statistically significant (P= 0.256) ( Table 5).
The proportion of NGDs (4.8%) reporting to ever have used SDCs in their practice was lower than interns (14.8%). However, the difference was not statistically significant (P=0.09) ( Table 7).
The proportion of NGDs (0%) that reported use of any type of SDCs in their practice was lower than interns (12.3%). The difference was statistically significant (P=0.017) ( Table 8).
The proportion of NGDs (50%) that reported thinking that SDCs effectively regenerate dental hard tissue was lower than interns (70.5%). The difference was statistically significant (P=0.016) ( Table 9).
Almost all NGDs and interns reported that they are willing to attend a training workshop and/ or a continuing education course about SDCs in restorative dentistry. There was no statistically significant difference between NGDs and interns regarding that question (P=0.77) ( Table 10).
Discussion
Composite resin materials are used for the restoration of both anterior and posterior teeth due to their esthetic and conservative characteristics, advantageous physical and mechanical properties, and relative affordability [15,16]. However, conventional composite resin restorative materials merely restore the dental structure that was lost due to caries rather than having any therapeutic effect. Secondary caries is notoriously known as the most commonly detected failure mode in composite resin restorations [17]. Recently, a new generation of SDCs has been introduced, with their focus being prevention of recurrence of dental caries and having a positive effect on caries-affected dental hard tissues [4,18]. Several approaches have been proposed to achieve this goal. Some materials incorporate calcium phosphate nanoparticles into their composition, while others release fluoride to inhibit the development of secondary caries and possibly arrest initial lesions [19,20]. Another group of SDCs works by suppressing biofilm growth, bacterial attachment and acid production, leading to inhibition of the caries process and making it more difficult for bacteria to attach to the surface [4,19,21,22].
Surveys are a widely used method of assessing the knowledge and attitudes of dentists, among other groups [23][24][25]. An acceptable response rate to questionnaires has been approximated at 64% [19]. This survey achieved a response rate of 82%, which is well beyond the suggested 64%. This cross-sectional survey study aimed at investigating the knowledge status and attitudes of NGDs and interns regarding SDCs. This would provide insight into the current status as well as the possibility of incorporating such materials into caries-control clinical practice with more conservative reparative treatment modalities [26]. The results of this study showed that the proportion of NGDs (71.4%) considering specializing in restorative dentistry was almost statistically significantly (P=0.006) higher than that of interns (54.9%). Overall, a majority of both NGDs and interns indicated they would like to specialize in restorative dentistry (59.1% of all respondents). This result agrees with the findings of a study by Halawany et al., which reported that the restorative and esthetic dentistry specialty was preferred by final-year dental students in Saudi Arabia [27].
Although the majority of respondents expressed their interest in restorative dentistry specialization, NGDs reported a significantly lower understanding of dental restorative materials (P=.02). This result contradicts the results reported for Saudi dentists in a previous study by Haider et al., where the longer clinical experience of dental practitioners compared to students and interns influenced their knowledge of restorative materials [28]. This higher perceived knowledge of NGDs compared to interns and dental students was also noted in the choice of restorative material, as reported by Barakah et al. [29].
As a result of the self-reported lack of understanding of restorative dental materials among NGDs, significantly fewer NGDs than dental interns reported ever hearing of smart/bioactive materials (P=.03). This may be due to the fact that many dental interns are still in their respective academic institutions and attending workshops or seminars as part of their training, while NGDs are busier in their daily dental practices and may not be attending seminars regarding new restorative materials. Thus, the longer experience that NGDs had in this study did not increase their reported knowledge of the existence of SDCs or their utilization in dental practice. A study by Nassar et al. reported a similar absence of a positive effect of clinical experience on dentists' use of new bulk-fill restorative resin materials [30].
Consequently, significantly fewer NGDs reported current use of SDCs in their practices (P=.017) or thinking that SDCs effectively regenerate dental hard tissues (P=.016). This result is similar to that from a study by Gomez et al., which found that Colombian dentists did not fully utilize non-invasive conservative smart materials in their practice to control caries [31]. In the current study, there was significantly greater enthusiasm towards future use of SDCs from the interns when compared to NGDs (P<.01).
There was no significant difference between interns and NGDs in how they had heard about SDCs (P=0.26). The internet was reported to be the most frequently used resource by interns (43%) and NGDs (59.1%) compared to other educational resources. This higher preference for electronic resources is in agreement with what Saudi dentists in Riyadh, KSA, reported using when faced with uncertainty in evidence-based dentistry, as reported by Al Malki et al. [32]. However, limited access to resources was a major obstacle to continuing education according to Fedorowicz et al. [33]. There was greater enthusiasm towards SDCs from the interns when compared to NGDs.
Interestingly, the majority of both dental interns (96.7%) and NGDs (97.6%) reported being open to learning more about and potentially using SDCs but indicated a need for more training. This shows a willingness to learn about and try new dental materials/therapies from young dentists, which is a promising finding.
CONCLUSION
Younger dental practitioners are moving towards the application of remineralizing restorative products when treating carious teeth, as indicated by 11% of the surveyed interns using SDCs. Despite the low percentage found, the majority of both groups indicated a willingness to learn more about SDCs. Within the limitations of this study, the authors believe that further training would increase the utilization of SDCs by dentists and ultimately benefit patients.
DISCLAIMER
The products used for this research are commonly and predominantly use products in our area of research and country. There is absolutely no conflict of interest between the authors and producers of the products because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company rather it was funded by personal efforts of the authors.
CONSENT
As per King Abdulaziz University's standard guideline, participants' consent has been obtained and preserved by the authors.
ETHICAL APPROVAL
Ethical approval was obtained from King Abdulaziz University's research ethics committee prior to commencement of the work. The ethical approval letter number is 060-02-19.
COMPETING INTERESTS
Authors have declared that no competing interests exist.
|
2021-09-01T15:14:27.593Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "15aa3039a90f0c3f56b47fb42fe074527b5b1ef8",
"oa_license": "CCBY",
"oa_url": "https://www.journaljpri.com/index.php/JPRI/article/download/31738/59615",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9bd102c102acc53f7fc29e8a0d6717e4592e77de",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": []
}
|
246024003
|
pes2o/s2orc
|
v3-fos-license
|
Serious violence – a challenge and reason for reforms?
One of the great challenges to Nordic criminal law is the widespread perception that our world and our societies are becoming more brutal and that violence and serious crime are increasing rapidly. There is a sense that newspapers, TV news and social media focus on crime, brutality and war more than ever before and that the number of cases of deadly violence with firearms is increasing. The question of law and order and public safety has become one of the most important questions to the public in recent years.1 This coverage influences people in general and politicians specifically. For a politician wanting to take voters' concerns seriously, there is clearly a point in discussing changes on the criminal policy agenda. This may be the case regardless of the empirical research on, for example, the effectiveness of longer prison sentences.2 Voices have been raised from many quarters, arguing that more repression and longer sentences do not reduce crime or violence. Conversely, others have continued to claim that they do.
Introduction
One of the great challenges to Nordic criminal law is the widespread perception that our world and our societies are becoming more brutal and that violence and serious crime are increasing rapidly. There is a sense that newspapers, TV news and social media focus on crime, brutality and war more than ever before and that the number of cases of deadly violence with firearms is increasing. The question of law and order and public safety has become one of the most important questions to the public in recent years. 1 This coverage influences people in general and politicians specifically. For a politician wanting to take voters' concerns seriously, there is clearly a point in discussing changes on the criminal policy agenda. This may be the case regardless of the empirical research on, for example, the effectiveness of longer prison sentences. 2 Voices have been raised from many quarters, arguing that more repression and longer sentences do not reduce crime or violence. Conversely, others have continued to claim that they do.
One of the answers to the great challenges posed by increasing serious crime, gang violence and gang-related criminality could, of course, be to find ways to reduce criminality. Many such solutions could be, and have been, introduced. One way to prevent people from committing crime is to imprison the perpetrators. The Swedish Government has several times argued for increasing penalty scales, and thus the amount of repression in the criminal law, to answer the public's sense of insecurity. The situation is a challenge for criminal law as well as for law enforcement authorities.
Another political response to the increased perception of insecurity may be to reform different provisions of the national criminal law. Violent crime thus can be subject to substantial changes to the penal system, as an answer to this perceived challenge.
In Sweden, the increased public perception of the problem of serious crime has resulted in changes to the criminal code, notably the possibility of a longer sentence for people charged with serious violent crimes, 3 and changed wording when it comes to serious unlawful coercion and unlawful threat. 4 In the preparatory works to these legislative changes, the research on the effectiveness of more stringent sanctions is generally only briefly discussed.
Rising serious crime requires that large resources be allocated to investigations and prosecutions. 5 These resources - police investigations, forensic work, custody and court sessions - must be taken from somewhere. If more cases of serious crime occur, they must be investigated, often rapidly, and prosecuted. This impacts 'less serious' crime, as investigations of more ordinary crimes are delayed and sometimes put aside and not considered a priority compared to the more serious crimes, for example serious assault and murder, and to cases where people are spending significant time in custody. Over the last years, two types of crime have become increasingly mediatised: the so-called 'shootings', mainly involving young men in the outskirts of our main cities, and sexual violence. In recent years we have seen a number of new crimes and offences introduced in the Swedish penal code. The offences concern actions that already constituted crimes, serious crime and offences, but where the legislator has created a new crime for the more serious actions of the kind. This makes the more serious crime stand out and leads to a more severe punishment. The actions in question are qualified as more severe when compared with the 'ordinary' crime and therefore are perceived as extremely serious crime. 7 All sexual crimes have their place in the discussion on challenges to Nordic criminal law and bring forth much of the same questions, arguments and answers as the increasing awareness of and actions towards serious violence. However, changes in law to this category of crime have a different dimension. 9 They require extensive analysis outside the scope of this chapter. Here, I discuss changes in criminal law from the year 2010 and onwards, but I do not offer an exhaustive overview of the regulatory actions taken against serious violent crime.
Earlier reasoning on punishment and consequences
Criminal justice issues are constantly high on the political agenda and are often the subject of public debate. 10 Looking back, these issues have long been relevant and criminal law has been debated, investigated and substantially changed on several occasions over the last century. This section reflects on the foundations of the penalty system, which will be returned to at the end of the article.
The former Criminal Code from 1864 was characterised by the idea that punishment should be determined by merit and in proportion to the crime committed, in accordance with the principle of legality. During the first half of the 20th century, the idea that punishment should be replaced with care and treatment emerged as a reaction to the classic criminal justice school, and these ideas were reflected in the new Penal Code. With the introduction of the new Penal Code in 1965, general prevention and individual prevention became the basic building blocks of the design of the penalty system. Penalty scales were introduced for all crimes in the Penal Code, to highlight proportionality in this system. 11 This treatment concept, which has roots in the school of sociological criminal law, was also criticised, and more and more voices were raised in the 1970s as a result of criminological research on the topic. A report from the Swedish National Council for Crime Prevention, BRÅ, argued that, when choosing a new penalty, one should do away with the treatment ideology and embrace a more realistic view of what can be achieved within the framework of criminal penalties. 14 The report was a breakthrough for so-called neoclassicism, with the introduction of the key notions of penal value and proportionality as driving forces of the penalty choice in Swedish criminal law rather than the idea of treatment or care. The 1980s saw a number of research projects into criminal law and the basic principles of the penalty system. This led to the return of the concept of proportionality in the Criminal Code through the 1988 penalty reform. The new legislation came into force on 1 January 1989. The reform put the principles of proportionality and equivalence at the forefront while somewhat limiting the importance of the prognosis in a certain case. The value of predictability was also emphasised. 15 At the same time, there was a widespread stance on prison sentences among criminal justice researchers and in legislative debates, namely that prison sentences were negative and that imprisonment should be avoided as far as possible. 16 There has since been a number of further inquiries into the penalty system, but apart from changes in 1999, they have not resulted in any extensive or paradigm-shifting changes. What has been done and continues to be done are changes regarding penalties and certain types of crime. 17
Changed perception of serious violent crime
The directives to the 2010 Culpability Reform stated that the public view on serious violent crime had changed. 18 The public opinion on how serious a certain crime or an offence is can change over time on the basis of societal developments. When the public opinion has undergone a more durable change that cannot be accommodated within the frames of the current legislation, the law must change to maintain its legitimacy. Such a change in public opinion is, according to the directives, considered to have taken place. The public tolerance towards violence is reflected in the reporting statistics, as analysed in the BRÅ report 'More severe penalties for serious violent crime'. 20 BRÅ was of the opinion that nothing points towards a situation where violence is increasing or becoming more brutal. 21 However, from the annual National Safety Survey, BRÅ concludes that the exposure to crimes such as assault, sexual crime, robbery and threat is increasing. 22 The Government's view has been that the risk of being affected by permanent physical injuries has radically decreased because of the increased standard of living and technical development. Many measures to prevent accidents are taken in working life and on roads. Nowadays there are also far better opportunities to treat or prevent accidents and diseases thanks to good access to medical treatment and years of progress in the field of medicine. In this context, as everyday life involves less risk than before, violent crime is seen as relatively more dangerous. There is also the knowledge that violent crime causes the victim not only physical injuries but often also psychological ones, and that even relatives of the victim can be affected by worries for the victim and anxiety and insecurity for themselves. According to the Government's proposals, this necessitates more severe penalties. Serious crimes can also provoke fear in numerous people, for example if the crimes are committed in public places or target specific groups of people. 23 There has also been a long-standing conversation on the so-called 'crimes of a certain nature' in the provision in Chapter 30, section 4 part 2 of the Penal Code. This provision states that the court, when determining the sanction, may consider not just the penal value, but also the nature of the crime or crimes. These considerations are normally made at a later stage, when the court decides which sanction to impose upon the offender, but point to a difference in the nature of crimes that according to the lawmaker can be specifically considered. These crimes considered to be of a certain nature include crimes that have become more widespread, crimes that are difficult to prevent and detect, and crimes that have adopted more malignant forms. 24 How these crimes are treated and how the 'certain nature' of these crimes operates is often ambiguous. 25 However, some parallels can be drawn to the current situation, where certain types of crimes are considered more serious and, while they do not lead to an 'automatic' prison sentence, trigger rule changes in the form of a larger possible range of severe penalties. In the preparatory works, the legislator notes that the view of serious violent crime has changed over time and that such crime has come to be seen as more serious than before.
The government proposal 2009/10:147 states that a steady increase in the number of violations reported to the police confirms that some crimes are now perceived as more socially threatening, which justifies reviewing the penalties for these crimes. 26 According to the proposal, '[t]his stricter view should be reflected in a level of punishment for violent crimes that is proportionate to their gravity and is higher than the level of interventions for mainly crimes of property'. 27
A gradually increasing penalty level for serious violent crimes with emphasis on attacks on life or health
The 2010 law amending the Penal Code introduced new rules in the general penalty measurement rule in chapter 29, section 1, part 2. This meant that the courts, when assessing the punishment value, consider the damage, violation or danger that the act caused, what the defendant realised or should have realised about this, and the intentions or motives that the defendant had. In particular, the court should consider if the act involved a serious attack on someone's life, health or safety. The crimes targeted by the measure are mainly gross assault and other serious forms of violence, namely crimes with a penalty value exceeding six months before the reform, such as kidnapping, unlawful deprivation of liberty, gross violation of integrity or of a woman's integrity, robbery and gross robbery, and serious cases of unlawful coercion, unlawful threats, extortion, violence or threats to civil servants, and interference in a judicial matter. Sexual crimes can also be covered by the wording of serious attack, as well as attempts to commit, prepare for and conspire to commit any of these crimes.
Assault, gross assault and extremely serious assault
The crime of gross assault was divided into gross assault and extremely serious assault in 2010. The penalties for these violent crimes have since been further tightened. Shortly after the 2010 reform, the government stated in a 2013 directive that there was a need to review the penalties for serious violent crimes with the aim of achieving an increased level of punishment. 29 The government justified introducing the crime of extremely serious assault on the basis of its apparent disappointment that the penalty scale, as stated in its 2009 Government proposal, 'is used to a limited extent and that punishment is often determined strikingly close to the penalty minimum'. 30 The underlying motive of the legislator, found in all preparatory works creating a particularly serious crime, is to make the courts consider all the circumstances relevant to the punishment value and to determine a proper punishment for each act. 31 Here, the lawmaker hoped that dividing the gross crime into one gross crime and one extremely serious crime would emphasise the objective circumstances characteristic of the most serious violent crime. It was considered that the most serious crimes were so reprehensible that the punishment should be closer to or even higher than the punishment for manslaughter, which has a penalty term of six years. By way of comparison, the proposal notes that the minimum penalty for serious rape and gross robbery is four years. It thus points out that the previous punishment level for the most serious offenses was not proportionate to the seriousness of the crime, nor was it considered to be at a reasonable level compared to other types of crime. 32 The Government's 2017 proposition led to a law that further increased the penalties in Chapter 3, Section 6 of the Penal Code, with a minimum penalty of one year and six months for gross assault and of five years for extremely serious assault. 33 The same law increased the minimum penalty for gross unlawful coercion and gross unlawful threat to nine months and for gross robbery to five years. With the 2017 reform, the government wanted a stronger reaction to intentional serious violent crime than had been the case before the 2010 reform, together with a secure and lasting realisation of the reform's purpose.
Several (criminal law) tools against organised crime
A category of crime that has drawn much attention in both national and international contexts in recent years is crime characterised by a higher degree of organisation. There is no agreed definition. 35 Yet organised crime is considered to pose one of the greatest challenges to the community. It is therefore important that criminal law tools to counter this type of crime be effective. A 2015 Government proposal increased the punishment for attempts to commit, preparation for and conspiracy to commit serious crimes. 36 This applies, among other things, to attempts to commit, prepare and conspire to commit gross unlawful threats, gross extortion, interference in a judicial matter and gross and extremely serious unlawful possession of weapons.
The proposal also contained a call for a new regulation that anyone who has a controlling influence in an association and who does not prevent specific serious crimes within the framework of the association could be sentenced for this negligence. 37 The parallel is found in Chapter 23, Section 6, paragraph 2 of the Penal Code. Criminalising a passive behaviour is not in itself controversial, but so far there had only been limited obligations to, under certain circumstances, report or reveal some serious crimes -the criminalisation for failing to prevent a crime is an odd bird in the Swedish penal forest.
The government further proposed to introduce new circumstances to consider when assessing whether a crime is serious, as a way of ensuring that aggravating circumstances impact sentencing. 38 For crimes containing elements of threat, the assessment of whether the crime is gross must consider whether the act involved threats made significantly more serious by means of weapons, explosives or a weapon dummy, whether it alluded to violence capital, or whether it has otherwise been of a serious nature. 39 Such wordings are, since becoming law, also found in Chapter 4, Section 4, part 2 of the Penal Code, as circumstances that must be considered in the offense of gross unlawful coercion and, in the same way, in the criteria in Chapter 4, Section 5 of the Penal Code, when it comes to gross unlawful threat.
More stringent views on serious crime required increased punishment for unlawful possession and use of weapons
In 2018, 306 shootings and 162 cases of destruction causing public endangerment by blasting occurred in Sweden. Such actions have become a priority issue for both law enforcement authorities and the government. 41 The crime of extremely serious unlawful possession of weapons was introduced in Chapter 9, Section 1 a § of the Offensive Weapons Act, 42 along with some other changes such as penalties and permission issues. This is described in the Government's 2013 proposal for sharpening the Offensive Weapons Act. 43 The regulation targets tools that are often part of serious violent crimes, namely firearms. The proposition emphasises that the punishment for crimes concerning the possession or use of weapons was not changed during the first 20 years of the law, but that there have been changes in crime, for example that criminal groups and gangs have access to weapons to a greater extent than before and that the use of weapons has also increased. An increasing number of shootings resulting in dead or severely injured victims has been observed. According to the government, this development, coupled with a stricter view of intentional attacks on individuals' physical integrity, which also led to the previously mentioned higher penalties for violent crimes and the introduction of a new crime of extremely serious assault, justifies a more stringent view of unlawful possession and use of weapons and more severe penalties for weapon-related crime. 44 The more repressive approach to serious violent crimes and weapon-related crime has also recently led to higher penalties. 45 The penalty for gross unlawful possession and use of weapons has, through the law amending the Offensive Weapons Act, 46 been changed from imprisonment of between one and four years to imprisonment of between two and five years. Simultaneously, the minimum penalty for extremely serious weapon-related crime was raised from three years of imprisonment to four years, with a maximum penalty of six years. The maximum penalty for the normal degree of these crimes was increased from two to three years. The law's amendment took place relatively quickly, which was criticised by the Council of Legislation. The same government proposal also dealt with the illegal occurrence and use of explosives, mainly hand grenades, the use of which has increased in recent years. 48 To counteract this development, it was therefore also proposed that the penalties for gross violations of the permit requirement for explosive goods be changed from imprisonment for at least one and not more than four years to a minimum of two and a maximum of five years; that the minimum custodial penalty for the extremely serious crimes be raised from three years to four years; and that the maximum penalty for breach of the permit requirement for explosive products of the normal degree be changed from two years in prison to three years. The amendments are now included in the law on flammable and explosive goods. 49 Another preparatory work on the topic is a memorandum entitled 'A stricter view of weapon-related crime and smuggling of weapons and explosives'. 50 The proposed amendments in the Offensive Weapons Act would result in more weapon-related crimes being judged as gross or extremely serious and in customs authorities' powers being extended so that consignments assumed to contain weapons or explosive goods, such as hand grenades, can be stopped.
Penalty scale of murder -an ongoing change
In 2014, the provision in Chapter 3, Section 1 of the Penal Code was amended so that the penalty for murder with aggravating circumstances is now life imprisonment. The change aimed at ensuring that life imprisonment be used to a greater extent and constitute the normal punishment in the majority of cases. 51 In February 2016, the Supreme Court ruled that the wording of the law was not consistent with the statements in the preparatory works, and that the wording would take precedence, which meant continuing to assess the punishment value in the same way as the courts had done since the 2009 reform of the penalty scale for murder. 52 Thus, the 2014 enactment to increase the punishment for murder did not have any effect. 53 The situation had almost been foreseen by the Council of Legislation, which had objected to the government's proposal and, among other things, asserted that the proposal's wording would in practice limit the possibilities of using the life sentence. The Council of Legislation had also pointed out that the aggravating circumstances required to receive a life sentence typically exist in a murder case and that it appeared contradictory that such circumstances would exist in most cases. 54 The 2017 memorandum 'Lifetime imprisonment for murder' proposes an alternative wording to increase the use of life imprisonment for murder. 55 The memorandum has been reviewed, with varying responses. 56 In a 2018 proposal, the Government concludes that the purpose of the newly proposed amendment in Chapter 3 Section 1 of the Penal Code is that the lifetime penalty be considered to a much greater extent than before. 57 Overall, the proposal means that the provision for murder would state that a sentence of life imprisonment should be considered when the act was preceded by careful planning, characterised by cunningness, aimed at promoting or concealing other crime, entailed severe suffering for the victim, or was otherwise particularly reckless. 58
A stricter view of crime consisting of attacks on social functions
Attacks, violence and threats against socially useful functions and emergency services, including police and ambulance personnel, have been increasingly noticed in recent times. The National Prosecutor General was appointed special investigator in December 2016 to consider legal changes to achieve a modern criminal law which protects 'blue light personnel' and other socially useful functions, professionals and persons who have the task of helping others. His report questioned whether current penalties reflect sufficiently well the very serious consequences that can arise from attacks on so-called blue light activities, not only for the people who are exposed to the attack but also in relation to the interests that these activities are set to protect. The report also questions whether the existing applicable penalties sufficiently reflect the gravity of the attack on blue light personnel and activities. 59 The sub-report 'Strengthened criminal law protection for blue-light activities and other socially useful functions' suggested a new crime of blue light sabotage and an increased penalty scale for gross violence against civil servants. 60 A new Government proposal did not adopt a proposed change to Chapter 29 of the Penal Code, which would have made it an aggravating circumstance that the accused attacked someone with violence or threat of violence in, or due to, his or her professional practice. 61 The proposed circumstance was deemed too wide. To strengthen the protection of civil servants, the Government agreed with the proposal on violence or threats against civil servants.
Changes in the penal code: Symbolic measures without support in research or an adaptation to societal developments and public views on crime?
The fundamental basis for penalty scales is the seriousness of the crime. The penalty should reflect how serious, reprehensible and blameworthy a certain crime is. A more serious crime should be punished more severely than a less serious crime and equally serious crimes should be punished equally severely. 62 These principles of proportionality and equivalence are expressed in the central provision on sentencing in Chapter 29 Section 1 of the Swedish Penal Code. According to this provision, penalties must be determined within the scope of the applicable penalty scale according to the blameworthiness of the actual crime and in line with the key notion of the uniform application of law. When assessing the penalty value, the judge must consider the damage, violation or danger that the act has caused; what the defendant was aware of or should have been aware of as to the consequences of that act; and the intentions or motives behind that act. It is particularly important to take into account if the act constituted a serious attack on someone's life, health or safety.
The perception of how serious a certain crime is may change over time, notably as a result of societal developments. Other reasons may also influence this perception, such as increased media reporting and more emphasis on victims and public safety. The legislature can influence how a crime shall be punished by determining the penalty scale. The Government considers, as stated in several preparatory works discussed above, that there are reasons for a more stringent approach to serious crimes, including a tightening of penalties. The purpose of hardening sentences for gross and extremely serious crimes is clear. The government's proposal on firearms and explosives, notably, posits that stronger punishments will lead to higher respect for the law. 63 These proposals, new laws and reforms of the criminal system have sometimes been criticised. The proposals have been criticised for having been pushed through too quickly, for being worded in excessively vague terms so that they do not lead to changes in practice, or for not being supported by criminal policy research. Indeed, research suggests that longer penalties may not decrease criminality and that stricter penalties are therefore not the way to go. An important piece of research in the criminological debate about just prison sentences and tighter penalties is a 2010 article by the late professor of penal law Per Ole Träskman. 64 The article discusses several criminal law theories, highlights several reasons for being critical of imprisonment as a punishment, and advocates for reducing the use of prisons. Träskman points to, among other things, research disputing the rehabilitative function and generally preventive function of custodial sentences. The author also emphasises research indicating there is no clear connection between the proportion of the population wanting more stringent penalties and rates of criminality. Conversely, however, there is a connection between people's perception of crime development and calls for more severe punishment. 65 A large body of research makes similar claims as Träskman. 66 Another body of research has made opposite conclusions and argued that prison sentences can have deterrent effects. 67 Westberg, for instance, presents different research on the topic, which has studied, among other things, Italian and French prison amnesties combined with various harsher penalties for recidivism. He concludes that harsh sentencing can deter recidivism. Westberg also presents three studies that posit that longer prison sentences reduce violent crime to some extent and discusses the fact that Sweden has the second lowest carceral population in the European Union. 68 Given this background, what should be the reaction of legislators when initiating changes in penalty scales? Contemporary reforms, according to Asp, are not characterised by any belief that reforming the criminal justice system will have a tangible effect on crime. 69 Asp states that proportionality can be invoked both by, on the one hand, those who have a more instrumental view of criminal law and believe that criminal law can affect individuals' actions and, on the other hand, those who have a more pessimistic view of the importance of criminal law on criminality.
From this very limited glimpse of the criminological debate on levels of punishment, one cannot draw far-reaching conclusions. However, one can state that the Swedish legislator, in preparatory works, has not focused on the question of whether longer punishments have any effect one way or the other. The consistent starting point has been the public's perception of crime and calls for harsher penalties. The legislator has not discussed the effect of such criminal policy reforms. This suggests that public opinion has a greater influence than research on the legislator's view of what is the correct level of proportionality and equivalence between crime and punishment and of what is perceived to be a just and proportional punishment for a certain crime.
|
2022-01-19T16:06:30.492Z
|
2022-01-14T00:00:00.000
|
{
"year": 2022,
"sha1": "e57a2e7d16d268a7b4159ab7975173a5f20d15c0",
"oa_license": "CCBY",
"oa_url": "https://boap.uib.no/index.php/BJCLCJ/article/download/3527/3361",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2d29449c7c1cac63e4597179784e742f9ac7aa70",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
}
|
256217244
|
pes2o/s2orc
|
v3-fos-license
|
Exercise Preferences in Young Adults with Cancer—The YOUEX Study
(1) Background: Strong evidence supports the persuasive positive effects of exercise for cancer patients and survivors. Different approaches of exercise programs have been established; however, the special interests of young adults (YAs) with cancer have rarely been considered in exercise interventions. Therefore, the study YOUng EXercisers (YOUEX) aimed to investigate exercise preferences in YAs. (2) Methods: YOUEX was a three-arm, patient preference-based non-randomized, longitudinal, pre–post exercise intervention, offering three different exercise modules to YAs during or after acute therapy (Module 1: online supervised group-based (M1); Module 2: online unsupervised (M2); Module 3: in-person supervised (M3)). The intervention period was 12 weeks with another 12-week follow-up period, the modules could be changed or amended after 6 and 12 weeks. (3) Results: 92 YAs were allocated to the study. At baseline, 50 YAs (54%) chose M2, 32 YAs (35%) M1 and 10 YAs (11%) M3. The analysis revealed high acceptability and feasibility of the online exercise programs (M1, M2). There was a high impact of the COVID-19 pandemic on the execution of M3. YAs showed diverse preferences in module selection due to differences in, e.g., cancer therapy status or favored level of supervision. (4) Conclusions: YAs need personalized exercise programs that consider their individual interests and needs. Online exercise programs can be a promising addition to existing exercise opportunities. They are an effective way to increase physical activity levels in YAs.
Introduction
A strong body of evidence demonstrates the beneficial psychological and physiological effects of physical activity (PA) and exercise in cancer patients and survivors before, during and after treatment. Hundreds of exercise interventions have revealed the reduction in highly prevalent cancer- and treatment-related side effects, such as fatigue [1,2], physical disabilities [3,4], polyneuropathy [5][6][7], or lymphedema [8,9]. Several systematic reviews and meta-analyses have shown the positive effects on overall quality of life [10][11][12]. Moreover, regular PA during and after cancer treatment is associated with improved treatment efficacy [13] and increased cancer-specific survival rates [14]. Based on the high amount of evidence, the American College of Sports Medicine (ACSM) defined specific exercise guidelines for individual side effects in oncology. They recommend reducing sedentary time [15] and reaching at least 150 min of moderate-intensity exercise (or 75 min of vigorous-intensity exercise) and two strength-training sessions per week [16]. These recommendations correspond to the World Health Organization guidelines on physical activity and sedentary behavior [17]. However, to date, the vast majority of studies have been conducted with cancer patients over the age of 50 years, underrepresenting young adults and their special needs and preferences [18,19]. A review by Munsie et al. [20] highlights the lack of high-quality studies that examine the effects of physical activity in this cohort.
Commonly, the term 'adolescents and young adults' (AYA) includes individuals between the ages of 15-39 years. Today, about 66,000 new cancer cases occur in AYA per year in Europe [21]. Due to improvements in treatment and care, the five-year relative survival of AYA diagnosed with cancer is 80-85%. However, long treatment regimens and periods of isolation away from their peer groups compromise their physical and psychological well-being. Further, long-term sequelae of cancer treatments can range from mild to severe. Late effects involve, for example, cardiovascular diseases, lung problems, high risk for osteoporosis or increased risks of developing other types of cancer later in life [18,22]. In light of the special life situation of AYA, the adoption and implementation of exercise programs need to involve adjustments according to various factors, such as physical and mental health, financial position, time, and family role [23]. At present, there are very few specialized exercise programs that focus on the interests and needs of AYA. Most of them are tailored for children undergoing cancer treatment or childhood cancer survivors [24][25][26][27][28][29]. According to reports of the German Foundation for Young Adults with Cancer, there is a lack of attractive exercise programs for young adults with cancer aged 18-39 (YAs) as well as a lack of research about their feasibility and efficacy [30]. Although most YAs are highly interested in PA support and increasing PA levels [31], only a few studies have examined the feasibility and acceptance of exercise programs in YA [19].
With a focus on YAs, we developed a health care research study, called YOUEX (YOUng EXercisers) that addresses and investigates the needs and preferences of YAs aged 18-39 years to participate in a structured exercise program. Therefore, we implemented three different exercise modules that included different online tools or in-person training sessions. The YOUEX study is based on a comprehensive evaluation design and is supported by the German Foundation for Young Adults with Cancer.
The goal of the YOUEX study was to investigate the feasibility, acceptance and individual module selection of the three exercise modules by YAs with cancer to gain knowledge about how exercise programs should be structured for this young target group.
Study Design
We conducted a three-arm, patient preference-based non-randomized longitudinal pre-post exercise intervention for YAs with cancer with three eligible exercise modules. The main intervention period was 12 weeks with another 12-week follow-up period. We defined four time points for the intervention evaluation (T0: baseline; T1: after 6 weeks of intervention; T2: 12 weeks of intervention; T3: follow-up 24 weeks). The study protocol was approved by the ethics committee of the medical faculty at Heidelberg University (S-932/2020). The study was registered at clinicaltrials.gov (NCT05613699).
Participants and Recruitment
Participants were eligible if they were aged between 18-39 years, had a cancer diagnosis within the past five years and confirmed the study letter of consent. Exclusion criteria were the lack of physical exercise clearance from the attending oncologist (e.g., in case of fragile bone metastases), subjectively perceived cancer-related cognitive impairment, current participation in another exercise intervention or insufficient German language skills. We recruited patients via social media, clinical websites, flyers and from survivor groups of the German Foundation for Young Adults with Cancer from September 2020 to April 2021. Interested patients were contacted via e-mail or telephone for further information and to check inclusion criteria.
YOUEX Exercise Intervention Modules
Patients who fulfilled the inclusion criteria and provided informed consent received a comprehensive exercise consultation via phone or in person at the exercise department of the National Center for Tumor Diseases (NCT), Heidelberg, or the department of sports medicine at Charité-Universitätsmedizin, Berlin. In the first consultation, study coordinators collected information about the cancer diagnosis and therapy, cancer- and cancer-treatment-related side effects, medical history, past and current physical activity levels and the patients' exercise preferences. Further, they explained to the participants that they were free to choose one of three different exercise programs. The different modules were developed in exchange with the German Foundation for Young Adults with Cancer. Based on a survey that was carried out by the foundation and asked young adults for their wishes regarding different exercise options, the following three modules were developed:
Module 1 (M1): supervised, group-based, online exercise program once a week
Module 2 (M2): unsupervised, individual home-based training with an online-training app at least once per week
Module 3 (M3): participation in a supervised, in-person exercise program close to the place of residence at least once per week
At baseline (T0), patients chose one of the three exercise modules. The selected module had to be followed for the first six weeks. After 6 weeks (T1), the initial module could be replaced or amended by another study module. This or these selected module(s) had to be followed for another six weeks. The same procedure was repeated after 12 weeks (T2) of the exercise intervention. The main intervention ended after 12 weeks (T2). Thereafter, participants could voluntarily maintain one or up to three modules for another unsupervised 12-week time period. The follow-up ended at the 24-week time point (T3).
The supervised M1 took place once per week at a fixed, pre-scheduled time, via an online video conference platform. The training sessions lasted 60 min. They always started with a general 10 min warm-up, followed by a 40 to 45 min workout with specific exercises and finished with a 5 to 10 min stretching or relaxation part. The main workout focused on a different aspect of exercise each week (e.g., resistance training for lower extremities, sensorimotor training, home-based endurance training). The aim was for the YAs to learn exercises that they could do independently at home. For participants starting with M1, the study coordinator further recommended independent physical activity, such as walking or cycling, 1-2 times per week or to maintain the current volume of PA. Exercise recommendations for M2 were personalized and included primarily a combination of endurance and resistance training, 2-3 times a week, depending on the patient's needs. Endurance training should be performed with moderate intensity for at least 30 min duration (or less, if the patient needed to adapt the exercise recommendations due to their current health status). The type of endurance exercise (e.g., walking, cycling, swimming) was chosen according to the individual interest of the patient. Resistance training consisted of various strength exercises for the large muscle groups (at least two each for lower and upper extremities and two for trunk muscles) and aimed to improve muscular strength. Additional types of exercise (e.g., sensorimotor training) were added if therapy-related side effects were present. If necessary, exercise trainers conducted one introductory training session as a video conference to check for exercise techniques and answer any individual questions. Thereafter, M2 was executed as an application-guided home-based intervention. M3 was executed in a certified exercise facility of the network OnkoAktiv and supervised by specially qualified exercise trainers. The weekly recommendation was to participate in a personalized in-person exercise program 1-2 times per week plus independent physical activity (or to maintain the current level of PA). Progression of the training load was recommended in each of the three modules if this was possible for the patients.
Due to the differences between the three modules regarding frequency and content, the overall exercise recommendations in all study modules were guided by the current exercise guidelines of the ACSM, aiming to reach at least 150 min of moderate PA per week plus two strength-training sessions per week [16]. The subjects were asked to independently carry out the part of the overall recommendations that could not be achieved via the module.
Physical Activity
Physical activity levels were determined by the standardized Godin-Shephard Leisure-Time Questionnaire [32]. The questionnaire was used to ask for pre-diagnosis, post-diagnosis and pre-post-intervention physical activity levels within three categories: light, moderate and vigorous physical activity in minutes per week. Participants were categorized into a sufficiently active and an insufficiently active subgroup with a threshold of 150 min of moderate or 75 min of vigorous physical activity per week (or a combination of both), based on the ACSM guidelines.
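As an illustration of this categorization rule, the following short sketch (Python; not part of the study software) shows one way the threshold could be applied to a participant's questionnaire minutes. The function name is hypothetical, and the weighting of one vigorous minute as two moderate minutes for the "combination of both" criterion is an assumption, since the exact combination rule is not spelled out above.

# Minimal sketch (not study software) of the activity classification above.
# Assumption: for the "combination of both" criterion, one vigorous minute
# is counted as two moderate minutes; the study text does not specify this.

def is_sufficiently_active(moderate_min_per_week, vigorous_min_per_week):
    """True if the ACSM-based threshold used for the subgroups is met."""
    if moderate_min_per_week >= 150 or vigorous_min_per_week >= 75:
        return True
    return moderate_min_per_week + 2 * vigorous_min_per_week >= 150

print(is_sufficiently_active(90, 40))   # True: 90 + 2*40 = 170 weighted minutes
print(is_sufficiently_active(60, 20))   # False: 60 + 2*20 = 100 weighted minutes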
Module Selection and Exercise Preferences
The YOUEX study is based on a comprehensive evaluation questionnaire that focused on the main outcomes: module selection and exercise preferences. The individual module selections were queried and documented at the first three time points (T0, T1, T2). Reasons for any module selection and why other modules were not selected were collected through open-answer questions; multiple answers were possible. The subgroups of the initial module selection were analyzed according to treatment and employment status. Further, we asked for module preferences under COVID-19-free conditions.
Impact of the COVID-19 Pandemic
The questionnaire about the impact of COVID-19 was self-developed and used internally in other studies at the National Center for Tumor Diseases (NCT), Heidelberg, but has not been published. The COVID-19 questionnaire consisted of six items and surveyed the impact of COVID-19 on a patient's current job situation, leisure-time activities, physical activity levels, self-efficacy, anxiety and mental health. Further, we asked whether COVID-19 had any effect on the participant's module selection. The COVID-19 questionnaire was added later during the ongoing study, in response to the COVID-19 lockdown in November 2020. Therefore, not all participants completed the COVID-19 questionnaire.
Statistical Analysis
The statistical analysis followed an exploratory approach applying descriptive and inferential statistics using the programs IBM SPSS Statistics 28 and Microsoft Excel 2016. The inferential pre-analysis for the normal distribution hypothesis was conducted using the Shapiro-Wilk test and visual inspection of histograms and Q-Q plots. For inference statistics, non-parametric Wilcoxon, Friedman and Pearson chi-square tests were applied. We also conducted the Dunn-Bonferroni test as an equivalent post hoc procedure to the Friedman test. Correlations between the categorical variables were estimated applying Cramer's V. A 95% confidence interval was defined for all significance tests and all tests were two-sided. Due to the exploratory approach, procedures for multiple test adjustments were dispensed with [33]. Effect sizes for median differences were calculated using the Pearson correlation coefficient r. To measure the effect size of the Friedman test, we used Kendall's W. Cramer's V, Pearson's r and Kendall's W were reported according to the interpretation by Cohen (small ≥ 0.1; medium ≥ 0.3; large ≥ 0.5) [34].
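To make the described test choices concrete, the sketch below (Python with SciPy, offered purely as an illustration; the actual analysis was run in SPSS and Excel) shows how the non-parametric tests and the reported effect sizes could be computed. The toy data and contingency counts are invented, and deriving Z from the two-sided p-value for the Wilcoxon effect size r is an approximation.

import numpy as np
from scipy import stats

# Invented toy data: one PA variable at three time points for 10 participants
rng = np.random.default_rng(0)
t0 = rng.normal(100, 20, 10)
t1 = t0 + rng.normal(10, 15, 10)
t2 = t1 + rng.normal(5, 15, 10)

# Friedman test across the time points; Kendall's W = chi2 / (n * (k - 1))
chi2_f, p_f = stats.friedmanchisquare(t0, t1, t2)
n, k = len(t0), 3
kendalls_w = chi2_f / (n * (k - 1))

# Wilcoxon signed-rank test for one pairwise comparison; effect size
# r = Z / sqrt(N), with Z recovered from the two-sided p-value (approximation)
w_stat, p_w = stats.wilcoxon(t0, t2)
z = stats.norm.isf(p_w / 2.0)
r_effect = z / np.sqrt(n)

# Chi-square test of independence (e.g., module choice vs. treatment status)
# with Cramer's V = sqrt(chi2 / (N * (min(rows, cols) - 1)))
table = np.array([[7, 25], [6, 44], [5, 5]])   # invented counts
chi2_c, p_c, dof, _ = stats.chi2_contingency(table)
cramers_v = np.sqrt(chi2_c / (table.sum() * (min(table.shape) - 1)))

print(p_f, kendalls_w, p_w, r_effect, p_c, cramers_v)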
Qualitative Analysis
The qualitative data were analyzed by structured content analysis in Microsoft Excel 2016. We coded all open answers and sorted them into categories based on Kuckartz et al. [35]. Then, we counted the number of codes (quantitative) and sorted them according to their number of occurrences.
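The counting step of this content analysis can be illustrated with a few lines of Python (a sketch only; the actual coding was done in Excel, and the example category codes are invented):

from collections import Counter

# Invented category codes assigned to open answers during content analysis
coded_answers = ["flexibility", "fixed date", "flexibility",
                 "personal supervision", "flexibility", "fixed date"]

# Count occurrences per category and sort by frequency (descending)
for category, count in Counter(coded_answers).most_common():
    print(category, count)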
Results
From September 2020 to April 2021, 106 young adults contacted us regarding participation in YOUEX across all social media platforms which were deployed for recruitment. One-hundred and four of those met the inclusion criteria and 92 patients confirmed the letter of consent and started at baseline (T0). Figure 1 shows the flow diagram from enrollment to analysis including the number of and reasons for dropouts.
Physical Activity
During the primary intervention period, the mean amount of light physical activity did not differ significantly between the time points (see Table 2). Significant improvements were found in both the moderate and the vigorous PA level (Friedman tests; see Table 2). Compared to pre-diagnosis, the following significant differences were determined: PA with vigorous intensity decreased significantly from pre-diagnosis to post-diagnosis (z = 0.819, p = 0.001, r = 0.10) and PA with moderate intensity increased significantly from pre-diagnosis to T1 (z = 0.621, p < 0.05, r = 0.10). There was no significant change from pre-diagnosis to T2. Before diagnosis, 63% fulfilled the international physical activity recommendations of the ACSM. This proportion dropped to 40% after diagnosis.
Initial Module Selection at T0
With regard to module preferences at baseline (T0), 50 participants (54%) chose the online-training app (M2), 32 participants (35%) chose the supervised, group-based online exercise program (M1) and 10 participants (11%) chose the in-person exercise program (M3). However, due to the COVID-19 national lockdown and several restrictions on exercise facilities, M3 could not be offered from November 2020 to March 2021. Participants who performed M3 at that time could switch to either M1 or M2. The most frequently mentioned reason for choosing M2 (56% of a total of 82 qualitative answers) was the flexibility in terms of time. The second most frequently given reasons were personal reasons (e.g., "I want to lose weight") and the personalized training schedule (15% of the 82 qualitative answers each). The reasons for choosing M1 (a total of 56 qualitative answers were given) were the fixed training date (30%) and doing sports with other cancer patients (27%). Sixteen percent of the YAs indicated that M1 was easy to integrate into everyday life. The most common reason for choosing M3 (a total of 19 qualitative answers were given) was the individual supervision by an exercise therapist (47%). Additionally, fixed dates (26%), closeness to residence (16%) and social contact (11%) were mentioned in regard to M3.
Module Change at T1
Eighty of ninety-two participants completed the first 6 weeks of intervention and reached the first time point of intervention (T1). At T1, 43% (n = 34) of participants replaced or amended their initial module (see Figure 2). Eleven participants (14%) replaced their initial module, of which 55% (n = 6) chose M2, 27% (n = 3) chose M3 and 18% (n = 2) chose M1. Twenty-three participants (29%) amended the initial module, of which thirteen patients added M2 to M1 (57%), six patients added M1 to M2 (26%), three patients added M2 to M3 (13%) and one patient added M3 to M1 (4%). Reasons for replacing or amending the group-based online exercise program (M1) were the wish to increase activity through adding another module (33% of 39 given answers) and the wish to receive more individual advice from an exercise therapist (26% of 39 given answers). YAs who chose the online-training app (M2) (a total of 44 qualitative answers were given) named the wish for more interaction with trainers (27%), problems with COVID-19 restrictions (23%) and that they wanted to try another module (14%) as reasons for replacing or amending the initial module. The most common reason for replacing M3 (a total of 7 qualitative answers given) was COVID-19 restrictions (43%). Seventy-four of eighty participants reached the T2 (12-week) time point and the end of the main exercise intervention. Between T1 and T2, 71% took part in one module and 29% took part in two different modules. Table 3 shows the patients' characteristics regarding treatment status and employment within the three exercise modules at T0. A total of 18 patients were undergoing acute therapy during their participation. These patients were distributed relatively evenly across the three modules (M1: 39%, M2: 33%, M3: 28%). Looking at distribution within the module selection, 50% of those who chose M3 were undergoing acute therapy during their participation, while only 22% from M1 and 12% from M2 were under ongoing therapy. The correlation analysis indicated a significant correlation between module selection and treatment status (chi-square (2) = 7.81, p = 0.02, V = 0.29). Of the 36 patients who were employed at the time of the intervention, the majority chose M2 (53%), 39% chose M1 and 8% chose M3. There was no significant correlation between module selection and employment status. Additionally, physical activity level, distress and social media behavior were analyzed but showed no significant correlations.
Impact of COVID-19 Pandemic
Eighty-five percent of the participants reported that the COVID-19 pandemic influenced their module selection. Thirty-three percent expressed that they felt unsafe exercising in local facilities and were afraid of infection. More than half of all surveyed (52%) mentioned that their module selection was influenced by severe COVID-19 restrictions within the exercise institutions. The results coincide with the interest in different modules under COVID-19-free circumstances (see Table 4 and Figure 3). The analysis of the general impact of COVID-19 illustrates that the status and circumstances of employment changed in 60% of the participants due to the COVID-19 pandemic (e.g., changing to home office (26 YAs), changing to short-time work (4 YAs) or other changes (16 YAs) such as constant new regulations as a teacher or extension of parental leave). The impact of the COVID-19 pandemic on the physical activity level was diverse across the YA population. While 39 YAs (53%) stated that their PA level had been reduced a little to a lot due to the COVID-19 pandemic, 27 YAs (29%) reported that they became more active during the pandemic. Seventy-five percent of all the participants felt that their quality of life was impaired, 70% of the YAs felt stressed, 40% felt anxious and 37% felt helpless because of the COVID-19 restrictions during the pandemic.
Discussion
The YOUEX study investigated the exercise preferences and module selections of young adults with cancer. Our analysis revealed high acceptability and feasibility of online training programs. The YOUEX participants showed diverse preferences in exercise selection due to differences in cancer therapy status, interests in exercise options and favored level of supervision.
Physical Activity
Only 40% of all YOUEX participants met the physical activity recommendations after cancer diagnosis and therefore 60% can be classified as insufficiently active. This highlights the importance of exercise programming for YAs, especially right after their cancer diagnosis and during treatment. Interestingly, participation in one or more of the three study modules had a significant effect on YAs' moderate and vigorous PA levels after 12 weeks of the study intervention. Both activity levels increased by about 35%. The number of patients who met the ACSM guidelines after 12 weeks increased from 40% to 59%. A comparison of the self-reported physical activity level before and after cancer diagnosis revealed a significant difference in exercise intensity. YAs decreased their vigorous exercise by about 40% after diagnosis. This phenomenon has also been described by different authors in older cancer patients [37][38][39].
The effectiveness of exercise programs has been investigated by many researchers in the field of exercise oncology [4,11,12,40]. Friedenreich et al. [14] underlined the importance of post-diagnosis PA levels in their recent review and meta-analysis. The authors revealed a significant difference in the mortality rates in cancer patients for those with low vs. high post-diagnosis PA levels. The benefits of physical activity for YAs have also been stated in the review by Munsie et al. [20]. Further, several European studies investigated the positive effects of exercise interventions in children and adolescents [28,41,42], pointing out that monitoring PA levels is important to accomplish positive PA effects [37].
Module Selection and Exercise Preferences
The evaluation of module selection at baseline showed the highest interest in module 2, followed by module 1. However, the COVID-19 pandemic and its comprehensive restrictions had a strong impact on module selections. Since module 3 could not be offered during the majority of the study period, no clear statement can be made about the actual interest in the in-person exercise program. Nevertheless, the hypothetical question about module interest under COVID-19-free circumstances showed very high interest in M3. At the same time, study participants showed high interest in both online modules (M1 and M2). After 12 weeks, more YAs participated in the individual home-based program M2 than in the fixed group-based program M1. The most frequently named reasons for choosing M2 were the flexibility in terms of time and the individual training programming, whereas M1 was chosen because of the fixed training date and being motivated by others.
Further, the module changes at T1 highlighted interesting findings. Less than half of the participants changed their initial module. The amendment or replacement of modules was related to different reasons, such as increasing physical activity, interest in other modules or the wish to receive more individual advice from exercise trainers. Interestingly, despite the possibility to amend the initial module, no increase in PA could be determined between T1 and T2. Overall, comparing the two online modules, M1 (supervised, group-based online exercise program) was less popular than M2 (home-based individual training via app). Around 60% of those who chose M1 at baseline decided to amend or replace that module after 6 weeks. In comparison, only around 30% of those who chose M2 wanted to replace or amend that module. All in all, only around 20% of the YAs took part in two different modules during the 12-week intervention. Adams and colleagues outlined in their study with 533 AYA that the majority of patients preferred home-based (79%) and online (47%) programs, but fewer preferred hospital-based (25%) programs. Further, a significantly higher proportion of AYA preferred individually supervised programs (82%) rather than group-based programs (63%). Interestingly, most AYA preferred to exercise ≥30 min on ≥3 days per week [31]. Another digital health intervention showed that a group-based intervention with a mobile app was accepted by YAs and revealed significantly greater improvements in muscle strength but had limited reach due to the competing needs experienced [43]. A systematic review on social media interventions targeting exercise in people with non-communicable diseases (including cancer) investigated five RCTs that improved exercise behaviors and concluded overall feasibility of social media interventions among specific populations [44].
Our analysis highlights that exercise preferences of YAs are highly individual and diverse. First of all, the patient characteristics showed the high variety of diagnoses, treatment status, employment status, family situations and physical activity levels among YAs. At the same time, the given reasons for choosing a module or for not choosing a module were very diverse; some were even contradictory (e.g., flexibility in terms of time vs. fixed training dates). The subgroup analysis, in which patient characteristics of the three different module groups were examined, yielded only limited insights. A significant correlation between treatment status and module selection was found. YAs undergoing acute therapy seem to prefer supervised training; however, due to the small sample size, non-randomization and limitations in the context of the COVID-19 pandemic, we cannot make any clear statement about which exercise program fits the individual treatment status. Further, neither employment status nor physical activity level, distress or social media usage behavior seemed to have a definite influence on module selection. We therefore conclude that in order to be able to respond to the different needs of YAs, a wide range of exercise programs must be created.
Implementation of the YOUEX Exercise Programs and the Impact of COVID-19
The comparatively small number of young cancer cases in Germany [45] leads to the challenge that region-specific group trainings might not be accessible to all patients. However, the need for high-quality exercise programs during and after cancer therapy still applies to YAs. Digitally offered exercise programs could be a suitable solution for reaching YAs nationwide. Different studies analyzing digital health interventions have shown their feasibility and acceptance in YAs with cancer [43,46,47]. Similarly, this study indicated that the digitally offered modules 1 and 2 were well accepted. Especially during the COVID-19 pandemic, online programs have had many advantages. Compared to the only in-person program (M3), major benefits of M1 and M2 were their independence from pandemic restrictions and the low risk of infection. However, M2 was intended as an unsupervised home-based program, and different studies show significantly greater positive effects of supervised training interventions on treatment-related side effects compared to unsupervised training [11]. Additionally, the qualitative evaluation of M1 and M2 revealed some critical aspects of online programs from the patient's perspective. M1 was supervised in a group context, but a quarter of participants still felt that the training had not been individual enough. Moreover, online supervised training has its boundaries due to technical limitations (e.g., a restricted field of vision). Further, one out of four patients in M2 (online, individual training plan) wished for more personal interaction with the exercise therapist.
Limitations
Our study needs to be interpreted in light of several limitations. First, our data showed a considerable selection bias, which resulted in a non-representative group of YAs. Our participants were mostly already physically active, had a high educational level, and were predominantly women and non-smokers. We conclude that our reach was limited to an already interested, active group of potential YAs who were active on social media and engaged in self-help groups. Regarding the methodological approach, our study was a non-randomized intervention without a control group. A randomized, inactive control group could have revealed causal differences in the patient-related outcomes and would have allowed a more robust interpretation of our results. Further, when interpreting changes in the PA level, it is imperative to consider that the study participants only subjectively estimated their PA level; no objective measurement method was used. Götte et al. highlighted that PA should be assessed by objective methods in pediatric cancer patients [48]. With the Godin-Shephard Leisure-Time Questionnaire, no distinction can be made between endurance and resistance training. Additionally, the study questionnaire did not cover concrete PA levels at baseline and only asked for the pre- and post-diagnosis PA levels. In addition, there is a lack of data on the adherence of the participants to each module, which must be taken into account when interpreting the results. The different modules vary in terms of frequency, content and volume; a comparison of the modules with regard to their effects on physical activity is therefore not possible. We also included a self-developed questionnaire about the impact of COVID-19, three months after we had started patient recruitment. Overall, the YOUEX study was strongly impacted by the COVID-19 pandemic. The COVID-19 restrictions biased our measured outcomes (e.g., PA level), module selections and the entire execution of M3. The impact of COVID-19 must therefore be kept in mind when interpreting our study outcomes.
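To illustrate why the questionnaire cannot separate training modalities, the sketch below reflects the commonly used scoring of the Godin-Shephard Leisure-Time Questionnaire, in which weekly exercise bouts are weighted only by intensity. The weighting factors follow our understanding of the standard instrument and are not taken from the YOUEX study itself.

```python
def godin_leisure_score(strenuous: int, moderate: int, mild: int) -> int:
    """Weekly leisure activity score of the Godin-Shephard questionnaire.

    Each argument is the number of >=15-minute exercise bouts per week at
    that intensity. Because bouts are weighted only by intensity, endurance
    and resistance sessions of the same intensity are indistinguishable.
    """
    return 9 * strenuous + 5 * moderate + 3 * mild

# Example: 2 strenuous, 3 moderate and 1 mild bout per week
print(godin_leisure_score(2, 3, 1))  # -> 36
```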
Further Research
YOUEX has shown that social media tools are effective for participant recruitment in our young target group. However, it is not clear how to reach the broad range of YAs with respect to their interest in PA participation, cancer diagnosis or treatment status. Further, the question of how to ensure adequate training stimuli and the right dose-effect relationship in the context of online and/or home-based exercise programs remains unanswered and should be the subject of further research. Therefore, more data on the adherence of YAs participating in (online) exercise programs are needed. The long-term effects of online exercise programs in YAs are also yet to be evaluated and should be taken into account in future studies. Moreover, the question of how specific exercise offers for YAs can be implemented into existing healthcare structures remains open. Further interventions should focus on the adaptation, long-term implementation strategies and maintenance of exercise programs (including the long-term adherence of YAs) to provide a sustainable impact on PA levels and health-related improvements for this target group.
Conclusions
We found that young adults with cancer recruited via social media and different online websites have a wide range of interests and needs regarding exercise programs during and after cancer therapy. According to our findings, YAs need specific exercise programs that accommodate their individual interests and needs. In this context, online exercise programs in different forms (e.g., group-based or individual programs) can be an addition and/or an alternative to existing exercise options. It must be underlined that such online programs were highly accepted in our study (which partly took place during lockdown periods of the COVID-19 pandemic) and can be effective in increasing YAs' physical activity levels.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the medical faculty at Heidelberg University (S-932/2020).
Informed Consent Statement:
Written informed consent has been obtained from the patients to publish this paper.
Data Availability Statement:
The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest: J.W. invented and founded the network OnkoAktiv and is currently a member of the association board. The other authors declare no conflict of interests. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Two cases of female hydrocele of the canal of Nuck
In women, the processus vaginalis within the inguinal canal forms the canal of Nuck, the homolog of the male processus vaginalis. Incomplete obliteration of the processus vaginalis causes an indirect inguinal hernia or a hydrocele of the canal of Nuck, a very rare condition in women. Here, we report 2 cases of hydrocele of the canal of Nuck that were diagnosed with ultrasonography in both cases and with magnetic resonance imaging in 1 case to confirm the sonographic diagnosis. High ligation and hydrocelectomy were performed in both patients. In 1 patient, 14 months later, a contralateral inguinal hernia was suspected but did not require surgery. The other patient had a history of surgery for a left inguinal hernia 11 months before the occurrence of a right hydrocele of the canal of Nuck. In both cases, the occurrence of an inguinal hernia on the contralateral side was noted.
Introduction
The round ligament is attached to the uterus, and a small evagination of the parietal peritoneum accompanies the round ligament through the inguinal ring into the inguinal canal in the female 1) . This small evagination of the parietal peritoneum is the canal of Nuck, which is homologous to the processus vaginalis in males. The canal of Nuck is normally obliterated in the first year of life. Failure to achieve complete obliteration results in an indirect inguinal hernia or hydrocele of the canal of Nuck 1,2) . Here, we report two cases with ultrasound and magnetic resonance (MR) findings and review the literature. Several cases of this rare anomaly have been reported, but MR findings have not previously been reported in the Korean literature. Kim et al. 3) reported three cases which were diagnosed with sonography and CT. Park et al. 4) reported one case which was diagnosed with sonography and MR imaging. In both of our cases, the occurrence of an inguinal hernia on the contralateral side was noted.

MR imaging was requested to confirm the ultrasound finding and to identify the relations with the surrounding structures. MR imaging showed a cystic mass in the right inguinal region (Fig. 2). The mass was a well-defined cystic lesion without abnormal septation or a solid portion.
High ligation and hydrocelectomy were performed. The round ligament with the tense cystic mass was excised. The pathologic examination revealed findings compatible with a hydrocele of the canal of Nuck. Fourteen months later, she revisited the hospital because of a left inguinal mass that was considered to be a left inguinal hernia. However, the mass reduced spontaneously and did not protrude during the next 4 years.
Case 2
An 18-month-old female patient presented with a small, tender right inguinal mass. Ultrasound examination revealed a small, well-defined cystic mass of about 0.74 cm × 0.63 cm (Fig. 3). She had a history of herniorrhaphy for a left inguinal hernia, which had been diagnosed with ultrasound examination at the age of 7 months. High ligation of the hernial sac and hydrocelectomy were performed, and the diagnosis was confirmed by the pathology report. There was no occurrence of a contralateral hydrocele or hernia during the next 3 years.
Discussion
The processus vaginalis, which is homologous to the canal of Nuck, may be obliterated at any point between the internal inguinal ring and the scrotum, or obliteration may be incomplete. These variations account for the diverse classification of hernias (the extra-abdominal passage of viscus) and hydroceles (the extra-abdominal passage of peritoneal fluid), including complete or scrotal hernias, communicating or non-communicating hydroceles, hydroceles of the spermatic cord in males, and hydroceles of the canal of Nuck in girls 5) .
Hydroceles are common in children, especially in boys, but hydrocele of the canal of Nuck is an uncommon clinical presentation 6) .
Although the exact process is unclear, it is generally agreed that obliteration of the processus vaginalis occurs only after the seventh month of gestation 7) , thus accounting for the higher incidence of inguinal hernias in premature infants 8) . Although not clearly defined, there is a known hereditary factor associated with the occurrence of hernias, which are more frequent in twin gestations and in infants who have a family history of hernia 9) .
Swelling of the inguinal region in a female may result from a number of conditions, including inguinal hernia, tumor (lipoma, leiomyoma, sarcoma), cyst, abscess, lymphadenopathy, or hydrocele of the canal of Nuck 1) .
In most cases, a hydrocele of the canal of Nuck must be distinguished from a hernia. Bowel sounds over the swelling are strongly suggestive of a hernia.
Hydrocele of the canal of Nuck typically presents as a painless, translucent swelling in the inguinolabial region 6) . There is no nausea or vomiting. If possible, the child should be examined in the supine and standing positions 6) . In older children, Valsalva's maneuver should be performed to rule out hernia. If the bulge is apparent in the standing position and disappears while supine, a hernia is more plausible 10) . Infection is an uncommon complication of hydroceles in children, and until now only 5 cases have been reported in the English language literature, all in boys. Infected hydrocele of the canal of Nuck is rare 11) .
In the past, a palpable inguinal mass was removed via surgery without an imaging study. Currently, sonography is the imaging modality of choice for evaluation of this type of mass 12) .
Because sonography is a relatively easy, accessible and accurate procedure 4) , we recommend that sonography be performed routinely in females with an inguinal hernia containing an asymptomatic movable mass. However, sonography cannot determine whether there is a communication between the cystic mass and the peritoneal cavity. MR imaging can give more precise images, including septation and a communication between the cystic lesion and the peritoneal cavity, as well as information on the anatomical relation with adjacent structures 4,13) . Therefore, MR imaging can help to establish the diagnosis in patients with an inguinal cystic mass. The differential diagnosis of a cystic mass in the female groin region includes round ligament cysts, varicosities of the round ligament, inguinal herniation of the ovary, cystic lymphangiomas, epidermal inclusion cysts, abscesses, and pseudoaneurysms 14) .
The final diagnosis is made during surgery and confirmed by pathological examination 6) . Surgical resection of the hydrocele and ligation of the neck of the processus vaginalis should be considered standard therapy. Han et al. 15) reported that a contralateral hernia occurred in 2.6% of patients who had been operated on for a unilateral hydrocele. They also reported that the occurrence of a contralateral hydrocele after the operation of a unilateral hydrocele was 2.0% in males. Flum et al. 16) reported that the recurrence rate of inguinal hernia is about 10%, but recurrence rates for hydrocele of the canal of Nuck have not yet been reported. In our cases, the inguinal hernia occurred on the side contralateral to the hydrocele, and an operation was needed in one case. Therefore, one should be aware that the occurrence rate of a contralateral hernia is not low in patients with hydrocele of the canal of Nuck. Thus, a hydrocele of the canal of Nuck should be considered in patients who have an inguinal cystic mass and a history of an inguinal hernia on the other side.
In these cases, we suggest that MR imaging can help to characterize an inguinal cystic mass in females when the sonographic findings are equivocal. Because the inguinal hernia on the other side was detected either before or after the operation for the hydrocele in both cases, a hydrocele of the canal of Nuck should be considered in patients who have an inguinal cystic mass and a history of an inguinal hernia on the other side. The risk of a contralateral inguinal hernia should also be considered in patients with a hydrocele of the canal of Nuck.
Assessing Primary Neurogenesis in Xenopus Embryos Using Immunostaining
Primary neurogenesis is a dynamic and complex process during embryonic development that sets up the initial layout of the central nervous system. During this process, a portion of neural stem cells undergo differentiation and give rise to the first populations of differentiated primary neurons within the nascent central nervous system. Several vertebrate model organisms have been used to explore the mechanisms of neural cell fate specification, patterning, and differentiation. Among these is the African clawed frog, Xenopus, which provides a powerful system for investigating the molecular and cellular mechanisms responsible for primary neurogenesis due to its rapid and accessible development and ease of embryological and molecular manipulations. Here, we present a convenient and rapid method to observe the different populations of neuronal cells within Xenopus central nervous system. Using antibody staining and immunofluorescence on sections of Xenopus embryos, we are able to observe the locations of neural stem cells and differentiated primary neurons during primary neurogenesis.
Introduction
In vertebrates, the development of the central nervous system comprises several distinctive yet consecutive stages. The first step is neural induction, when naive ectodermal cells are specified toward a neural fate rather than an epidermal fate. Several interconnected regulatory mechanisms are involved during this stage in Xenopus and other model systems 1,2 . This process is primarily coordinated by secreted factors produced by the underlying mesoderm, such as chordin, noggin, and follistatin [3][4][5][6][7] . After neural induction, a subset of neural progenitors exit the cell cycle and begin to differentiate in a process referred to as primary neurogenesis. Not all neuronal precursors differentiate at this time. The remaining neural precursor cells continue to proliferate, thereby maintaining the stem cell pool needed for the continued growth of the central nervous system throughout development and into adulthood.
These proliferating neural precursor cells are characterized by their expression of the SRY (sex determining region Y)-box 3 (sox3) gene [8][9][10][11] . The other population of cells, which exit the cell cycle and commit to a differentiated fate, are identified by the expression of the differentiation gene markers tubulin, beta 2B class IIb (tubb2b, N-tub) and myelin transcription factor 1 (myt1) [12][13][14] . Such differentiated neuronal cells eventually give rise to different categories of neurons including, but not limited to, motor, inter-, and sensory neurons positioned in distinct areas within the neural tube [15][16][17] . While significant efforts have been devoted to uncovering the regulatory mechanisms that govern the patterning and fate-determining events in the anterior neuroectoderm, less attention has been paid to investigating the neurogenic events that occur following the initial patterning stage. Indeed, signal transduction, transcriptional regulation, as well as post-translational modifications are all involved in this later stage, controlling both timing and lineage specification during neurogenesis [18][19][20] . Further investigations into these mechanisms require a reliable method to easily visualize and distinguish different populations of neuronal cells. The above-mentioned neural markers, including Sox3, Myt1, and N-tub, can provide a means for identifying these different cell populations, thus providing the necessary foundations for revealing the underlying mechanisms of neuronal differentiation [21][22][23] .
Although differential labeling of neuronal cell populations has been demonstrated in other model organisms, relatively few studies have exploited the Xenopus system to its fullest in this regard. This is mainly due to a paucity of compatible antibodies that reliably identify the various neuronal cell populations in the neural tube. Here, we describe a method for visualizing neuronal differentiation in early Xenopus embryos via immunostaining, which provides a robust and convenient approach for investigating primary neurogenesis in Xenopus. This protocol should give sufficient guidance for researchers interested in the early development of the Xenopus central nervous system between stage 26 and stage 45.

Collection and Fixation of Xenopus Embryos

1. Prepare Reagents and Materials for Experiments. for at least 1-2 weeks). 8. Prepare 4% paraformaldehyde in TBS (for subsequent staining involving phalloidin) by dissolving 4 g of paraformaldehyde powder in 100 ml of 1x TBS solution. Heat the solution to 60 °C and add a few drops of 10 M NaOH to assist dissolving. Aliquot in 5-10 ml volumes and freeze at -20 °C. Do not re-freeze once thawed. CAUTION: Paraformaldehyde powder is an irritant and is toxic if inhaled, thus the weighing step should be performed in a fume hood. 9. Label as many 4 ml glass vials with screw caps as needed prior to sample collection. 10. Prepare 15% gelatin/15% sucrose by pouring 20 ml of 40% fish gelatin (pre-heated in a 50 °C water bath) into a 50 ml centrifuge tube. Add 8 g of sucrose and fill the tube to the 50 ml line with 1x TBS. 11. Place the gelatin tube on a rotary mixer or rolling bed to mix overnight at room temperature. This gelatin solution is stable at 4 °C for 1 week. Do not use expired solution and do not freeze-thaw.
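The reagent steps above reduce to simple C1V1 = C2V2 dilution arithmetic (for example, 20 ml of 40% fish gelatin brought to 50 ml gives 16% gelatin, and 8 g sucrose in 50 ml gives 16% w/v, close to the nominal 15%/15%). The sketch below is only an illustrative calculator for planning such dilutions; it is not part of the published protocol.

```python
def stock_volume_needed(stock_pct: float, final_pct: float, final_vol_ml: float) -> float:
    """Volume of stock to add so that C1*V1 = C2*V2."""
    return final_pct * final_vol_ml / stock_pct

def percent_wv(grams: float, final_vol_ml: float) -> float:
    """Weight/volume percentage of a solute dissolved to a final volume."""
    return 100.0 * grams / final_vol_ml

# Gelatin: volume of 40% stock needed for exactly 15% in 50 ml
print(stock_volume_needed(40.0, 15.0, 50.0))  # ~18.8 ml (the protocol uses 20 ml -> 16%)
# Sucrose: 8 g brought to a 50 ml final volume
print(percent_wv(8.0, 50.0))                  # 16.0 % w/v
```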
2. Prepare and Fix X. laevis or X. tropicalis Embryos Cultured to Desired Stages. 1. Culture the fertilized X. laevis or X. tropicalis embryos in 0.1x MMR with gentamycin until the desired stages. NOTE: Generally, collect embryos between stages 23 and 40. Later collection of embryos after stage 40 is possible, especially when observing axonal growth from the spinal cord, but keep in mind that additional gelatin penetration time may be required. The embryos may be wild-type, transgenic, mutant, inhibitor-treated, Morpholino (MO)-injected, or electroporated 23,24 . 2. Collect 20-50 embryos in each 4 ml glass vial, remove as much medium as possible and replace with MEMFA. NOTE: If subsequent staining involves phalloidin, use 4% paraformaldehyde instead of MEMFA since commercially supplied formaldehyde solutions usually contain up to 10% methanol as a stabilizer that will interfere with phalloidin staining. 3. Fix overnight at 4 °C or, if urgent, at room temperature for 2 hr on a rotary mixer. The embryos will be stable in fixation solution for at least 1 week at 4 °C. 4. After fixation, wash the embryos 3 times for 20 min using 1x TBS with 0.05% Triton X-100. After the final wash, remove as much TBS-Triton as possible and add 3 ml of 15% gelatin/15% sucrose into each vial. 5. Place the vials on a roller bed overnight at room temperature. For embryos older than stage 40, use at least 24 hr of penetration time.
After penetration, proceed immediately to section 2.2 on the next day.
Mounting and Cryosectioning of Xenopus Embryos
1. Prepare Reagents and Materials for Experiments. 1. Take one box of positively charged slides, ideally unopened. NOTE: If opened, keep the slides in a dry condition (such as a dry box) and use within 1 month to ensure the static charge on the slides is maintained. Do not use expired slides since samples will fall off during immunostaining. 2. Pre-chill the cryostat chamber to -30 °C. Set the instrument parameters as -35 °C for microtome and 12 µm section thickness. Install the thick cover glass plate onto the stage and let the cryostat equilibrate for at least 30 min before section starts. 3. Prepare painting brushes for moving section strips, keep inside the cryostat chamber. 4. Prepare pencils for writing on slides, put them at room temperature. 5. Wear gloves during cryosection. Do not use bare hands.
Mounting Xenopus Embryos
1. Carefully aspirate 5-10 embryos out of the glass vial using a plastic or glass pipette without introducing air bubbles. Transfer the embryos into the mounting chamber and observe under a stereoscope. Fill up the mounting chamber with gelatin solution to ensure the rigidity of the section block.
2. Arrange the embryos as per Figure 1 using a pair of fine-tip forceps. Mark the orientation of the heads by drawing an arrow on the rim of the chamber using a cryogenic-compatible marker. For multiple groups of embryos, write the description of each group on the rim of the corresponding chamber as well. 3. Carefully place the chamber horizontally in a foam box half-filled with dry ice and close the lid. Observe the mounting chamber freeze in 5-10 min. Process each chamber serially (i.e. place the previous chamber onto dry ice before proceeding to the next one), as this leaves sufficient time for each chamber to freeze and prevents the dry ice box from becoming overcrowded. NOTE: Frozen mounting chambers do not need to stand horizontally and can be stacked up inside the box. 4. Proceed with cryosectioning or, if required, keep frozen samples at -80 °C for at least 1-2 weeks without losing immunogenicity.
3. Cryosection of Mounted Xenopus Embryos 1. Remove a frozen sample block from the chamber by pressing the bottom of the chamber using a blunt stick (such as a pencil). 2. Add several drops of tissue-freezing medium (a polyethylene glycol and polyvinyl alcohol-containing medium) onto the sample holding disc and mount the sample block with the anterior end of the embryos pointing up (Figure 2). Allow the mounted block to stand inside the cryostat chamber for approximately 1 min or until the tissue-freezing medium becomes opaque. 3. Immediately install the sample holding disc onto the microtome with the bottom side of the sample block facing up. Trim off a portion of the sample block using a blade while the sample block is still relatively "soft" to reduce the length of each section (if desired). Leave the sample holding disc on the microtome for at least 5 min to allow its temperature to reach equilibrium. 4. Gradually trim the sample block down until the heads of the tadpoles are visible through the translucent gelatin. To increase trimming speed, apply a higher (thicker) section thickness setting (e.g. 20-25 µm) at this stage (such as enabling the "trim" option), but not too thick because the sample block can fall off the sample holding disc. Observe the performance of the cryostat, such as the sharpness and angle of the blade, at this stage to ensure the generation of a long strip of subsequent sections. 5. Once the tadpole heads become visible, adjust the settings of the cryostat back to normal (e.g. 10-12 µm) and clean both the microtome and stage using a paint brush. Gently make 2-3 sections as a trial using the hand wheel setting (do not use the motorized setting) to make sure that the finished slices form strips without overlapping or sticking to the blade. 6. Continue trimming until the heads of the tadpoles are almost exposed (this may require some experience but is achievable). Brush off any remnant sections on the stage; the collection of sample sections will then begin. 7. Make approximately 10-15 sections and let them form a long strip. 8. Flip the thick cover glass plate to the side and gently remove the strip from the blade using a fine-tip paint brush, arranging it on the stage with its long axis parallel to the blade. 9. Take one positively charged slide at room temperature, label it with a pencil (which will not be washed off during subsequent staining treatments), press it quickly but firmly onto the strip with the label side facing down, and remove the slide from the cryostat chamber. If done correctly, the strip should immediately stick to the positively charged slide (Figure 3A). 10. Repeat this step to place 20-30 slices on each slide, arranged in parallel (Figure 3B). This is sufficient to cover the entire brain region of X. tropicalis embryos and most of the forebrain and midbrain regions of X. laevis embryos. 11. Air-dry the slides for 10 min and proceed immediately, or store the slides in a slide box at -80 °C for at least 3-6 months without losing immunogenicity.
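As a rough planning aid, the number of sections needed to span a region of interest follows directly from the section thickness; the sketch below uses the 12 µm thickness and the 20-30 sections per slide mentioned above, with the region length as a purely hypothetical input.

```python
import math

def sections_needed(region_length_um: float, thickness_um: float = 12.0) -> int:
    """Number of serial sections required to span a region of given length."""
    return math.ceil(region_length_um / thickness_um)

def slides_needed(n_sections: int, sections_per_slide: int = 25) -> int:
    """Slides required if each slide holds roughly 20-30 sections (default 25)."""
    return math.ceil(n_sections / sections_per_slide)

# Hypothetical example: a 600 um long brain region cut at 12 um
n = sections_needed(600)
print(n, slides_needed(n))  # 50 sections -> 2 slides
```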
Immunostaining of Sectioned Xenopus Embryos
1. Prepare Reagents and Materials for Experiments. 1. Prepare 0.05% TBS-Triton X-100 by adding Triton X-100 to 1x TBS to achieve a final concentration of 0.05%. This solution is stable at room temperature for 3-4 days. Do not use expired solution. 2. Prepare 5% heat-inactivated goat serum or 5% BSA in TBST as blocking buffer. First, heat-inactivate the goat/lamb serum by placing approximately 20-30 ml of serum in a 65 °C water bath for 30-60 min. Aliquot into 1.5 ml centrifuge tubes and freeze at -20 °C. No re-inactivation is required before use. 3. Dilute the heat-inactivated serum or BSA in TBST to achieve a final concentration of 5% before use. 4. Prepare appropriate primary antibodies according to dilutions in blocking buffer (e.g., 1:500 anti-Sox3). 2. Immunostaining 1. Remove the frozen slides from -80 °C and place them on a piece of towel paper inside a ventilation hood for at least 1 hr to eliminate any condensed water droplets, then bake the dried slides on an 85-90 °C heat block with the slices facing up for 15 min to activate the adhesion mechanism. Finally, allow the slides to cool down to room temperature (at least 10 min). 2. Fill the staining jar with pure acetone and incubate the slides in acetone for 10 min to remove fish gelatin. If multiple slides are to be treated, arrange them on a staining rack. Air-dry the treated slides for 15 min in a ventilation hood. Do not re-use the acetone. 3. Carefully draw a ring around the samples on the slide using a PAP pen without touching the samples. Ensure that the ring is self-enclosed; otherwise antibody solution will flow out during staining. Fully dry the PAP pen ring. 4. Fill another staining jar with 1x TBS without Triton X-100. Insert the dried slides into the staining jar and allow rehydration for at least 1 hr. In the meantime, prepare the blocking buffer by diluting either heat-inactivated goat serum or BSA to a 5% final concentration using 1x TBS with 0.05% Triton X-100. 5. Make a wet box by placing 1-2 6-well plates inside a click-lock food box and filling the wells halfway with ultrapure water. Remove the rehydrated slides from the staining jar and place them horizontally on the 6-well plates (without the plate lid).
6. Carefully add 300-600 µl of blocking buffer inside the PAP ring. Seal the wet box by locking the lid into position and incubate at room temperature for at least 1 hr. 7. Dilute the appropriate primary antibodies in blocking buffer. Generally, use 100-150 µl of diluted antibody solution per slide. If multiple slides are processed, scale up the volume proportionally. 8. After blocking, carefully remove the blocking buffer from the slides by aspiration and quickly add the primary antibody solution to prevent drying out, then seal the wet box and incubate at 4 °C overnight. 9. On the next day, remove the primary antibody solution from the slides by aspiration. Wash the slides by inserting them into staining jars filled with 1x TBS with 0.05% Triton X-100, 3 times, 15 min each. Meanwhile, dilute the appropriate fluorescent secondary antibodies with blocking buffer (with or without DAPI or phalloidin). 10. Carefully add 100-150 µl of secondary antibody solution onto the slides. Place the slides inside the wet box and seal the lid. Incubate the slides for 1-2 hr at room temperature. 11. Wash the slides in staining jars filled with 1x TBS with 0.05% Triton X-100, 3 times, 15 min each. During the final wash, thaw the anti-fade mounting medium in a 50 °C water bath for 10 min if stored at -20 °C. Cool the mounting medium to room temperature before use. 12. Add approximately 20 µl (one drop) of mounting medium onto the slide and apply a large cover slip (at least 22 mm x 64 mm) over the samples. Observe using a fluorescence or confocal microscope within 6 hr post mounting.
1. If imaging cannot be done on the same day, seal slides using nail polish and place them inside the wet box at 4 °C overnight. NOTE: The anti-fade mounting medium will gradually get oxidized after a few days (and turn brownish) so it is recommended to take the images as soon as possible.
Representative Results
The representative results show cross-sections of stage 30 Xenopus embryos at different levels, namely the forebrain, midbrain, hindbrain, and spinal cord, stained with different antibodies (Figure 4). As mentioned, the Sox3-labeled neuronal progenitor cell population is located in the proximity of the lumen of the neural tube, while MyT1-labeled differentiated primary neurons migrate outward and are located near the marginal layer (basal lamina) of the neural tube. Anti-acetyl-tubulin labels axons of differentiated neurons, which can be observed throughout the neural tube.
Side-specific gene knockdown or overexpression is a broadly used method in neurobiology to evaluate whether modulating the expression of specific gene(s) disrupts the growth and differentiation of neuronal cells. In such cases, either Morpholinos (MOs) or DNA construct(s) carrying promoter-driven gene(s) are injected into one of the two blastomeres at the two-cell stage 23 . Alternatively, they can be injected into the brain ventricle followed by electroporation, which results in knockdown or over-expression of the desired gene(s) 27 . In both cases, the effects are restricted to one side of the embryo, leaving the opposite side of the embryo as an untreated internal control.
After fixation, sectioning, and immunostaining, the impact of gene knockdown/over-expression is quantified by counting and comparing the cell numbers of the different populations on both sides of the embryo. By accumulating these data from several embryos, statistical analysis can be performed. In our representative results, no perturbation was made in the embryos. Examples of gene perturbation and their effects on different neuronal cell populations can be found in the references.

The protocol is generally robust with a very high level of reproducibility. For embryos at pre-hatching stages (i.e. up to stage 28), we recommend manually removing the vitelline membrane prior to fixation, as this allows the embryos to fully uncurl. This is especially important when collecting embryos at relatively early stages (before hatching), since bent embryos are extremely difficult to arrange in the desired orientation inside the mounting chamber. We have not experienced a loss of immunogenicity after MEMFA or PFA fixation, hence an additional antigen retrieval step is not necessary for this protocol. In addition, this immunostaining procedure can be carried out on samples from chromogenic/fluorescent in situ hybridization. In such a scenario, it is recommended to skip the proteinase K treatment step during the in situ hybridization protocol. After the chromogenic/fluorescent substrate reaction, samples can be washed with TBS and embedded in fish gelatin solution in a similar way to MEMFA/PFA-fixed samples. Sample vials can be protected from light by wrapping them in aluminum foil if fluorescent in situ hybridization will be imaged together with immunofluorescent staining later.
In some cases, embryos may shrink excessively after gelatin penetration. This is most likely caused by either insufficient fixation or insufficient TBS-Triton extraction. Ensure that embryos have been fixed in PFA/MEMFA for at least 2-3 hr or, preferably, overnight at 4 °C. If the problem persists, increase the TBS-Triton washes to 3 washes of 1 hr each.
During cryosectioning, the rigidity of the sample block may vary, as different batches of fish gelatin have different properties. This problem can be partially compensated for by adjusting the temperature of the microtome. A lower temperature results in a "harder" sample block; however, at excessively low temperatures the sample block becomes brittle and difficult to section. Some fine-tuning or practice (using mock sample blocks without embryos) before each sectioning session may be beneficial prior to use on particularly valuable samples.
A commonly encountered problem during sectioning is that the thin sections sometimes tend to stick to the thick glass plate rather than stay flat on the steel stage. This can be particularly disruptive since it constantly interrupts the continuous sectioning process. Such cases are usually caused by one of two reasons: the sectioning chamber and (especially) the thick glass plate not being cold enough, or excessive static charge on the machine and the operator, especially in dry weather. The former can be solved by turning down the temperature of the sectioning chamber and leaving the glass plate inside for extra time (possibly overnight), and the latter by properly grounding the cryostat. Connecting the metal surface of the cryostat to a metal tap or similar metal water tubing can be an alternative way to release the static charge. After each use, the thick glass plate should be washed well with a non-corrosive detergent (such as dishwashing liquid), rinsed with ultrapure water, sprayed with pure ethanol, and wrapped in towel paper to protect the surface and edges from damage.
The antibodies listed in the protocol are generally specific, and we rarely encounter nonspecific adsorption or high background signal. It should be noted that, if using heat-inactivated goat/lamb serum as the blocking agent, excessive heat inactivation (visible as the formation of a fluffy precipitate in the serum) should be avoided, as this precipitate is highly attractive to secondary antibodies and may be a source of high background.
It is possible, and sometimes desirable, to perform double-staining to visualize different populations of neuronal cells simultaneously. Such double-staining generally gives satisfactory results with the following combinations: Sox3/N-tub or Myt1/N-tub. However, double-staining of Sox3 and Myt1 is complicated by the fact that both antibodies are of rabbit origin. We have tested several antibody direct-labeling kits, but no satisfactory results were observed (low signal-to-noise level), possibly due to the lack of signal amplification from secondary antibodies. One possible approach to circumvent this problem would be to raise transgenic lines in Xenopus, as discussed below.
One of the main restrictions of this protocol is that, while it can sufficiently distinguish the two main pools of neuronal cells, namely the Sox3-expressing neural stem cell pool and the Myt1-expressing differentiated neurons, it lacks the ability to reveal different sub-populations of differentiated neurons. Such sub-populations, which include, but are not limited to, primary motor neurons, interneurons, and sensory neurons, are usually characterized by their differentially expressed marker genes [28][29][30] . As mentioned above, recent advances in multicolor in situ hybridization combined with antibody-based immunofluorescence detection in Xenopus embryos may fill this gap and reveal these sub-populations of differentiated neurons for further investigation 31 . Alternatively, as has been demonstrated in mouse models, it would also be desirable to raise a more comprehensive set of cell type-specific antibodies to distinguish such different cell populations in Xenopus.
It is also worth mentioning that, as an addition to this method, recent advances in optical imaging and image analysis, such as multiphoton microscopy, 3D reconstruction, and segmentation, can also be applied after the initial assessments to achieve more comprehensive observations of Xenopus oocytes as well as early embryos, particularly in a live setting 32,33 . Therefore, to track the proliferation, differentiation, and movement of neuronal cells in live animals, it would be desirable to establish one or more transgenic lines that harbor fluorescent proteins driven by cell type-specific promoters to allow live observation of such cell populations in early Xenopus embryos. The establishment of an X. laevis line with the neuro-specific β-tubulin promoter driving tauGFP and its applications have provided a nice example 23,34 . With the full promoter sequences of both sox3 and myt1 characterized in vertebrates [35][36][37] , it should be relatively easy to establish additional transgenic lines in Xenopus, which should contribute extensively to both the Xenopus community and the broader field of primary neurogenesis research.
Disclosures
The authors have nothing to disclose.
Experience and Cost of Pediatric Gastroenteritis to Families: A Survey of Malaysian and Vietnamese Parents
Introduction

Gastroenteritis is one of the leading causes of pediatric morbidity and mortality worldwide [1-3]. In terms of causative organisms in children, approximately 70% of episodes of gastroenteritis are caused by viruses, followed by 10% to 20% by bacteria and less than 10% by protozoa [2,4-7]. The common causative organisms are rotavirus, nontyphoidal Salmonella, Campylobacter species, Shigella species and Escherichia coli [4-7]. Each year, an estimated two billion cases are reported globally with approximately 440,000 annual deaths, particularly affecting children below the age of five in developing countries [1,8]. Furthermore, gastroenteritis is the second most common cause of death in children in this age group after pneumonia [1]. Among the common causative organisms, rotavirus gastroenteritis causes approximately 111 million episodes of gastroenteritis per year requiring home care, 25 million clinic visits, and 2 million hospital admissions.

Studies in Asia have shown that rotavirus contributes 43% to 60% of the diarrheal admissions in Vietnam, 32.9% in Taiwan and 24% to 55% in Malaysia [11-13]. Young children between 6 months and 3 years old have been reported to have a higher prevalence rate of rotavirus gastroenteritis [12,14,15]. In terms of gastroenteritis in general, studies indicate that the peak incidence of gastroenteritis-related hospital admissions in Malaysia occurs in the rainy season [12], generally peaking at the beginning and end of each year [13], although the months may differ between studies.

Taking into consideration the economic burden and costs associated with gastroenteritis, studies have tended to focus on rotavirus-associated gastroenteritis, and this has been reported in several studies [9,10,16,17]. In Vietnam, the annual economic burden of rotavirus gastroenteritis was estimated at USD 3.1 million in medical costs, USD 685,000 in non-medical costs and USD 1.5 million in indirect costs [9]. In Malaysia, such an economic study has not been performed, but a study by Chai and Lee estimated that the cost incurred for a hospitalization for rotavirus gastroenteritis was USD 194 [18]. The direct medical costs per annum associated with rotavirus gastroenteritis were estimated at USD 10 to 16 million in Taiwan and USD 57 million in Japan [10,17]. In the larger context of developing countries, a study by Rheingans et al. estimated the cost of rotavirus gastroenteritis in developing countries in general and the subsequent cost-effectiveness of rotavirus vaccination [19]. Within this context, Malaysia and Vietnam were both classified as upper-middle-income countries [19]. According to the Rheingans study, at $7.50 per dose the cost-effectiveness of rotavirus vaccine was USD 291 and USD 329 per DALY in Vietnam and Malaysia, respectively [19].
Most studies on economic impact have focused on rotavirus infections. Yet, the economic impact of gastroenteritis specifically on families in South East Asia has not been reported thus far. In this study we consider South East Asia as defined by the Association of South East Asian Nations, which includes Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam [20]. Within the wider Asian continent, a study conducted in the United Arab Emirates explored the impact of gastroenteritis [21]. The study, a survey among parents, found that an average of 1.4 days of work was missed by parents per episode of gastroenteritis [21]. It reported that 87% of parents had sought medical care for their affected children and that 10% of cases required admission to hospital, with an average length of stay of 2.6 days [21]. Furthermore, the average cost of a gastroenteritis episode to parents was USD 64 [21]. In addition, a study by Musawi et al., conducted in Bahrain and focused on rotavirus infections, reported a mean hospitalization duration of 4.1 days [22].
We conducted the present study to assess the impact of gastroenteritis as experienced by parents and to evaluate the cost to families and the productivity losses incurred when their children have gastroenteritis. Malaysia and Vietnam were selected as two representative South East Asian countries for our study. The two countries have some similarities and intercountry trade is common. However, there are also differences in terms of population size, language and economic development [23,24].
Study design and setting
This was a cross-sectional survey conducted over a period of 9 months, between August 2012 and April 2013, in Hanoi and Bac Giang in Vietnam and in the Klang Valley, Malaysia. The survey targeted parents or caregivers of young children below the age of 10. The survey used convenience sampling, and the study participants were recruited in residential areas as well as public spaces.
Data collection
The questionnaire was prepared in English and in Vietnamese. It contained a total of 22 questions encompassing socio-demographic details as well as the respondent's experience of their child's last gastroenteritis episode. This included the type of treatment, the duration and cost of hospitalization, the mode of payment of hospitalization fees, and the number of days taken off work to care for the child affected by gastroenteritis. Verbal consent was obtained from the respondents prior to completing the self-administered questionnaire.
Data analysis
Descriptive analysis was performed. For ease of comparison, costs were converted from local currency units (MYR and VND) to US dollars (USD) based on the currency conversion rates in May 2013 (1 USD = 3.06 MYR; 1 USD = 20,964.60 VND).
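For illustration only, the conversion described above amounts to dividing each reported cost by the May 2013 exchange rate. The sketch below is a hypothetical helper with made-up example amounts; it is not part of the study's analysis.

```python
# Exchange rates reported in the Methods (May 2013)
RATES_PER_USD = {"MYR": 3.06, "VND": 20964.60}

def to_usd(amount: float, currency: str) -> float:
    """Convert a cost in local currency to US dollars."""
    return amount / RATES_PER_USD[currency]

# Hypothetical example values
print(round(to_usd(1071.0, "MYR"), 2))    # -> 350.0 USD
print(round(to_usd(712796.4, "VND"), 2))  # -> 34.0 USD
```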
Demographics
Survey questionnaires were completed by 245 and 307 respondents from Malaysia and Vietnam, respectively. Table 1 presents the demographic information of the respondents. Of the total respondents, 92.0% (234 from Malaysia and 275 from Vietnam) were parents of children below the age of 10, and 72.8% were female. The majority of respondents from Malaysia (54.6%) reported having more than 5 children, whereas the majority of Vietnamese respondents (97.1%) reported having 1 or 2 children. The most common category of household income reported was USD 850 to USD 1749 (43.6%) among Malaysian respondents, while among Vietnamese respondents it was USD 350 to USD 849 (47.1%).
Clinical Presentation
Collectively, vomiting and diarrhea were the most commonly experienced symptoms during a gastroenteritis episode (Figure 1). In Vietnam, the majority of respondents reported diarrhea as the most common symptom, 74.8% during clinic visits and 78.0% during hospitalization (Figure 1). In contrast, Malaysian respondents reported vomiting as the most common symptom, 81.0% during clinic visits and 82.4% during hospitalization (Figure 1).
In addition, our study examined the distribution of reported gastroenteritis among children aged 0 to 5 years in both countries surveyed (Figure 2). In Malaysia, 63.3% of respondents reported that their child with gastroenteritis was between 0 and 1 year old, followed by 33.2% reporting that their child was between 1 and 2 years old; only a minority of respondents reported an episode in the 2-to-5-year or older age groups (Figure 2). In contrast, in Vietnam, gastroenteritis was most commonly reported in children aged between 2 and 5 years (38.5%) and in those less than 1 year of age (36.3%) (Figure 2).
Clinic Visit and Hospitalization
Overall, 143 (52.0%) respondents in Vietnam and 216 (89.3%) respondents in Malaysia reported that their child had experienced gastroenteritis requiring a visit to the clinic (Figure 3). In addition, 41.0% of respondents in Vietnam and 21.1% of respondents in Malaysia reported gastroenteritis episodes requiring admission to hospital (Figure 3).
Type of hospital
The majority of respondents in Vietnam (96.6%) reported that their child was admitted to a public hospital (data not shown). In contrast, Malaysian respondents reported an almost equal distribution of admissions to public (51.0%) and private (49.0%) hospitals (data not shown).
Duration of hospital admission
Most survey respondents (76.5% from Malaysia and 67.8% from Vietnam) reported a length of hospital stay of 1 to 5 days (Figure 4). This was followed by 23.5% and 27.1% reporting 6 to 10 days of hospital admission in Malaysia and Vietnam, respectively. Furthermore, 5.1% of Vietnamese respondents reported admissions of more than 11 days, while no Malaysian respondents reported hospital admissions of more than 10 days (Figure 4).
Hospitalization fees
In terms of the fees charged for gastroenteritis-associated hospitalization, the largest group of Malaysian respondents (41.2%) reported paying more than USD 350, whereas the most common fee category reported by Vietnamese respondents (45.9%) was USD 1 to USD 34 (Figure 5). Overall, the fee charged for admission to a public hospital for pediatric gastroenteritis was USD 1 to USD 34 in both countries (data not shown). In contrast, the most common gastroenteritis-associated hospitalization fee charged by private hospitals in Malaysia was more than USD 350 (data not shown). In Vietnam, this remained unchanged at USD 1 to USD 34 (data not shown).
Time away from work
Most respondents, 47.5% from Vietnam and 56.9% from Malaysia, reported taking 2 to 5 days off work to care for their affected child (Figure 6). A small proportion of respondents in Malaysia (3.9%) reported spending up to 10 days, while some respondents in Vietnam (3.4%) reported needing more than 10 days away from work to care for their affected child (Figure 6).
Discussion
We found that there were greater differences than similarities. One similarity was that the age range of children affected by gastroenteritis was usually between 0 and 2 years. This is consistent with the common age ranges reported in previous studies in Malaysia, 6 to 17 months [12], and in Vietnam, 13 to 24 months [25]. Another similarity was in the days taken off work by parents or caregivers when their children had gastroenteritis. The majority of parents from both countries reported 2 to 5 days away from work to care for their sick child. This number of days is similar to that found by Howidi et al. in the United Arab Emirates, as mentioned above [21].
Between Vietnam and Malaysia there were differences in the reported presentation of gastroenteritis, treatment-seeking patterns, type of hospital and length of hospitalization associated with pediatric gastroenteritis. A greater proportion of respondents in Malaysia reported that their child required a clinic visit rather than hospital admission. In contrast, hospital admissions due to gastroenteritis were more common in Vietnam. The Malaysian findings corroborate those of Poo et al., who reported that 77% of the patients recruited for a hospital gastroenteritis study had sought a clinic visit before hospitalization [26]. Related to the duration of admission, some respondents in Vietnam reported needing more than 10 days away from work, whereas none reported this in Malaysia. We speculate that, due to socioeconomic circumstances or other reasons, a child's condition in Vietnam may be more severe at the time of hospital admission. The reason for this difference could not otherwise be elucidated from the survey and may be investigated further in a future study.
We also noted some differences in access to the health system between the two countries surveyed. From the data collected, we observed that public hospital admissions were more common in Vietnam than in Malaysia. This is an interesting finding considering that public hospitals providing highly subsidized care are widespread throughout Malaysia. This could be due to a lack of representativeness of our sample. Yet it may also raise the possibility that, when it comes to the care of their children, parents are willing to pay for what is perceived as higher quality or greater comfort and hence opt for private care despite the higher cost. The length of hospitalization was similar in both countries, with the greatest proportion of respondents reporting that their child was hospitalized for 1 to 5 days. A recent study by Goh et al., performed in a tertiary hospital in Sabah, Malaysia, also reported a similar common range of hospitalization duration, between 1 and 6 days [27]. This length of time is significant considering that in many cases both parents may be working. Furthermore, given the large number of people working in the informal sector or for daily wages in these two countries, the loss of a week's work could be a significant burden on a family's income. In Malaysia, the percentage of people working in the informal sector varies from less than 10% in some states up to approximately 15% [28]. In Vietnam, this is estimated to be as high as 82% [29]. Therefore, the days spent not working may have a worse impact on families in Vietnam if more parents there earn daily wages.
We found considerable variation in the hospitalization fees charged for gastroenteritis-associated hospital admission of children in Malaysia and Vietnam. The current study observed that the cost of hospitalization per episode was substantially lower in Vietnam than in Malaysia. Related to this, the largest group of parents or caregivers in Malaysia paid more than USD 350 in hospitalization fees per episode, whereas the most common hospitalization fee in Vietnam was only between USD 1 and USD 34. For Vietnam, our findings are similar to a study by Fischer et al. in 2004, which reported that the estimated cost of diarrheal disease in Vietnam was USD 36 [9]. For Malaysia, the cost reported is slightly higher than in a prospective study by Chai in 2007 in a single academic center, which showed that the estimated out-of-pocket expenditure incurred by caregivers for a hospital admission due to rotavirus gastroenteritis was USD 194 [18]. This is likely due to the large number of survey respondents reporting that their child was admitted to a private hospital. On the other hand, we note that Malaysian respondents receiving public hospital care reported hospitalization fees of less than USD 34 only, which is similar to that reported by Vietnamese parents in our survey and lower than that reported by Chai. This can be explained since academic centre fees are generally slightly higher than those charged by Ministry of Health hospitals and lower than those of private hospitals. It is important to reiterate that our study was intended to report the fees paid by parents as users of the healthcare system. The fees charged by government-run hospitals, including academic settings, are greatly subsidized by the governments of both countries [30,31].
There were some recognized limitations in our study. Firstly, this was an exploratory survey and respondents were recruited using a convenience sampling method. Furthermore, the total number of respondents recruited was small. Hence, our survey participants cannot be considered representative of all parents in the two countries. In addition, our survey data were based on the respondents' recall of their child's most recent gastroenteritis episode. We collected categorical data to ease recall, but this has its inherent limitations. There would also have been disruption to other family activities and to the care of other children in the family, particularly when a hospital admission occurred; additional family helpers may have had to be called in to assist with family responsibilities while parents accompanied their child during admission. This cost was not taken into account in our survey. Aside from methodological issues, there were also important differences in income level, healthcare system and possibly culture between the two countries that may explain our findings. Despite these limitations, the findings from our study may provide useful insights into the experience of gastroenteritis from the parental perspective. Out-of-pocket costs incurred, time away from work and the related potential loss of income are important factors to consider. Our study may also indicate that parents are willing to pay the higher private hospital fees, at least in Malaysia, to obtain treatment for their child despite the cost. This study highlights that, despite economic development in these two countries, gastroenteritis still has a significant impact on families. We hope that the findings from our study add impetus to efforts to better educate parents on improving hygiene and preventing childhood gastroenteritis. Simple steps such as maintaining oral hydration through the consumption of liquids during an episode of gastroenteritis should be stressed to parents. For instance, breastfed infants should continue to breastfeed and older children should be coaxed to drink sips of water more frequently [32]. As rotavirus is one of the most common pathogens causing acute gastroenteritis, vaccination should be considered as a means to reduce the impact on families if the vaccination costs are affordable [19].
Conclusion
In conclusion, gastroenteritis continues to have a great impact on families even in the urban parts of Malaysia and Vietnam. A large majority of surveyed parents had experience of gastroenteritis in their child. Furthermore, 41.0% of Vietnamese parents and 21.1% of Malaysian parents reported that their child's condition required a hospital admission and that a parent needed 2 to 5 days off from work to care for the sick child. There were expected variations observed in the hospitalization costs between the two countries surveyed due to differences in health care systems. Despite the limitations of our study, we believe that it shows the important economic impact of childhood gastroenteritis on families. This calls for greater attention to prevention, including education and vaccination.
Figure 1: The symptoms experienced during a clinic visit or hospitalization reported by survey respondents in Malaysia and Vietnam.
Figure 2: The age range of children during the gastroenteritis episode reported by survey respondents in Malaysia and Vietnam.
Figure 3: Clinic visit or hospitalization due to pediatric gastroenteritis reported by survey respondents in Malaysia and Vietnam.
Figure 4: Length of hospital admission reported by survey respondents in Malaysia and Vietnam.
Figure 5: Fees paid for hospitalization reported by survey respondents in Malaysia and Vietnam.
Figure 6: Number of days parents were away from work reported by survey respondents in Malaysia and Vietnam.
Table 1: Demographic details of survey respondents.
An oligomeric state‐dependent switch in the ER enzyme FICD regulates AMPylation and deAMPylation of BiP
Abstract AMPylation is an inactivating modification that alters the activity of the major endoplasmic reticulum (ER) chaperone BiP to match the burden of unfolded proteins. A single ER‐localised Fic protein, FICD (HYPE), catalyses both AMPylation and deAMPylation of BiP. However, the basis for the switch in FICD's activity is unknown. We report on the transition of FICD from a dimeric enzyme, that deAMPylates BiP, to a monomer with potent AMPylation activity. Mutations in the dimer interface, or of residues along an inhibitory pathway linking the dimer interface to the enzyme's active site, favour BiP AMPylation in vitro and in cells. Mechanistically, monomerisation relieves a repressive effect allosterically propagated from the dimer interface to the inhibitory Glu234, thereby permitting AMPylation‐competent binding of MgATP. Moreover, a reciprocal signal, propagated from the nucleotide‐binding site, provides a mechanism for coupling the oligomeric state and enzymatic activity of FICD to the energy status of the ER.
Figure EV2. A Summary of deAMPylation rates of wild-type and mutant FICD proteins. Shown are deAMPylation rates of BiP V461F -AMP FAM by the indicated FICD proteins (at 0.75 µM or 7.5 µM) as detected by a change in fluorescence polarisation. Mean decay rate constant values ± SD from normalised raw data fitted to monoexponential decay functions of at least four independent measurements are presented. B, C The effect of FICD overexpression on a UPR reporter. (B) Flow cytometry analysis of wild-type (wt) and FICD −/− CHO-K1 CHOP::GFP UPR reporter cells transfected with plasmids encoding wild-type or the indicated FICD derivatives and a mCherry transfection marker. Shown are the median values ± SD of the GFP fluorescence signal of mCherry-positive cells from three independent experiments (fold change relative to wild-type cells transfected with a plasmid encoding mCherry alone). Note that only Glu234Gly-containing, deAMPylation-deficient FICDs activate the reporter. (C) Flow cytometry raw data of a representative experiment. D AMP production by FICD dimer interface or relay mutants is BiP dependent. AMP production in the presence of [α-32P]-ATP was measured by TLC and autoradiography (as in Fig 2B). Plotted below are mean AMP values ± SD from three independent experiments. E-G Characterisation of covalently linked S-S FICD A252C-H363A-C421S dimers, a trap for BiP-AMP.
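The decay rate constants summarised in panel A come from monoexponential fits to fluorescence-polarisation time courses. As a rough illustration of that analysis step only (not the authors' code), the following Python sketch fits a single-exponential decay to a synthetic, normalised trace; the time units, true rate and noise level are placeholder assumptions.

```python
# Illustrative only (synthetic data, not the published analysis): fitting a
# monoexponential decay to a normalised fluorescence-polarisation time course.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, k_obs, plateau):
    """Normalised signal decaying from 1 towards `plateau` with rate k_obs."""
    return plateau + (1.0 - plateau) * np.exp(-k_obs * t)

t = np.linspace(0, 120, 25)                      # time (min), assumed units
rng = np.random.default_rng(0)
signal = mono_exp(t, 0.05, 0.10) + rng.normal(0, 0.02, t.size)   # fake trace

(k_obs, plateau), cov = curve_fit(mono_exp, t, signal, p0=(0.01, 0.0))
k_err = np.sqrt(np.diag(cov))[0]
print(f"decay rate constant: {k_obs:.3f} +/- {k_err:.3f} per min")
```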
Figure EV3. FICD dimer relay mutants produce a pool of AMPylated BiP in vitro, and FICD AMPylation activity correlates with increased flexibility.
A Radioactive in vitro AMPylation reactions with the indicated FICD proteins at the indicated concentrations, [a-32 P]-ATP, and BiP T229A-V461F were analysed by SDS-PAGE. The radioactive signals were detected by autoradiography and proteins were visualised by Coomassie staining. Note the enhanced production of AMPylated BiP in the presence of dimer relay mutants, FICD K256S and FICD E242A , relative to the wild-type protein and a further increase in the production of AMPylated BiP by the monomeric FICD K256S-L258D double mutant relative to monomeric FICD L258D . Also note the auto-AMPylation signals of the monomeric FICDs detectable at high enzyme concentration. B, C In vitro deAMPylation of fluorescent BiP V461F -AMP FAM by the indicated FICD proteins (at 7.5 µM) measured by fluorescence polarisation. A representative experiment (data points and fit curves) is shown and rates are presented in Fig EV2A. Note the impaired deAMPylation activity of the monomeric FICD K256S-L258D double mutant in (C). D DSF T m analysis of wild-type (wt) and mutant FICD proteins in absence (Apo) or presence of ATP or ADP. Nucleotide concentrations in mM are given in parentheses.
Non-oxidised and oxidised forms of FICD A252C-C421S were assayed in buffer lacking reducing agent (which did not affect the T m of wild-type FICD; see source data). Shown are the mean T m values AE SD from three independent experiments. Note that FICD K256A is more stable than FICD K256S but less than wild-type FICD. Furthermore, the stabilities of oxidised and non-oxidised FICD C421S-A252C relative to the wild-type correlate inversely with their AMPylation activities (Fig 3B). For the wild-type FICD, FICD E242A , FICD G299S , FICD L258D and FICD K256S-L258D , in the apo state, the same data are presented in Fig 4E.
E
Plot of the increase in FICD melting temperature (ΔT m ) against ATP concentration as measured by DSF (derived from Fig 4F). Note the similarity in the K 1/2 s of ATP-induced T m increase (annotated) between FICD L258D (mFICD) and the wild-type dimer (dFICD). Shown are mean ΔT m values AE SD of three independent experiments with the best-fit lines for a one-site binding model. ▸ Figure EV4. Monomerisation allows ATP to bind to FICD in a mode conducive to BiP AMPylation.
A Mutation of the dimer relay residue Lys256 does not result in large conformational changes in FICD. Shown is the alignment (residues 213-407) of the molecules in the asymmetric unit. Structures are coloured as indicated. Glu234, ATP and Mg (where applicable) are shown as sticks. The inhibitory alpha helix (a inh ) and gross domain architecture is annotated. The FICD:Apo structure is from PDB: 4U04. B Electron density of both MgAMPPNP and the inhibitory Glu234, from monomeric FICD L258D co-crystallised with MgAMPPNP. Unbiased polder (OMIT) maps are shown in blue and purple meshes, contoured at 3.0 and 5.0r, respectively. All residues and water molecules interacting with MgAMPPNP are shown as sticks and coloured by heteroatom. Mg 2+ coordination complex pseudo-bonds are show in purple dashed lines. C Unlike wild-type FICD, monomeric FICD L258D binds ATP and ATP analogues in an AMPylation-competent conformation. The indicated structures and distances are shown as in Fig 5C, with ATP interacting residues shown as sticks and annotated. The position of the a-phosphate relative to Val316 in the FICD:ATP structure (see distances in right-hand side panel) would preclude in-line nucleophilic attack (see D, E). The inset is a blow-up displaying distances (i-iv) between the c-phosphates and Glu234 residues. A potentially significant difference in the Glu234 position between the FICD L258D :MgAMPPNP and FICD:ATP structures is apparent: hypothetical distance (ii) (2.68 Å, between Glu234 of FICD:ATP and AMPPNP c-phosphate of FICD L258D ) is less favourable than the observed distance (iii) (2.94 Å, between the AMPPNP c-phosphate and Glu234 of FICD L258D ). Note, His363 of FICD:ATP is in a non-optimal flip state to facilitate general base catalysis (see Fig 5B). D (i) The mode of ATP binding in wild-type dimeric FICD sterically occludes the nucleophilic attack required for AMPylation. Shown are semi-opaque 3 Å centroids centred on Pa and Val316 (Cc1). The putative BiP Thr518 nucleophile (depicted by the cross) is positioned in-line with the scissile phosphoanhydride (parallel to the plane of the paper) and 3 Å from Pa. This nucleophile position lies within the Val316 centroid (indicating a steric clash). For clarity, the FICD:ATP structure is overlaid with a thin slice of the FICD:ATP structure in the plane of the Pa-O3a bond. (ii) In the monomeric AMPylation-competent FICD L258D :ATP structure, the nucleophile lies outside the Val316 centroid in proximity to His363 (the general base). E The ATP a-phosphates of monomer or dimer relay mutants are in the same position as that competently bound to the AMPylation unrestrained dimeric FICD E234G .
Shown are all AMPylation-competent MgATP structures overlaid as in (C) and Fig 5C. The dimeric FICD E234G :MgATP (dark blue, PDB: 4U07) is also included as a reference for an active AMPylating enzyme. Figure EV5. AMPylation activity correlates with enhanced flexibility of the dimer interface and Glu234.
The residue-averaged B-factors for the four FICD complexes co-crystallised with ATP are shown [in (i-iv)] with a cold to hot colour code. They display a trend of increasing B-factors in the dimer interface and in the inhibitory glutamate region. This increase in B-factor is indicative of increasing flexibility and correlates with greater AMPylation activity of the corresponding FICD. All of these structures have almost identical dimer packing in their respective crystals and limited crystal contacts around the inhibitory helix (see Appendix Fig S2). Note that structure-averaged B-factors are comparable (see Table 1). For clarity, the TPR domain (up to residue 182) is not shown.
A Immobilised BiP responds allosterically to, is saturated by and retains ATP for the duration of BLI kinetic assays. BLI traces of the interaction between FICD L258D-H363A and immobilised biotinylated BiP T229A-V461F in different nucleotide states. Before exposure to FICD L258D-H363A immobilised BiP:Apo was subjected to two consecutive incubation steps (activation and wash) in the presence or absence of ATP as indicated. FICD association and dissociation steps (shown) were then conducted in a nucleotide (Nt.)-free solution. Note that BiP only interacts with FICD L258D-H363A when pre-saturated with ATP. Importantly, ATP pre-bound BiP retains its affinity for FICD L258D-H363A even if subsequently washed in a buffer lacking ATP (compare red and green traces). Thus, the majority of BiP retains its bound ATP for the duration of the kinetic experiment, experimentally uncoupling the effect of nucleotide on the FICD analyte from its effect on the immobilised BiP ligand. B Cartoon schematic of the BLI assays presented in Fig 6A and B. The pre-AMPylation complex is formed between the immobilised BiP:ATP "ligand" and the FICD "analyte". C The BLI association and dissociation traces from Fig 6B are shown. The immobilised biotinylated BiP T229A-V461F was saturated with ATP and then exposed to nucleotide-free FICDs. Dissociation was performed in absence or presence of ATP, as indicated. [mFICD H363A : FICD L258D-H363A ; dFICD H363A : FICD H363A ]. D Quantification of the biphasic exponential decay fitting of dissociation traces shown in Fig 6B. Relative ATP-induced changes of these kinetic parameters are given in Fig 6D. Shown are mean values AE SD from three independent experiments. Note the greater relative contribution of fast dissociation of mFICD in presence of ATP versus absence. E Representative BLI traces of an FICD dimer dissociation experiment. The legend indicates the form of unlabelled FICD incubated with the N-terminally biotinylated FICD (at a 100-fold molar excess, prior to biosensor loading) and also the ligand present in the dissociation buffer (at 5 mM) if applicable. F Representative dissociation data derived from (E). Probes loaded with biotinylated FICD incubated with mFICD H363A act as controls for non-specific association and dissociation signals, these were subtracted from the respective dFICD H363A traces in (E). Mono-exponential decay best-fit lines are also displayed; resulting off rates are shown in Fig 6E(ii). Step None ATP ADP Figure EV6.
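Panel D above describes dissociation traces quantified by biphasic exponential fits. As an illustrative sketch only (not the published analysis pipeline), the following Python snippet fits a two-phase exponential to a synthetic BLI dissociation trace and reports the fast and slow off-rates together with the fractional amplitude of the fast phase; all numerical values are invented placeholders.

```python
# Illustrative only (synthetic trace, not the published analysis): biphasic
# exponential fit of a BLI dissociation trace, as quantified in panel D.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_fast, k_fast, a_slow, k_slow):
    """Two dissociating populations with fast and slow off-rates."""
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

t = np.linspace(0, 600, 200)                     # dissociation time (s), assumed
rng = np.random.default_rng(1)
trace = biexp(t, 0.6, 0.05, 0.4, 0.002) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(biexp, t, trace, p0=(0.5, 0.1, 0.5, 0.001), maxfev=10000)
a_fast, k_fast, a_slow, k_slow = popt
print(f"k_fast = {k_fast:.3g} /s, k_slow = {k_slow:.3g} /s, "
      f"fast fraction = {a_fast / (a_fast + a_slow):.2f}")
```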
Prevalence and 10-Year Stability of Personality Disorders From Adolescence to Young Adulthood in a High-Risk Sample
Background With the implementation of the 11th edition of the International Classification of Diseases (ICD-11) in early 2022, there will be a radical change in the framework and process for diagnosing personality disorders (PDs), indicating a transition from the categorical to the dimensional model. Despite increasing evidence that PDs are not as stable as previously assumed, the long-term stability of PDs remains under major debate. The aim of the current paper was to investigate the categorical and dimensional mean-level and rank-order stability of PDs from adolescence into young adulthood in a high-risk sample. Methods In total, 115 young adults with a history of residential child welfare and juvenile-justice placements in Switzerland were included in the current study. PDs were assessed at baseline and at a 10-year follow-up. On a categorical level, mean-level stability was assessed through the proportion of enduring cases from baseline to follow-up. Rank-order stability was assessed through Cohen’s κ and tetrachoric correlation coefficients. On a dimensional level, the magnitude of change between the PD trait scores at baseline and at follow-up was measured by Cohen’s d. Rank-order stability was assessed through Spearman’s ρ. Results The prevalence rate for any PD was 20.0% at baseline and 30.4% at follow-up. The most frequently diagnosed disorders were antisocial, borderline, and obsessive-compulsive PDs, both at baseline and at follow-up. On a categorical level, the mean-level stability of any PD was only moderate, and the mean-level stability of specific PDs was low, except for schizoid PD. Likewise, the rank-order stability of any PD category was moderate, while ranging from low to high for individual PD diagnoses. On a dimensional level, scores increased significantly for most PDs, except for histrionic traits, which decreased significantly from baseline to follow-up. Effect sizes were generally low. The rank-order stability for dimensional scores ranged from low to moderate. Conclusion The findings indicate low to moderate stability of PDs and PD traits from adolescence to adulthood, which supports the growing evidence that categorical diagnoses of PDs are quite unstable. This, in turn, emphasizes the use of the upcoming ICD-11, which acknowledges PDs to be only “relatively” stable.
INTRODUCTION
The introduction of personality disorders (PDs) in the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (1) led to a substantial increase in empirical research and clinical interest (2). Yet, the advent of specific diagnostic criteria and a multiaxial approach that differentiated PDs (i.e., Axis II) from clinical syndromes (i.e., Axis I) set the stage for an ongoing controversy about the conceptualization and diagnosis of PDs. While PDs were defined as discrete, distinct categories, the shortcomings of such a categorical classification model became quickly apparent (3)(4)(5), and a shift to a more dimensional model, in which PDs are perceived as extreme variants of normal personality dimensions, became inevitable (6,7). With the upcoming 11th edition of the International Classification of Diseases (ICD-11) (8), the conceptualization of PDs is finally in transition, acknowledging PDs to be only "relatively" stable (9)(10)(11). For over decades, however, temporal stability consisted in one of the major distinguishing features between Axis I and Axis II disorders with the stability of PDs being substantially higher than for other mental disorders. Yet cumulative findings slowly appeared to question the stability of PDs, by suggesting considerable improvement over time (12,13). Thus, against the common assumption that PDs are "enduring, " "inflexible, " and "stable" the categorical stability of PDs has found to be not much higher than the stability of other mental disorders (14). Indeed, the Collaborative Longitudinal Study of PDs (CLPS) (15), which investigated the stability of schizotypal, borderline, avoidant, and obsessive-compulsive PDs over time, found that fewer than half of PD patients still met the criteria for a diagnosis after 2 years (16). With regard to borderline PD (BPD), 85% of the original sample had remitted after 10 years (17).
Nevertheless, as outlined in Morey and Hopwood's narrative review (18), temporal stability is a complex notion and has to be examined with respect to several factors. First, estimates tend to vary as a function of the type of stability being assessed. In the present study, the focus relies on the two types of stability that have been studied most frequently, namely mean-level and rank-order stability. Mean-level stability refers to the degree to which the average level of a PD or a PD trait in a given sample changes over time. Rank-order stability, on the other hand, refers to the consistency of an individual's relative ordering compared to others in a given sample, capturing, thus, the extent to which interindividual differences persist over time (18). Rank-order stability is high if the participants in a given sample maintain their ordering with regard to a specific PD or PD trait relative to each other over time, even if the sample as a whole increases or decreases with regard to that PD or PD trait. As such, rank-order changes are independent of mean-level changes (19). Second, estimates depend in part on the type of PD construct being assessed (i.e., categories or traits), suggesting higher stability for dimensional traits rather than for distinct categories (20)(21)(22). In their narrative review, Grilo and McGlashan (21) reported that the rank-order stability for meeting any PD diagnosis is fair to moderate, while individual PD diagnoses often exhibit lower stability. In contrast, dimensional scores tend to show slightly higher stability estimates. Durbin and Klein (20) confirmed these findings by showing that rank-order stability was low to fair for categorical PD diagnoses over a 10-year followup in depressed outpatients, while rank-order stability for dimensional PD traits was fair to moderate. According to Grilo et al. (23), mean-level stability, when assessed dimensionally, is generally lower than rank-order stability, which indicates that symptoms tend to decrease on average, but the rank-ordering of individuals within a defined sample remains roughly the same. Third, estimates may be affected by the assessment method being used to measure PDs. Self-report questionnaires tend to show a relatively higher stability than clinical interviews (20,24). For instance, the findings from Samuel et al. (22) for dimensional ratings showed significantly greater rank-order and mean-level stability for self-report questionnaires compared to clinical interviews. Findings regarding categorical PD diagnoses, in contrast, indicated comparable rank-order and mean-level stability. Finally, Morey and Hopwood (18) outlined how the clinical status and age range of a given sample are critical factors affecting PD stability estimates over time. Studies investigating the course of PDs, however, seem to focus mainly on adult samples, and studies on children and adolescents are scarce. This paucity of research has been in part due to the widespread reluctance to diagnose PDs in youth (25,26) and to the belief that personality in adolescence is inconstant and characterized by emotional outbursts and impulsive behavior (27,28). Existing literature, however, clearly states that PDs can be validly and reliably diagnosed among juveniles (27,28) and that the stability of PDs in adolescence is found to be comparable to the stability in adulthood (29,30).
Given the apparent number of developmental tasks [e.g., achieving emotional independence from parents, developing close relationships with peers, preparing for a professional occupation (31)], the transition from adolescence to adulthood seems to be a salient period for investigating the stability of PDs (18,32). To the best of our knowledge, however, only two studies have explicitly investigated the stability of PDs from adolescence to early adulthood. The Children in the Community (CIC) study investigated the stability of PD traits in a community sample ranging in age from 9 to 28 (33). Findings show that mean PD traits were highest in adolescence and declined linearly to adulthood, although effect sizes were small. Rank-order stability was found to be low to moderate, and cluster C traits seemed to be less stable than cluster A and B traits (34). Similarly, Bornovalova et al. (35), who investigated the stability and heritability of BPD in a community sample, showed a significant mean-level decline from age 14 to 24, although rank-order stability was high. A third study, namely the study from Chanen et al. (36), investigated the 2-year stability of PDs in older adolescent outpatients, aged 15-18 years, and found that 74% of those diagnosed with a PD at baseline still met the criteria for a PD at follow-up. Regarding dimensional ratings, both rank-order and mean-level stability ranged from low (PD NOS) to moderate (borderline, histrionic, and schizotypal) to high (antisocial and schizoid) (36).
Given the apparent role of developmental influences on the etiology of PDs, studies about the stability of PDs in high-risk samples are surprisingly lacking. The aim of the present study was therefore to examine the prevalence of PDs and their stability over a 10-year period from adolescence to adulthood in adolescents placed in residential care and juvenile-justice institutions. Due to multiple risk factors -such as childhood adversities (37), unfavorable parenting practices, low socioeconomic status, parental mental disorders (38), early mental-health problems (e.g., ADHD, oppositional defiant disorders, and attachment disorders), symptoms of depression and anxiety (39), substance use (40), self-harming behavior (41), psychopathic traits, and youth delinquency (42) -adolescents in residential care and juvenile-justice institutions are particularly at risk of developing a PD, and PD prevalence rates among them are high, ranging from 18 to 40% across studies (43)(44)(45). To account for conceptual and methodological factors, both categorical and dimensional mean-level and rank-order stability were investigated.
Study Design
Baseline Data was obtained from the longitudinal "Swiss Study for Clarification and Goal-Attainment in Child Welfare and Juvenile-Justice Institutions" [German: Modellversuch zur Abklärung und Zielerreichung in stationären Massnahmen (MAZ)] (46). The study was conducted between 2007 and 2011 with the primary aims of describing the mental health of children and adolescents in residential care and of investigating the effects of residential youth care over an approximately 1-year period in Switzerland. Child welfare and juvenile-justice institutions accredited by the Swiss Federal Ministry of Justice were invited to participate, of which 64 institutions agreed to take part. Juveniles who had been living for at least 1 month in 1 of these 64 included child welfare and juvenile justice institutions and possessed sufficient language skills in German, French, or Italian as well as sufficient intelligence scores (IQ > 70) were eligible for participation. The juveniles had been placed in the child welfare and juvenile-justice institutions by penal law, by civil law, or voluntarily. Both voluntary placement and placement by civil law were due to severe mental distress or precarious living conditions. Prior to participation, juveniles, parents or legal guardians, and social workers were asked to provide informed consent. Participants then completed computer-administered questionnaires as well as semistructured clinical interviews regarding mental health, psychosocial problems, and offending behavior. Assessment was conducted by trained psychologists and research assistants. Overall, 592 children and adolescents aged 6-26 years (mean age = 16.3 years) participated at baseline. Of those participants, 511 agreed to be contacted for a possible follow-up study. The study procedure was approved by the Ethics Committees on Research Involving Humans at the University of Basel and the University of Lausanne (Switzerland) and by the Institutional Review Board at the Ulm University (Germany).
Follow-Up
After a follow-up period of approximately 10 years, participants were reassessed in the study "Youth Welfare Trajectories: Learning from Experiences" [German: Jugendhilfeverläufe: Aus Erfahrung Lernen (JAEL)], which is currently being conducted to examine participants' psychosocial development over time and their transition out of care. Participants were contacted by postal mail, phone, email, and social media. Of the 511 participants, 231 (45.2%) agreed to participate in the followup. Despite considerable efforts, 8 (1.6%) participants could not be located, 121 (23.7%) could not be reached, 99 (19.4%) refused to participate, 44 (8.6%) did not provide informed consent, and 8 (1.6%) were deceased. A study flow-chart is provided in Supplementary Figure 1. An analysis of the sample attrition showed no significant differences in sociodemographic features (i.e., age, gender, number of former placements, and average duration in residential care) between the participants who took part in the follow-up and those who did not. The follow-up assessment consisted primarily of a set of online questionnaires that participants could complete from home. Participants were then invited to a face-to-face meeting, where they were reassessed using semistructured clinical interviews and semistructured qualitative in-depth interviews regarding mental health, psychosocial problems, and offending behavior. Assessment was conducted by trained psychologists, doctoral students, and research assistants. The study procedure was approved by the Ethics Committee Northwestern and Central Switzerland (EKNZ, Ref.: 2017-00718).
Participants
As the primary aim of this study was to investigate the stability of PDs from adolescence to adulthood, only participants with complete data from the Structured Clinical Interview for DSM-IV-TR Axis II Personality Disorders (SCID-II) (47) at baseline and at follow-up were included, which left a study sample of 138 participants. In addition, participants younger than 12 years of age or older than 18 years at baseline were excluded. The final sample included 115 participants (39.13% female) with a mean age of 15.82 (SD = 1.93; range 12-18) at baseline and a mean age of 25.89 (SD = 2.18; range = 21-30) at followup ( Table 1). Excluded participants revealed no statistically significant differences from participants at baseline in age
Sociodemographic Characteristics
Sociodemographic information -age, gender, number of former placements, average duration in residential care (i.e., total time spent in residential care and juvenile-justice institutions), and current mental-health treatment -was collected using a computer-based questionnaire at baseline and at follow-up. Participants' data on social welfare, disability, and unemployment insurance were only assessed at follow-up.
Mental Disorders
Mental disorders at baseline were assessed with the Schedule for Affective Disorders and Schizophrenia for School-Age Children -Present and Lifetime Version (K-SADS-PL) (48). The K-SADS-PL is a semistructured clinical interview that provides a reliable and valid measurement of DSM-IV diagnoses in children and adolescents. At follow-up, mental disorders were examined with the Structured Clinical Interview for DSM-5 Disorders -Clinician Version (SCID-5-CV) (49). The SCID-5-CV is a semistructured clinical interview based on DSM-5 diagnoses covering the most common diagnoses seen in clinical settings: depressive and bipolar disorders, schizophrenia spectrum and other psychotic disorders, substance-use disorders, anxiety disorders, obsessive-compulsive disorder, post-traumatic stress disorder (PTSD), attention-deficit hyperactivity disorder (ADHD), and adjustment disorder. In addition, the SCID-5-CV screens for 17 additional DSM-5 diagnoses. Items and diagnoses are scored based on dichotomous "present" and "absent" response options. The SCID-5-CV presents excellent reliability, with Cohen's κ ranging from 0.70 to 0.75 (50).
Personality Disorders
Personality disorders were assessed at baseline and at followup using the SCID-II (47). The SCID-II is a semistructured interview designed to yield PD diagnoses based on the DSM-IV and DSM-IV-TR (i.e., paranoid, schizoid, schizotypal, histrionic, borderline, antisocial, narcissistic, avoidant, dependent, obsessive-compulsive, depressive, and passive-aggressive PDs) and consists of 134 items, which are rated on a 3-point Likert scale (1 = absent, 2 = subthreshold, and 3 = threshold).
Since depressive and passive-aggressive PDs were removed in the DSM-5, both disorders were included in the PD NOS section in the following analyses. Categorical diagnoses are provided according to the specific diagnostic thresholds of PDs the DSM-IV. Dimensional scores are provided by summing the scores from each individual item for each separate PD.
Interrater reliability for categorical diagnoses varies from 0.48 to 0.98 (Cohen's κ), and internal consistency ranges from 0.71 to 0.94 (51). At baseline, the diagnosis of antisocial PD was assigned only if study participants were over 18 years old. Due to participants' young age, most of them could not be given the diagnosis. To anticipate later analyses of the stability of antisocial PD, the criteria for antisocial PD were nevertheless collected for participants both under and over 18 years old. The present analyses therefore include antisocial PD diagnoses in participants who were both younger and older than 18 years old at baseline.
Statistical Analysis
First, to determine the prevalence rates of PDs at baseline and at follow-up, we performed descriptive statistical analyses. Group comparisons regarding social benefits between participants with and without a PD were assessed at follow-up using χ2 tests. Second, categorical mean-level stability was measured by the proportion of enduring cases from baseline (t1) to follow-up (t2), that is, the number of participants meeting the criteria for a PD at both measurement times divided by the total number of participants with a PD at baseline. Categorical rank-order stability was calculated by Cohen's κ and tetrachoric correlations (r tet ). Cohen's κ is one of the most commonly used statistics to test diagnostic agreement between diagnoses assigned at baseline and at follow-up. A negative value indicates an agreement worse than expected or even a disagreement. A value between 0 and 0.20 represents a low agreement, and a value ranging from 0.21 to 0.40 a fair agreement. A κ between 0.41 and 0.60 indicates a moderate agreement, a κ between 0.61 and 0.80 a substantial agreement, and 0.81-1.0 a perfect agreement between two assessments (52). While Cohen's κ takes into account the possibility of an agreement occurring by chance, the tetrachoric correlation coefficient (r tet ) measures the mere relationship between binary baseline and follow-up scores under the assumption of bivariate normality (53). Similar to Pearson's r, a value between 0.1 and 0.3 is considered to be low, a value between 0.3 and 0.5 moderate, and a value between 0.5 and 0.8 high. Finally, for dimensional PD ratings, mean-level stability was measured by calculating mean trait scores and standard deviations at baseline and at follow-up, resulting in a mean-difference score. Cohen's d was used to estimate the effect size of the magnitude of change between baseline and follow-up scores. According to Cohen (54), an effect size of 0.20 is considered a small effect, an effect size of 0.50 a moderate effect, and an effect size of 0.80 a large effect. Dimensional rank-order stability was measured using Spearman's ρ (r s ), given a substantial positive skew. Additional explorative sensitivity analyses regarding the prevalence as well as the categorical and dimensional mean-level and rank-order stability of PDs according to specific age ranges at baseline (12–14 and 15–18 years) were conducted and are reported in the Supplementary Material.
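To make the analysis plan above concrete, the following Python sketch computes the same stability indices on invented toy data (it is not the study's analysis code): the proportion of enduring cases, Cohen's κ from the 2 × 2 diagnosis table, a tetrachoric correlation via the classical cosine approximation (dedicated packages use a full maximum-likelihood estimate), Cohen's d with a pooled standard deviation, and Spearman's ρ.

```python
# Toy illustration of the stability indices described above (synthetic data,
# not the study data). Assumes binary diagnoses and summed trait scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 115

# --- Categorical stability: PD present (1) / absent (0) at t1 and t2 --------
dx_t1 = rng.integers(0, 2, size=n)
dx_t2 = rng.integers(0, 2, size=n)

a = np.sum((dx_t1 == 1) & (dx_t2 == 1))   # diagnosed at both waves
b = np.sum((dx_t1 == 1) & (dx_t2 == 0))   # remitted
c = np.sum((dx_t1 == 0) & (dx_t2 == 1))   # new cases
d = np.sum((dx_t1 == 0) & (dx_t2 == 0))   # never diagnosed

enduring = a / (a + b)                    # mean-level stability (categorical)

p_obs = (a + d) / n                       # Cohen's kappa (rank-order, categorical)
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)

# Cosine approximation to the tetrachoric correlation
r_tet = np.cos(np.pi / (1 + np.sqrt((a * d) / (b * c)))) if b * c > 0 else np.nan

# --- Dimensional stability: summed SCID-II trait scores ---------------------
score_t1 = rng.normal(20, 5, size=n)
score_t2 = score_t1 + rng.normal(2, 4, size=n)

pooled_sd = np.sqrt((score_t1.var(ddof=1) + score_t2.var(ddof=1)) / 2)
cohens_d = (score_t2.mean() - score_t1.mean()) / pooled_sd   # mean-level change

rho, p_value = spearmanr(score_t1, score_t2)                 # rank-order stability

print(f"enduring cases: {enduring:.2f}, kappa: {kappa:.2f}, r_tet: {r_tet:.2f}")
print(f"Cohen's d: {cohens_d:.2f}, Spearman rho: {rho:.2f}")
```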
Prevalence Rates of Current Mental Disorders at Baseline and at Follow-Up
Findings regarding the prevalence rates of mental disorders at baseline and at follow-up are presented in Table 1. Participants with a PD at follow-up were significantly more likely to report receiving disability insurance than participants without a PD at follow-up [χ2(1) = 6.10; p = 0.010] (Table 2) [see (56)].
Prevalence Rates of PDs at Baseline and at Follow-Up
Findings regarding the prevalence rates of PDs at baseline and at follow-up are presented in Table 3. At baseline, 23 (20.0%) participants met the criteria for any PD. While 10 (8.7%) participants met the criteria for one PD diagnosis, 5 (4.3%) met the criteria for two, and 8 (7.0%) met the criteria for three or Participants with multiple PDs are displayed more than once. a Including participants younger than 18 years at baseline. b PD not otherwise specified (NOS).
more PD diagnoses. With a prevalence rate of 8.7%, borderline PD was the most common diagnosis, followed by antisocial PD (6.1%). Every participant with a PD at baseline also met criteria for another type of mental disorder at baseline. At follow-up, the prevalence rate for any PD was 30.4%. Overall, 18 (15.6%) participants met the criteria for only one PD, while 8 (7.0%) had two PD diagnoses, and 9 (7.8%) met the criteria for three or more PD diagnoses. The most frequently diagnosed disorders were antisocial (16.5%), borderline (7.8%), and obsessive-compulsive PDs (7.0%). At the cluster level, cluster B PD disorders were the most prevalent diagnoses, both at baseline (13.9%) and at followup (20.0%). All participants with a PD at follow-up, except one, met the criteria for another type of mental disorder.
Categorical Stability
Findings regarding the categorical stability of PDs from baseline to follow-up are presented in Table 4.
Mean-Level Stability
The number of enduring cases from baseline to follow-up could only be calculated for PDs diagnosed at baseline. Since no participants met the criteria for a schizotypal PD at baseline, mean-level stability could not be calculated for this disorder. Of the 23 participants who met the criteria for one or more PDs at baseline, 11 still met the criteria for a PD diagnosis at follow-up, resulting in a categorical mean-level stability of 47.8%. Overall, 12 of these 23 participants improved from baseline to follow-up by no longer meeting the criteria for a PD, while 24 of 92 participants with no PD at baseline met the criteria for a PD at followup. With only one participant out of 10 meeting the criteria for borderline PD at both assessments, the categorical meanlevel stability of borderline PD was low (10.0%). For schizotypal, histrionic, narcissistic, antisocial, avoidant, dependent, PD NOS, and passive-aggressive PDs, none of the participants met the criteria at baseline or at follow-up.
Rank-Order Stability
Cohen's κ and tetrachoric correlations (r tet ) could only be calculated for PDs for which there were participants who met the criteria at baseline or at follow-up or at both measurement points. Since no participants met the criteria for a schizotypal PD at baseline, and no participants met the criteria for a histrionic PD at follow-up, Cohen's κ and tetrachoric correlations (r tet ) could not be calculated for either of these disorders. With a Cohen's κ of 0.18 for any PD, the concordance between baseline and follow-up assessments was low. For individual diagnoses, κ was likewise low, except for schizoid PD (κ = 0.48). The tetrachoric correlation coefficient (r tet ) from baseline to followup for any PD was 0.33, which indicates a moderate rankorder stability. For individual PDs, rank-order stability ranged from low (borderline, avoidant, PD NOS, and passive-aggressive PDs) to moderate (paranoid, narcissistic, antisocial, obsessivecompulsive, and depressive PDs) to high (schizoid, dependent PDs). With a tetrachoric correlation coefficient (r tet ) of 0.60, rank-order stability was by far the highest for cluster A disorders.
Dimensional Stability
Findings regarding the dimensional stability of PDs from baseline to follow-up are presented in Table 5.
Mean-Level Stability
Overall, the mean-level scores of dimensional ratings increased for most disorders. The total score significantly increased from baseline to follow-up, although the effect size was small.
DISCUSSION
The aim of the current study was to examine the prevalence rates as well as the mean-level and rank-order stability of PDs over a 10-year follow-up in adolescents placed in residential care and juvenile-justice institutions. Both the stability of PD categories and the stability of dimensional PD traits were analyzed from adolescence to adulthood. The present findings indicated high PD prevalence rates in young adults with a history of child welfare and juvenile-justice placements, while PD diagnoses and PD traits exhibited only low to moderate stability over the 10year follow-up. At least three findings have to be discussed in more detail. First, PD prevalence rates substantially increased from adolescence to adulthood in this high-risk sample. While the normative course of BPD during adolescence is described as an increase of BPD pathology from puberty to young adulthood (57), most previous findings indicate a general decline in PDs and PD traits beginning in young adulthood (17). On the other hand, the prevalence rates of any PD as well as of specific PDs are consistent with the existing literature; the prevalence rates of PDs in institutionalized youth and young adults with a history of out-of-home care have been found to range between 18 and 40% across studies (43)(44)(45). A recent meta-analysis on mental disorders in incarcerated youth, which included 30 studies of 8,000 participants, indicated that antisocial and borderline PDs were relatively common in both males and females, while the prevalence of narcissistic and schizotypal PDs was comparably low (58). The current study seems to confirm this pattern, as antisocial and borderline PDs were among the most frequently diagnosed disorders, both at baseline and at follow-up. An increase in PD diagnoses from adolescence to adulthood in this sample, may, thus, be explained by the fact that many adolescents in residential care and juvenile-justice institutions have experienced severe childhood adversities (e.g., child abuse and neglect), which are shown to significantly contribute to the development of PDs (59,60).
For instance, the meta-analysis by Porter et al. (37) found that patients with borderline PD were over 13 times more likely to report childhood adversity than non-clinical controls. In addition, participants in this high-risk sample were likely to have experienced a range of other critical risk factors, such as unfavorable parenting practices, low socioeconomic status, childhood psychopathology, including high substance use, selfharming behavior, and youth delinquency, which have also been shown to be significantly associated with the development of PDs over time (38)(39)(40)(41)(42). Given the multifaceted nature of problems faced by juveniles in child welfare care and juvenilejustice institutions, the institutions often lack the professional and financial means to detect personality problems at an early stage, leading to delays in diagnoses and appropriate treatment. Delaying appropriate diagnoses, in turn, carries clinical risk, as evidence is accumulating that many of the harms associated with PDs occur early in the course of the disorder (61), and delay tends to lead toward greater impairments and poorer outcomes (62). Second, on the categorical level, the mean-level stability of any PD was only moderate, and the mean-level stabilities of specific PDs were low to moderate, except for schizoid PD (high). The concordance between baseline and follow-up assessments (i.e., Cohen's κ) was low, both for any PD and for individual PDs, except for schizoid PD (moderate). The rank-order stability (i.e., tetrachoric correlation (r tet ) of any PD category was moderate. For individual diagnoses, the rank-order stability ranged from low (i.e., borderline, avoidant, PD NOS, passive-aggressive PDs) to moderate (i.e., paranoid, narcissistic, antisocial, obsessivecompulsive, depressive PDs) to high (schizoid, dependent PDs). Regarding categorical mean-level stability, Chanen et al. (36) found a higher proportion of enduring cases (74%) compared to our findings (47%), which may be due to the shorter follow-up interval (2 years), the clinical status of participants (outpatients), and the narrower age range (15-18 years old) in their study. Indeed, the explorative age-sensitive analyses in the Supplementary Material revealed a higher categorical meanlevel stability for the participants who were 15-18 years old than for the participants who were 12-14 years old, although the stability still seems to be lower than that found by Chanen et al. (36). Categorical mean-level stabilities for individual PDs, however, were similar to those found by Chanen et al. (36). As such, participants may have changed specific PDs (from one PD category to another category) but did not discard the general diagnosis of a PD over time. Noteworthy, however, is that 24 (20.9%) participants first developed a PD in young adulthood. As the explorative age-sensitive analyses revealed, older adolescents (15-18 years) were more likely to meet a PD diagnosis first at follow-up than younger adolescents (12-14 years). This suggests that the onset of a PD indeed lies in later adolescence and that some of the present sample had not yet passed the critical age. Another explanation might be that PDs in (young) adolescence are more difficult to detect (63). In addition, older adolescents with a PD diagnosis between 15 and 18 years may have already had longer and more stable patterns of personality pathology, which, therefore, may be more predictive of unfavorable longterm outcomes. 
Nevertheless, a total of 12 (10.4%) participants improved from baseline to follow-up and no longer met the criteria for a PD in adulthood. While this could have been due to several factors (e.g., treatment or spontaneous remission), it is also possible that these participants no longer met the diagnosis of a PD but still exhibited PD symptoms. This, in turn, is a major concern of the categorical classification system, as it is based on an arbitrary diagnostic threshold that can be easily met (PD diagnosis) or not met (no PD diagnosis) by an increase or decrease in a single criterion.
Regarding categorical rank-order stability, the poor concordances between the baseline and follow-up assessments (i.e., Cohen's κ) for any PD and for individual PD diagnoses are consistent with those found by Chanen et al. (36). Findings regarding rank-order stability measured with tetrachoric correlations (r tet ) are difficult to compare across studies, since Cohen's κ remains the most common statistical measure for assessing the rank-order stability of categorical data. Overall, rank-order stability nevertheless seemed to be higher for specific PD diagnoses (i.e., paranoid, narcissistic, avoidant, dependent, PD NOS, and passive-aggressive PDs) than mean-level stability for these PD diagnoses, which suggests that even if the specific diagnoses did not remain the same over time, the rank ordering of participants with such a disorder appeared to be more or less the same. Both the rank-order stability and the mean-level stability of borderline PD were particularly weak, which indicates that on average, neither the category nor the rank ordering of participants with a borderline PD remained the same over time. While this may seem somewhat surprising, it is consistent with the narrative review from Bondurant et al. (64), which suggests that there is only little diagnostic borderline PD stability in adolescence. Interestingly, both Cohen's κ and tetrachoric correlation coefficients (r tet ) were considerably higher for older adolescents at baseline (15)(16)(17)(18) years) compared to younger adolescents (12-14 years old) at baseline (see Supplementary Table 2), which suggests that diagnoses in early adolescence should be treated with caution. Third, on the dimensional level, PD scores significantly increased for most of the disorders, except for schizotypal, avoidant, narcissistic, borderline, dependent, and passiveaggressive traits. Histrionic traits significantly decreased from baseline to follow-up. Effect sizes were generally low, except for antisocial and obsessive-compulsive traits. In contrast to our findings, Johnson et al. (34) found a significant mean-level decline in dimensional ratings from adolescence to adulthood, and Chanen et al. (36) found neither a significant increase nor a decrease in PD traits, except for paranoid (increase), antisocial (increase), and depressive PDs (decrease). One explanation is that the study by Johnson et al. (34) was conducted in a community-based sample, while the study by Chanen et al. (36) was conducted with older adolescent outpatients. The overall low to moderate dimensional rank-order stability in the present study was, however, consistent with the rank-order stability found in the studies by Johnson et al. (34) and Chanen et al. (36). This indicates that although mean-level PD traits tended to increase among adolescents in residential care and juvenilejustice institutions through adulthood, their individual rank ordering seemed to be less stable, emphasizing interindividual differences among participants. The additional explorative agesensitive analyses revealed higher dimensional mean-level and rank-order stability estimates regarding older participants (15-18 years old) than younger participants (12-14 years old). On the one hand, this highlights the presence of PD traits in early adolescence but on the other hand, suggests that PD diagnoses before the age of 15 should be interpreted with caution.
Strengths
The current study fills an important gap in the existing literature on the stability of PDs by explicitly presenting findings from adolescence to adulthood in a high-risk sample. Indeed, only a few studies have investigated the stability of PDs from adolescence to adulthood, and to the best of our knowledge, none have yet investigated the stability of PDs from adolescence to adulthood in adolescents in residential care and juvenile-justice institutions. Yet these adolescents have a particularly high risk of developing a PD due to a cumulation of risk factors. Considering the apparent role of developmental tasks in the transition from adolescence to adulthood in the development of PDs, this study is particularly valuable. Another strength of the current study is the long follow-up interval of 10 years. This is noteworthy given that young-adult care leavers (i.e., juveniles who left residential care or juvenile-justice institutions) are often difficult to locate, since many live in rather unstable and changing circumstances (65) or suffer from severe mental-health disorders (66).
Limitations
Nonetheless, the findings of this study must be interpreted under the consideration of some limitations. First, the relatively small sample size of 115 participants must be emphasized. As a result, the number of cases for categorical PDs were small, which made it difficult to adequately assess categorical stability and, therefore, the results must be interpreted with caution and replications including larger sample sizes are highly needed. Second, although no significant differences were found in the sociodemographic baseline data between included and excluded participants, a selection bias cannot be completely ruled out. Indeed, positive self-selection may occur in longitudinally followed-up high-risk samples, as participants with severe PDs may have declined to participate at follow-up or could not be located due to difficult life circumstances. On the other hand, it may be that participants who remained connected to mental health care were more likely to participate in the current follow-up study, which could explain the high prevalence rates of PDs. Third, the current study only allowed PDs to be assessed using a twomeasurement-point design. The amount of change between two measurement points is, however, not fully informative about the shape of each person's individual growth trajectory. In addition, a two-wave design cannot distinguish true change from measurement error (67) and is unable to evaluate the impact of regression-to-the-mean effects; that is, a statistical artifact making naturally occurring variations look like true changes when particularly large or small scores are followed by scores closer to the mean (68). Fourth, the dimensional approach taken within this study does not precisely correspond to the dimensions within the ICD-11, as the latter go beyond a mere sum of features within a categorical diagnosis. However, the dimensional approach adopted in the current study can be considered as a proxy, as no empirical evidence was yet available for the dimensional approach proposed by the ICD-11 at the time of the baseline study. Finally, while the present study explicitly focused on the stability of PDs from adolescence to adulthood, the cutoff age of 18 years at baseline is somewhat arbitrary, although adulthood is traditionally described as beginning at the age of 18 years. Indeed, based on psychosocial characteristics, recent studies have suggested that emerging adulthood is a period between adolescence (18 years) and full-fledged adulthood (25 years) (69). Specifically, with regard to etiological influences on the development of personality traits, Hopwood et al. (70) defined late adolescence at age 17, emerging adulthood at age 24, and young adulthood at age 29. Future studies should consider the prolongation of adolescence or emerging adulthood, which is currently taking place, especially in Western societies (69), in order to adequately assess the stability of PDs from adolescence to adulthood.
CONCLUSION
Three main findings can be drawn from the current study. First, the prevalence rates of PDs in young adults with a history of child welfare and juvenile-justice placements are high. Second, most categorical PD diagnoses and dimensional PD traits increased from adolescence to adulthood in our sample. Third, overall, the findings indicate low to moderate stability of PDs and PD traits from adolescence to adulthood, although the extent of stability differed according to the PD construct (i.e., categorical diagnoses or dimensional traits), the type of stability (i.e., mean-level or rank-order stability) and the specific PD and PD trait being assessed. As a result, the current findings are in accordance with the growing evidence, that PDs are not that stable. This in turn, emphasizes the current shift to a more dimensional model and highlights the use of the upcoming ICD-11 that acknowledges PDs as only "relatively" stable.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committees on Research Involving Humans at the University of Basel and the University of Lausanne (Switzerland) as well as the Institutional Review Board at the University of Ulm (Germany). The follow-up study procedure was approved by the Ethics Committee Northwestern and Central Switzerland. Written informed consent to participate in this study was provided by the participants and by the participants' legal guardian/next of kin if participants were under 18 years old.
ACKNOWLEDGMENTS
We would like to thank the youth welfare and juvenile-justice institutions involved in the study, participants, and their assigned caseworkers. In addition, we would also like to thank the study members for recruiting and assessing participants.
An analysis of the temperature field of the workpiece in dry continuous grinding
The recent model for heat transfer during intermittent grinding described in Skuratov, Ratis, Selezneva, Pérez, Fernández de Córdoba and Urchueguía (Appl Math Model 31:1039–1047, 2007) is considered. This model is particularized to the case of continuous dry grinding, where an alternative solution is obtained in the steady state. This alternative solution is analytically equivalent to the well-known formula of Jaeger (Proc R Soc NSW 76:204–224, 1942) for the steady–state temperature field created by an infinite moving source of heat and proves to be very useful for evaluating the maximum point of the temperature.
Introduction
Two relevant contributions on mathematical modelling of the grinding problem [1,2] use coupled systems of two-dimensional partial differential equations to calculate the evolution of the temperature fields in the wheel, the workpiece and the grinding fluid. These models are nonlinear in that they allow for temperature-dependent contact heat source and heat-exchanger coefficients. These works extend an improved model allowing heat-flux variation along the grinding zone [3]. Recently, a simplified mathematical model was proposed in terms of a two-dimensional boundary-value problem where the interdependence among the grinding wheel, the workpiece and the coolant was described by two-variable functions in the boundary condition [4]. An analytical expression for the evolution of the workpiece temperature field during intermittent wet grinding was given. The heat partitioning to other heat sinks [5] is then fully decoupled. A numerical analysis of this solution was presented in [6]. In the present paper, we show that the solution of the mathematical problem [4] for the particular case of dry continuous grinding and in the steady state is given by an expression that is analytically equivalent to the well-known integral formula of Jaeger [7]. The proof of the equivalence of both expressions, (3.6) and (4.8), is derived from the uniqueness of the solution of the problem (see Sect. 5.2).
Our setup is depicted in Fig. 1. The workpiece moves at a constant speed v_d and is assumed to be infinite along Ox and Oz, and semi-infinite along Oy. The plane y = 0 is the surface being ground. The contact area between the wheel and the workpiece is an infinitely long strip of width ε located parallel to the Oz-axis and on the plane y = 0. Both the wheel and the workpiece are assumed rigid. Although our equations below allow for the case of wet and intermittent grinding, we will mostly consider the case of dry and continuous grinding.
This paper is organized as follows. Section 2 presents the differential equations governing our problem, which are extracted from [4]. Section 3 gives a brief summary of the steady-state solution to these equations following [7]. The time-dependent thermal field is derived in Sect. 4 and analyzed in detail in Sect. 5. Emphasis is laid on determining the maximum of the workpiece temperature, which is attained in the steady state. Our conclusions are summarized in Sect. 6.
Problem formulation
Our problem is modelled by the heat equation in the presence of a convective term (2.1) [4], subject to the initial condition T(0, x, y) = 0 (2.2) and to a boundary condition (2.3) on the plane y = 0, in which b(t, x) is the heat exchange coefficient between the workpiece and the grinding wheel and d(t, x) is the heat generated by friction between the two. Under the assumption of dry grinding the workpiece is thermally insulated, so b(t, x) = 0; in this case d(t, x) equals the heat flux φ between the wheel and the workpiece. The heat flux φ across the plane y = 0 is given by (2.4), where n is the unit normal to the plane y = 0, pointing in the direction y > 0. Since v_d · n = 0, this reduces to expression (2.5).
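The displayed equations of this section are not reproduced in this excerpt. As a reading aid only, a plausible reconstruction of the governing system is sketched below in LaTeX; the convection-diffusion form and the reduced flux expression are standard, but the conductivity symbol k_0 and the omitted boundary condition (2.3) involving b(t, x) and d(t, x) are assumptions that should be checked against [4].

% Hedged reconstruction (reading aid, not reproduced from the paper):
% standard convection-diffusion form for a workpiece moving at speed v_d
% along Ox; k = thermal diffusivity, k_0 = thermal conductivity (assumed
% symbol, distinct from the Bessel function K_0).
\begin{align*}
  \frac{\partial T}{\partial t}
    &= k\left(\frac{\partial^{2} T}{\partial x^{2}}
             + \frac{\partial^{2} T}{\partial y^{2}}\right)
       - v_d\,\frac{\partial T}{\partial x},
   \qquad t>0,\; x\in\mathbb{R},\; y>0,
   && \text{(cf. (2.1))}\\
  T(0,x,y) &= 0,
   && \text{(cf. (2.2))}\\
  \phi(t,x) &= \bigl(-k_0\,\nabla T\bigr)\cdot\mathbf{n}\Big|_{y=0}
            = -\,k_0\,\frac{\partial T}{\partial y}\bigg|_{y=0},
   \qquad \text{since } \mathbf{v}_d\cdot\mathbf{n}=0.
   && \text{(cf. (2.5))}
\end{align*}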
The steady-state solution
We will first review the steady-state solution to (2.1)–(2.3) as given in [7]. In the absence of lubrication fluid, the workpiece is assumed to be thermally isolated from the environment, so we can set the heat-exchange coefficient to zero, Eq. (3.1). Moreover, in the case of continuous grinding, we may assume that the physical contact between the wheel and the workpiece extends over an interval x ∈ (0, ε), friction being zero outside; this can be modeled as in Eq. (3.2), where Q_s is the frictional heat-generation source term into the workpiece and H(x) is a step function. For the details of the parameter Q_s see references [6,8]. That is, the heat flux due to friction is localized exactly on the contact area (an infinitely long strip of width ε) between the wheel and the workpiece. The solution is constructed from the Green function corresponding to (2.1) when v_d = 0; for a point source of heat power Q_p located at (x', y', z'), this function is given by (3.3). The superposition principle is then applied to (3.3). This is first done for v_d = 0: the temperature field T(x, y) corresponding to an infinite linear source along the z-axis on the plane y = 0 is obtained by superposition of point sources such as (3.3). The motion of the source along the Ox-axis with speed v_d is then modeled by changing coordinates to a moving reference frame. In the stationary state, when t → ∞, one finds that an infinitely long, infinitely thin linear source causes a temperature field within the medium given by (3.4), where Q_l is the heat power of the infinite linear source and K_0(x) is the modified Bessel function of order zero [9, Sect. 9.6]. Further, applying the superposition principle to (3.4), one obtains the temperature field created by an infinitely long source of finite width ε moving exactly as above; if the band releases heat at a rate Q_s, the steady-state temperature field is given by (3.5). Equation (3.5) gives the steady-state temperature field created by an infinitely long flat band of width ε located on the plane y = 0 along the z-axis, within a medium that is infinite along the axis Oy. This implies that the temperature field (3.5) does not solve the boundary-value problem (2.1)–(2.3). In order to obtain the temperature field created by grinding a semi-infinite workpiece (above called the medium) in contact with a grinding wheel (above called the source), without lubrication, we multiply the temperature field (3.5) by a factor of 2, obtaining (3.6). Now (3.6) does solve the boundary-value problem (2.1)–(2.3) along with (3.1)–(3.2). We can verify that the thermal flux across y = 0, Eq. (2.5), is indeed nonzero only on the contact area. In order to compute the thermal flux corresponding to (3.6), we first introduce the dimensionless variables X := v_d x/(2k) and Y := v_d y/(2k). We can also extend the integral in (3.6) to the whole line if we include the appropriate Heaviside functions H(u), where the dimensionless contact width is v_d ε/(2k). This allows one to compute the derivative at y = 0 if one uses the representation of the Dirac δ-function given in (5.13), to be derived in Sect. 5.3; one then finds, as expected, that the flux is nonzero only on the contact strip.
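The display (3.6) itself is likewise missing from this excerpt. For orientation, a commonly quoted Jaeger-type band-source field with the same structure is given below; the normalization of Q_s (treated here as a flux per unit contact area), the conductivity symbol k_0, and the sign of the exponential (which depends on the direction of workpiece motion) are assumptions and may differ from the paper's conventions.

% Commonly quoted Jaeger-type form for a band source of width eps on a
% semi-infinite body (same structure as (3.6)); conventions are assumptions.
T(x,y) \;=\; \frac{Q_s}{\pi\,k_0}\int_{0}^{\varepsilon}
  \exp\!\left(-\frac{v_d\,(x-x')}{2k}\right)
  K_0\!\left(\frac{v_d}{2k}\,\sqrt{(x-x')^{2}+y^{2}}\right)\mathrm{d}x' .

In the dimensionless variables X and Y introduced above, this becomes an integral of exp(-(X - X')) K_0(sqrt((X - X')^2 + Y^2)) over the contact strip, a form that is convenient for numerical evaluation.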
The time-dependent solution
For the time-dependent solution to (2.1)-(2.3) we refer to [4]; what follows is a brief summary thereof. We apply a number of integral transformations on the variables x, y, t; this will turn the differential equation (2.1) into an algebraic equation that can be readily solved. Transforming back into the original variables will yield the solution.
Since t ≥ 0 and y ≥ 0, the natural transformation to apply to them is Laplace's; on the other hand, x varies over the whole real line R, so it is Fourier-transformed. Details can be found in [4]. The result is the sum of two contributions, T^(0) and T^(1), given by (4.1)–(4.3). Neither T^(0) nor T^(1) separately is a solution to the boundary-value problem (2.1)–(2.3), but their sum is. For convenience we introduce shorthand notation for the temperature field in the steady state, so that (4.2) simplifies accordingly.

It is useful to re-express (5.1) in the integral form (5.3). In the steady state, some manipulations reduce (5.3) to (5.5) or, equivalently, to (5.6), with the definitions given there. Passing again to the steady state, one may reduce (5.7) to (5.8). In the dimensionless variables X' = v_d x'/(2k), X = |v_d| x/(2k) and Y = |v_d| y/(2k) one finds (5.10). This integral contains the kernel function (5.11). Below we prove that this kernel behaves like a Dirac δ-function on the workpiece surface, in the limit as y = 0 is approached from above, Eq. (5.12); equivalently, (5.12) can be written as (5.13). Inserting (5.12) into (5.10), and using a standard argument establishing that the solution is unique, we conclude that, under the assumption of continuous dry grinding, the steady-state solutions of Sects. 3 and 4 must be equal, Eq. (5.19). This is stronger than our previous result (5.16), at the cost of imposing the additional assumption that lim_{x→±∞} T(x, 0) = 0, which was not required to derive (5.16).

5.3 Proof of (5.12)

In (5.11), for X ≠ X' the kernel has a finite limit as Y → 0+, whereas for X = X' it diverges because of the singularity of the Bessel function K_1 at the origin. This behavior is reminiscent of the Dirac δ-function. It remains to compute the integral of the kernel function (5.11) and to prove that it is finite, in order to conclude that N is indeed a multiple of the Dirac δ-function. Let us therefore consider this integral. Performing the change of variables χ = X − X' and remembering that the integrand is even, the integral can be restricted to a half-line; the substitution u = sqrt(Y² + χ²) then reduces it to a single integral in u, and finally setting u = Y cosh z yields an integral over z, Eq. (5.25). A useful expression for K_1 is given in [12, Eq. 5.7.11], which involves the logarithmic derivative of the Euler Gamma function. Substituting this expression, (5.27), into (5.25), the resulting integral is readily evaluated with the change of variables ζ = tanh z, and (5.12) follows immediately.
The maximal temperature as a function of time
We claim that the maximal temperature as a function of time is attained in the steady state. To prove it we observe that, since the integrand of (5.3) is positive and Q_s > 0, the integral (5.3) is a monotonically increasing function of time; hence (5.30) holds and the maximum value of T^(0) is attained in the steady state. Analogous arguments apply to (5.7), giving (5.31), so the maximum value of T^(1) is also reached in the steady state. Altogether, Eqs. (5.30) and (5.31) lead to (5.33), where the derivative of the error function has been used. To further manipulate (5.33) let us consider the auxiliary function (5.34). The change of variables h = (v_d σ/4k)² reduces (5.34) to (5.35), and we can limit our search for the zeroes of ∂T^(0)(x, 0)/∂x to the interval (0, ε). A detailed analysis, supplemented with Bolzano's theorem, establishes the following. When v_d > 0, there exists a unique value c_+ ∈ (0, ε) at which T^(0)(x, 0) attains a maximum. When v_d < 0, there also exists a unique value c_− ∈ (0, ε) at which T^(0)(x, 0) attains a maximum. Moreover, the fact that the error function is odd implies that c_− = ε − c_+.

[Figure 2: temperature field T(x, y) over a region around the contact zone, apparently (x, y) ∈ (−ε, ε) × (0, ε/10); the associated parameter T_0 = 300 K appears in Table 1.]
Numerical example
Very fast and simple numerical algorithms have been implemented in MATLAB to compute the maximum of the workpiece temperature in the steady state; see Appendix E of reference [10, http://hdl.handle.net/10251/4769, Chap. 5] for further details. These algorithms are based on formulas (5.5), (5.19) and (5.37) presented before. Assuming that the workpiece is a VT20 titanium alloy and using the values of the parameters tabulated in Table 1 (see references [4,6]), we find the maximum point c_+ = 0.0072 ε m and the maximum value of the temperature, reached on the workpiece surface, T_max = T_0 + 2T^(0)(c_+, 0) = 1042.23 K. These values agree with the temperature field shown in Fig. 2.
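A minimal numerical sketch of this kind of computation is given below in Python (the authors' implementation is in MATLAB and is not reproduced here). It evaluates a Jaeger-type surface temperature rise by quadrature and scans the contact zone for its maximum; all parameter values are illustrative placeholders rather than the VT20/Table 1 values, and the prefactor and sign conventions are the assumptions stated above.

# Sketch only (the paper's algorithms are in MATLAB): evaluate a
# Jaeger-type steady-state surface temperature rise for a band heat
# source of width eps moving at speed v_d, then scan the contact zone
# for the maximum. Parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0  # modified Bessel function K_0 (2nd kind, order 0)

k_diff = 6.0e-6   # thermal diffusivity [m^2/s]                  (placeholder)
k_cond = 15.0     # thermal conductivity [W/(m K)]               (placeholder)
v_d    = 0.5      # workpiece speed [m/s]                        (placeholder)
eps    = 2.0e-3   # contact width [m]                            (placeholder)
q_s    = 2.0e7    # frictional heat flux into workpiece [W/m^2]  (placeholder)

def temperature_rise(x, y=0.0):
    """Assumed Jaeger-type form; the exponent's sign fixes the motion direction."""
    def integrand(xp):
        r = max(np.hypot(x - xp, y), 1e-12)  # avoid the log singularity of K_0
        return np.exp(-v_d * (x - xp) / (2.0 * k_diff)) * k0(v_d * r / (2.0 * k_diff))
    value, _ = quad(integrand, 0.0, eps, limit=200)
    return q_s / (np.pi * k_cond) * value

# brute-force scan for the surface maximum, cf. the unique c_+ in (0, eps)
xs = np.linspace(1e-6, eps - 1e-6, 400)
rises = np.array([temperature_rise(x) for x in xs])
i_max = int(np.argmax(rises))
print(f"max surface rise ~{rises[i_max]:.0f} K at x/eps ~ {xs[i_max] / eps:.3f}")

A root-finding approach on the derivative, as in the paper's use of Bolzano's theorem, would locate c_+ more precisely than this brute-force scan.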
Conclusions
Our first conclusion is the equality between the (apparently different) solutions for the temperature field given in the literature (and summarized here in Sects. 3 and 4). Beyond summarizing the existing approaches to the boundary-value problem (2.1)–(2.3), we have presented a detailed analysis of its exact solution in the case of dry grinding. For this purpose, a new representation of the Dirac delta distribution has been developed, involving a modified Bessel function; this new representation of the Dirac delta has not been tabulated in the literature yet.
From an applied point of view, the computation of the maximum temperature is the principal goal. Accordingly, it has been proved that this maximum is reached in the stationary state within the contact zone on the workpiece surface, and it has been shown that the numerical computation of this maximum is quite simple within the approach given by the Samara-Valencia model.
The maximal surface temperature, and also the maximal temperature as a function of time, have been computed theoretically and numerically.
Our analysis can be generalized to the case of wet grinding [13], again under the assumption of a constant heat-transfer coefficient on the workpiece surface. One can expect the results concerning the maximal temperature to be quantitatively different from those of dry grinding. We expect to report on these issues soon.
Some Compositional and Kinetic Controls on the Bioenergetic Landscapes in Oceanic Basement
This contribution assesses the availability of catabolic energy for microbial life during water–rock reactions in the flanks of mid-ocean ridges, where basaltic and ultramafic rocks interact with circulating seawater. In addition to equilibrium thermodynamic computations, results for kinetic reaction paths are presented. In these calculations, it is assumed that dissolution of olivine and basalt glass controls the rates of hydrogen-forming reactions in ultramafic and basaltic rocks, respectively. The results suggest that all ocean crust basement rocks release enough hydrogen (H2,aq) to support hydrogenotrophic life at low water-to-rock ratios. Olivine dissolution rate control imposes a stronger effect on hydrogen production than phase equilibrium controls, indicating that magnetite formation is not a requirement for production of large amounts of hydrogen in ultramafic rocks. The formation of nontronite and celadonite is primarily responsible for the moderate amounts of hydrogen (H2,aq) expected in basaltic ridge flanks. Under conditions of the large seawater fluxes required to account for the great global convective heat flow in ridge flanks, however, hydrogen production in basaltic ridge flanks is insufficient for supporting hydrogenotrophic life. It is hence proposed that the role of Fe oxidation in basaltic ridge flanks is greater than previously suggested. A standing stock of 2.4 × 10²⁸ cells may be supported by Fe oxidation in basaltic ridge flanks, equivalent to about 10% of the sedimentary deep biosphere. The size of a hydrogenotrophic biomass within the ocean crust is more difficult to estimate because the rates and processes of hydrogen release are insufficiently constrained. In any case, hydrogenotrophy in the ocean crust should be of key importance only in olivine-rich basement rocks and in sedimented ridge flanks with low time-integrated seawater fluxes.
INTRODUCTION
Seawater flows in aquifers within the seafloor at rates so large that it takes only a few hundred thousand years to process the entire volume of the oceans through the permeable ocean crust (e.g., Elderfield and Schultz, 1996; Fisher, 2005). This tremendous flux, coupled with exchange reactions between the crust and the circulating seawater, is critical in global budgets of ocean-lithosphere exchange. Important types of reactions include the removal of Mg²⁺ and HCO₃⁻ by minerals (smectite, carbonate) that fill fracture and void space within the crust. Alteration of basaltic glass releases Ca²⁺ and SiO₂ from the crust into the oceans, where these components are taken up in algal and protozoan tests. These processes play a central role in the silicate-carbonate loop of the Earth's carbon cycle (e.g., Arvidson et al., 2006). Alteration reactions also consume oxygen, which makes them a crucial sink for oxidizing power that is continuously produced in the carbon cycle by burial of organic matter (Hayes and Waldbauer, 2006). In addition, the ocean crust may harbor microbial life, which possibly impacts these global-scale carbon and redox budgets. Perhaps more importantly, microbial activity within the ocean crust likely affects the rates and pathways of reactions governing exchange between seawater and oceanic basement.
Most of the seawater circulation takes place in ridge flanks, where temperatures are low and the rates of water-rock reaction are slow. The low to moderate temperatures in ridge flank settings and the sluggish kinetics of abiotic water-rock reactions allow chemolithoautotrophic microorganisms to harness the Gibbs energy (ΔrG) associated with redox reactions during alteration. An initial appraisal of the potential size of biomass living in these rocky habitats suggests that the cell numbers may resemble those in the sedimentary deep biosphere (Bach and Edwards, 2003).
The principal energy sources (electron donors) are divalent iron (Fe²⁺) and sulfide (S²⁻) dissolved in basaltic glass or forming minerals in volcanic and mantle rocks. These reduced constituents of rocks are oxidized in alteration reactions. If the interacting fluid is oxygenated seawater, oxygen is consumed. Different electron acceptors (NO₃⁻, SO₄²⁻, HCO₃⁻, ferric hydroxide, etc.) are used in anoxic ridge flank environments (Boettger et al., 2013). Tectonically denuded olivine-dominated rocks (dunites, peridotites, troctolites) are common along mid-ocean ridges, and as much as 50% of the seafloor created along slow- and ultraslow-spreading ridges may expose these rock types (Escartin et al., 2008; Cannat et al., 2010). Olivine-rich rocks undergo fairly rapid reactions with seawater (serpentinization), during which hydrogen (H2,aq) may form (McCollom and Bach, 2009; Nakamura et al., 2009; Klein et al., 2013). The reducing potential of micromolal quantities of H2 is large enough to drive electron transfers to all electron acceptors, including CO2. Again, these transfers are sluggish in the abiotic world at low temperatures, but H2 is turned over very quickly by microorganisms under these conditions (e.g., Hoehler et al., 2001). The high hydrogen yields of serpentinization are contrasted by the very low H2 production related to radiolysis in the subseafloor. But even such small quantities of H2 are utilized by aerobic microorganisms in energy-starved sedimentary environments with exceedingly low fluxes of reducing power (Blair et al., 2007).
In this communication, I examine some energetic constraints pertinent to the potential abundance and functioning of microbial life in the oceanic basement (Orcutt et al., 2011;Edwards et al., 2012), where basalt and peridotite are the principal sources of reducing power. In the kinetics calculations, I focus on reactions involving basaltic glass and olivine, as rate data are available for these two substrates, which are abundant phases in volcanic and ultramafic ridge flank environments, respectively.
MATERIALS AND METHODS
Thermodynamic reaction path modeling, using both arbitrary-kinetics and true-kinetics modes, was conducted to examine the roles of substrate composition, secondary mineral compositions, and rates of primary mineral dissolution on the extent and timing of hydrogen (H2,aq) release from water-rock reactions in the seafloor. The thermodynamic calculations of hydrogen yields were conducted using the EQ3/6 code (Wolery and Jarek, 2003) when solid solution compositions were permitted. Ideal solid solution compositions as in Klein et al. (2013) were allowed to form in these reaction paths, in which rock was added incrementally to decrease water-to-rock ratios. The REACT code of Geochemist's Workbench (GWB; Bethke, 1996) was used for computing most of the kinetic reaction path models in which basalt glass or olivine dissolution was the rate-limiting step. I used 500-bar databases compiled from SUPCRT92 (Johnson et al., 1991) for these model calculations. Included in the GWB database was mid-ocean ridge basalt glass as a phase, for which thermodynamic properties were calculated with entropy, volume, and free energy data from Chermak and Rimstidt (1989) and Holland (1989) using a polyhedral approach (cf. Oelkers and Gislason, 2001). The composition of the glass (Ca0.25174 Na0.0977 K0.0013 Fe²⁺0.1464 Fe³⁺0.01627 Mg0.2585 Al0.3844 Si1.0000 O3.3566) was calculated from average oxide concentrations of mid-Atlantic Ridge basalt reported by Klein (2004). The same basalt composition, added as a "special reactant", was used in the EQ6 model run presented in Figure 1. Also included were secondary Fe-phases, such as Fe-Al celadonite, ripidolite, and goethite (Wolery, 2004). Basalt glass dissolution rates of 10⁻¹⁵ mol cm⁻² s⁻¹ at 25 °C (Oelkers, 2001) and 10⁻⁷ mol cm⁻² s⁻¹ at 90 °C (Daux et al., 1997) were used to estimate a 10 °C rate of 10⁻¹⁶ mol cm⁻² s⁻¹. An olivine dissolution rate of 10⁻¹⁴ mol cm⁻² s⁻¹ at 25 °C and neutral pH was taken from Pokrovsky and Schott (2000). A pH dependency of reaction rates was not applied. Arrhenius parameters for dissolution of olivine, orthopyroxene, and clinopyroxene were provided in Seyfried et al. (2007). The peridotite composition used in the EQ6 thermodynamic calculations was assumed to be 70% olivine, 25% orthopyroxene, and 5% clinopyroxene, with XMg set to 0.9 in all these phases. A specific surface area (SSA) of 1 m² g⁻¹ was assumed in all kinetics calculations, as it may serve as a good approximation for the SSA of fresh basaltic crust (Nielsen and Fisk, 2010).
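To illustrate how the per-area rate constants and the assumed SSA enter the kinetic runs, a back-of-the-envelope conversion to bulk rates is sketched below in Python; this is only an illustration of the bookkeeping, not the EQ3/6 or GWB calculations themselves.

# Illustrative bookkeeping only (the reaction paths were run in EQ3/6 and GWB):
# convert a surface-normalized dissolution rate (mol cm^-2 s^-1) and a
# specific surface area (m^2 g^-1) into a bulk rate per kg of rock per year.
SECONDS_PER_YEAR = 3.156e7
CM2_PER_M2 = 1.0e4

def bulk_rate_mol_per_kg_yr(rate_mol_cm2_s, ssa_m2_g):
    area_cm2_per_kg = ssa_m2_g * CM2_PER_M2 * 1000.0  # surface area of 1 kg of rock
    return rate_mol_cm2_s * area_cm2_per_kg * SECONDS_PER_YEAR

# rate constants quoted in the text, with SSA = 1 m2/g
olivine_25C = bulk_rate_mol_per_kg_yr(1e-14, 1.0)  # ~3.2 mol per kg per yr
glass_10C   = bulk_rate_mol_per_kg_yr(1e-16, 1.0)  # ~0.03 mol per kg per yr
print(f"olivine, 25 C     : {olivine_25C:.2f} mol per kg rock per yr")
print(f"basalt glass, 10 C: {glass_10C:.3f} mol per kg rock per yr")
# an SSA of 3.4 m2/g (partially altered crust) scales all rates by 3.4x
print(f"olivine, 25 C, SSA 3.4: {bulk_rate_mol_per_kg_yr(1e-14, 3.4):.1f}")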
It should be noted that the presented results of predicted hydrogen levels have large uncertainties due to poorly known thermodynamic properties for some secondary minerals and largely unknown concentration-activity relations in the solid solution phases at low temperatures. The effect of variations in pressure and primary mineral compositions is small by comparison. The calculation results can hence be applied to seafloor systems with <5,000 m water depth (i.e., 500 bar pressure) and to basalt and peridotite with compositions that deviate slightly from those used in the calculations. The time values in the true kinetics calculations are plagued by an even larger ambiguity, because the uncertainties in the rate constants and the SSAs of fractured rocks are very large. For instance, if an average SSA of partially altered oceanic crust of 3.4 m² g⁻¹ (Nielsen and Fisk, 2010) were used, all calculated rates would be 3.4 times faster. Therefore, the true kinetics calculation results merely have order-of-magnitude accuracy as far as predicted time scales are concerned.

FIGURE 1 | Equilibrium thermodynamic calculation of potential hydrogen supply by water-rock reactions and hydrogen demand by hydrogenotrophic microorganisms (Hoehler, 2004). Off-axis systems in basalt (NP = North Pond; JDF = Juan de Fuca ridge flank) are shown in gray, and the Lost City Hydrothermal Vent Field (LHVF) is also shown. Calculations were conducted with EQ3/6 along a path that had water-to-rock ratios decrease from infinity to two as temperatures increased from 2 to 120 °C.
Equilibrium Thermodynamic Computations of Hydrogen Release
The first set of computations examines the differences in hydrogen-generating potential between basalt and peridotite, the two most abundant rock types in the oceanic basement. The reaction paths were selected such that recharge of seawater into basement and concomitant heating of the water is represented. Consequently, temperatures go up along the reaction paths as water-to-rock ratios decrease. The results of these computations are compared with the hydrogen demands for maintenance and growth of hydrogenotrophic microbial communities (Hoehler, 2004). The results presented in Figure 1 indicate that basalt-seawater reactions in off-axis environments can provide enough hydrogen for maintenance of hydrogenotrophic communities. Peridotites are predicted to produce hydrogen levels that are 3 to 4 orders of magnitude greater than during basalt-seawater interactions under identical conditions of temperature and water-to-rock ratio. These quantities of hydrogen are sufficient to allow growth of hydrogenotrophic communities across a large range of temperatures.
Kinetic-Thermodynamic Computations of Hydrogen Release
A number of computations were conducted to examine the temporal evolution of water-rock systems, assuming that the dissolution of mafic minerals in peridotite and of glass in the basaltic crust is the rate-limiting step in the overall reactions.
Production of hydrogen during low-temperature interactions between seawater and peridotite is examined in Figure 2. The hydrogen concentration in the intergranular fluid of peridotite is predicted to increase in a step-wise fashion, from concentrations <1 nM in the first years to concentrations >10 µM after more than 50 years. An interesting discrepancy develops in the period 5 to 50 years between a model run in which serpentine solid solutions were allowed and another run in which solid solutions were suppressed. These results suggest that the formation of serpentine with ferric iron dissolved in the tetrahedral and octahedral sites may facilitate rapid generation of hydrogen.
Hydrogen production is also strongly dependent on temperature and is predicted to change from sluggish at 10 °C to rapid at 110 °C (Figure 3). These predicted fast rates at temperatures of 50 °C and greater are inconsistent with experimental results (Mayhew et al., 2013), indicating either that dissolution rates of olivine are over-predicted at higher temperatures by the model or that processes other than olivine dissolution are rate-limiting.
Another set of reaction paths was computed to investigate the role of the nature of the secondary Fe(III) mineral phase in hydrogen production (Figure 4). The interesting observation here is that the type of secondary Fe(III) mineral is only of secondary importance if olivine dissolution is the rate-limiting step. These results suggest that magnetite is not required to explain hydrogen production. More oxidized phases like hematite and goethite can cause just as much hydrogen production in the kinetically controlled initial phase of peridotite-water interaction. Hydrogen production is predicted to be somewhat retarded when oxides are not allowed to form and Fe(III)-bearing serpentine (cronstedtite, Mg2Fe(II)SiFe(III)O5(OH)4) forms instead.
Basalt glass is predicted to dissolve more slowly (10⁻¹⁵ mol cm⁻² s⁻¹) than olivine (10⁻¹⁴ mol cm⁻² s⁻¹ at 25 °C). It is also expected to yield overall lower amounts of aqueous hydrogen upon water-rock reaction when compared with olivine-rich lithologies (Figure 1). The predicted evolution of hydrogen in waters interacting with basalt glass at 10 and 30 °C is shown in Figure 5. Although slower than for peridotite (cf. Figure 3), basalt glass is predicted to produce nanomolal to micromolal quantities of aqueous hydrogen in the course of 100,000 years.
Basaltic rocks often have olivine crystals distributed within a glassy mesostasis. An additional set of reaction paths was computed to determine the difference between olivine-bearing glassy basalt and pure basalt glass in the initial 1000 years of alteration. The results (Figure 6) show that a moderate proportion of olivine (10%) in a glassy basalt does increase the predicted hydrogen production markedly relative to olivine-free basalt, although it falls short of that predicted for an olivine rock (dunite). The increased hydrogen production is due to the faster dissolution rate of olivine compared to the glass. If the two phases dissolved at the same rate, the presence of olivine in the basalt would not increase hydrogen production at all, since ferroan smectite (and not serpentine and oxide) is predicted to form.

FIGURE 2 | Predicted hydrogen yields during serpentinization of peridotite, considering dissolution rates of olivine and pyroxenes (EQ6). The upper curve represents results of a calculation that allowed serpentine solid solutions (serp-ss) to form; these were not allowed to form in a second calculation, represented by the lower curve. Talc formation early in the reaction path is from loss of silica dissolved in seawater to the rock. Note that the curves merge when magnetite is predicted to become stable.
A comprehensive reaction path model was computed for low-temperature (10 °C) alteration of basalt glass over the course of 10 million years (Figure 7). The model predicts that nontronite (Na0.33Fe(III)2Al0.33Si3.67O10(OH)2·nH2O) and celadonite (KAlFe(III)Si4O10(OH)2) drive most of the hydrogen production. Predictions of the quantities of hydrogen in ridge flanks depend heavily on the assumed fluxes (time-integrated water-to-rock ratios). High water-to-rock ratios (e.g., open circulation) do not result in noticeable accumulations of hydrogen in volcanic ridge flank settings (Figure 8).
DISCUSSION
I examined water-rock reactions and energetic implications for related microbial life in three different ridge flank (sensu lato) habitats: (i) open circulation of oxygenated seawater (e.g., the North Pond site; Edwards et al., 2012;Orcutt et al., 2013), (ii) closed-system circulation of suboxic to anoxic seawater (eastern flank of the Juan de Fuca Ridge; Fisher et al., 2005), and (iii) seawater interaction with mantle peridotite in fracture zones and off-axis oceanic core complexes (e.g., Kelley et al., 2001). The effects of variable rock composition (e.g., basaltic glass vs. olivine-phyric basalt vs. peridotite) are examined as well as the consequences of variable dissolution and precipitation reactions.
Dissolved hydrogen (H2,aq) has long been known to be one of the most potent energy sources for chemolithoautotrophic microorganisms in the deep sea (e.g., Edwards et al., 2005). It has been demonstrated that different rock types have strongly variable abilities to release hydrogen upon water-rock reaction. It has been shown that peridotites can generate hundreds of millimoles of hydrogen per kg of water in the course of high-temperature (200-300 °C) serpentinization reactions (e.g., McCollom and Bach, 2009). The results presented above show that peridotites have much greater potential for producing hydrogen than basalts. This difference may account for the fact that isotopic evidence for sulfate reduction is much more commonly found in mantle peridotites (e.g., Shanks, 1998, 2003; Alt et al., 2007) than in basalt. The hydrogen contents of intergranular fluids during serpentinization are high enough under a large range of conditions, even at low temperatures, to allow for hydrogenotrophic sulfate reduction. The drive for hydrogenotrophic sulfate reduction is smaller in basalt systems, but may still facilitate maintenance of sulfate-reducing communities.

FIGURE 5 | Hydrogen production predicted for basalt-seawater interactions at 10 and 30 °C (GWB). Kinetics is controlled by dissolution of basalt glass with an SSA of 1 m² g⁻¹ (see text).

FIGURE 6 | Dissolution-controlled kinetic reaction paths for pure olivine (dunite) and basalt glass in comparison with an olivine basalt composed of 90% basalt glass and 10% olivine (Fo80). Calculations were conducted with GWB, and forsterite dissolution rates were used.

[Figure caption fragment] ... Myr of alteration at low water-to-rock ratios (10). The computation was conducted with GWB.
FIGURE 8 | Dependence of hydrogen release kinetics on the water-to-rock ratio. Note that at high water-to-rock ratios, hydrogen concentrations >1 nM are not predicted to develop. The computations were conducted with GWB for a temperature of 10 °C.
The sulfate-reducing bacteria identified in the Juan de Fuca Ridge flank (Lever et al., 2013) may hence be supported by hydrogen (Boettger et al., 2013), although it is also possible that organic carbon compounds are used as electron donor.
Equilibrium thermodynamic predictions of hydrogen yields during low-temperature water-rock reactions are of limited use, because these reactions commonly do not reach the state of equilibrium owing to sluggish reaction kinetics. We do not currently have a good understanding of what the rate-limiting steps in the production of hydrogen in water-rock reactions are. Most of the calculations presented above work on the assumption that olivine dissolution is an important rate-limiting step in the overall reactions that release hydrogen. However, the levels of hydrogen predicted to develop within a few years during peridotite-water interactions greatly exceed the hydrogen yields observed in experiments conducted by Mayhew et al. (2013) at temperatures of 55 and 100 °C. These authors suggested that the transfer of electrons from silicates to oxides is the rate-limiting step. If so, then apparently the dissolution of olivine, though part of the reaction sequence, may not control the rate of this overall transfer. More experiments like those by Mayhew et al. (2013) will be required to decipher the pathways and rates of hydrogen-generating reactions. In the meantime, I suggest that simplified kinetic-thermodynamic modeling may provide useful insights that go beyond what can be achieved by equilibrium thermodynamic computations.

FIGURE 9 | Predicted evolution of dissolved oxygen in the course of water-basalt interactions at different water-to-rock ratios. These ratios correspond to temperature anomalies (ΔT) required for circulating seawater in order to account for a global heat flow of 7 TW in ridge flanks. The computations were conducted with GWB for temperatures as indicated in the inset. Note that warm flanks with decreased ventilation by seawater are predicted to turn anoxic after a few million years. Ridge flanks with higher flux rates of seawater (w/r > 2000), in contrast, may stay oxygenated throughout their evolution.
The computational results presented above suggest that the nature of the Fe(III)-bearing minerals (e.g., hematite vs. goethite vs. magnetite vs. cronstedtite) is of secondary importance in terms of hydrogen yields at low temperatures when dissolution of olivine is the rate-limiting step (Figure 4). This result is surprising, because it is commonly assumed that serpentinization of olivine and production of magnetite are required for high hydrogen yields. Of course, the equilibrium hydrogen concentrations are greatest when the fluid equilibrates with magnetite, brucite, and serpentine. But other reaction pathways that lead to the production of more oxidized phases like hematite and goethite are apparently able to produce quantities of hydrogen similar to those produced with magnetite, as long as the pace of the reaction is determined by the rate of olivine dissolution. On the other hand, should the rates of Fe(III)-mineral precipitation be slower than the rate of olivine dissolution, then the magnitude of the difference in the rates should play a dominant role in controlling hydrogen production. Figuring out those relative rate differences in the critical reaction steps is an important challenge for future experimental work.
The computations also show that when olivine in a basalt dissolves at a rate similar to that of basalt glass, it does not develop the same reducing power as it does during serpentinization, because ferroan smectite (and not serpentine plus ferric oxide) will form. Only if olivine reacts at rates that are an order of magnitude greater than those of the other phases will olivine-bearing basalts yield more hydrogen than basalt free of olivine. These results lend tentative support to the proposal by Lever et al. (2013), who suggested that alteration of olivine may have played an important role in producing hydrogen for sulfate-reducing bacteria in the Juan de Fuca Ridge flank system. However, better constraints on the relative rates of alteration of basalt glass and olivine are required to further test this idea.
The computations presented indicate that fresh basalt reacting with seawater may yield micromolal quantities of hydrogen at somewhat elevated temperatures and when water-to-rock ratios are small (e.g., 10). The quantities of hydrogen analyzed in the 60 °C Juan de Fuca system (1 µM; Boettger et al., 2013) can easily be produced in basement that is largely isolated from the open ocean, even at temperatures below 60 °C (Figure 5). Hydrogen production in basaltic ridge flank settings can yield levels high enough to support microbial life (i.e., quantities of several nanomoles per kg water; e.g., Hoehler et al., 2001) if time-integrated water-to-rock ratios are <1,000 (by mass). We can assess what this flux number implies in terms of alteration temperature, because we have a tight constraint on the amount of heat (about 7 TW) that is lost by circulation of seawater in ridge flanks (Stein and Stein, 1994). Most of that circulation takes place within the uppermost 300 m of basement, where permeability is high enough to facilitate fluid flow in the absence of large pressure gradients (Fisher and Becker, 2000). Given that 3 km² of seafloor are generated annually, roughly 2 × 10¹² kg yr⁻¹ of basaltic upper basement is newly subjected to ridge flank circulation. If this basement is exposed to a 1,000-fold greater mass flux of seawater over its lifetime, we arrive at 2 × 10¹⁵ kg of circulating seawater per year. For this mass flux (F), the circulating water must be heated by 21 °C (ΔT) to transport 7 TW of heat (Q), given a heat capacity of water (Cp) of 4184 J kg⁻¹ °C⁻¹ (F = Q/(Cp × ΔT); Elderfield and Schultz, 1996). Smaller fluxes of water would come with higher hydrogen concentrations in basaltic ridge flank systems, e.g., at a w/r of 100 (Figure 8), but then ΔT would have to be an unreasonable 210 °C to account for the 7 TW heat loss at a global scale. These results lead me to suggest that hydrogen generation in basaltic ridge flank crust is negligible on a global scale, although it may occur regionally in virtually closed circulation systems (e.g., at the eastern flank of the Juan de Fuca Ridge). Oxidatively altered basalt has an even lower potential of yielding hydrogen by hydrolysis reactions, but radiolytic hydrogen production may play a role in the maturing ocean crust (Türke et al., 2015; Dzaugis et al., 2016).
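The arithmetic behind this flux-temperature trade-off can be checked in a few lines; the sketch below uses only the values quoted in the text (7 TW, Cp = 4184 J kg⁻¹ °C⁻¹, and roughly 2 × 10¹² kg yr⁻¹ of new upper basement), and small differences from the quoted figures reflect rounding.

# Check of F = Q / (Cp * dT) for global ridge-flank circulation (text values).
SECONDS_PER_YEAR = 3.156e7
Q = 7.0e12                 # ridge-flank heat loss [W]
CP = 4184.0                # heat capacity of water [J kg^-1 degC^-1]
ROCK_MASS_PER_YR = 2.0e12  # new upper basement [kg yr^-1]

def water_flux(dT_degC):
    """Seawater mass flux [kg/yr] needed to carry Q at a temperature anomaly dT."""
    return Q * SECONDS_PER_YEAR / (CP * dT_degC)

def delta_T(flux_kg_yr):
    """Temperature anomaly [degC] implied by a given seawater mass flux."""
    return Q * SECONDS_PER_YEAR / (CP * flux_kg_yr)

# ~2.5e15 kg/yr, i.e. roughly 10^3 times the rock mass (the ~2e15 of the text)
F_21 = water_flux(21.0)
print(f"flux at dT = 21 C : {F_21:.1e} kg/yr (w/r ~ {F_21 / ROCK_MASS_PER_YR:.0f})")
# a tenfold smaller flux (w/r ~ 100) implies ~210 C, as stated in the text
print(f"dT at one tenth of that flux: {delta_T(F_21 / 10.0):.0f} C")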
How do these new assessments influence the validity of the bioenergetic calculations presented by Bach and Edwards (2003)? The iron oxidation rate estimated by these authors (1.7 ± 1.2 × 10¹² mol yr⁻¹) was recently confirmed by a newer compilation of iron redox state in upper ocean crust (Rutter, 2015). The bioenergetic parameters, e.g., a ΔrG of 66 kJ mol⁻¹ for iron oxidation and 292 kJ gC⁻¹ of cellular carbon, are also still valid. Bach and Edwards (2003) assumed that only half of the ferrous iron was oxidized to ferric iron by oxygen; the other half, they suspected, is oxidized by water and releases hydrogen. These authors calculated that an energy flux of 0.85 × 10¹² mol yr⁻¹ × 66 kJ mol⁻¹ ≈ 6 × 10¹³ kJ yr⁻¹ can fix 2 × 10¹¹ gC of biomass yr⁻¹. The remaining Fe, they assumed, gives rise to the generation of 0.45 × 10¹² mol of hydrogen yr⁻¹, which could support twice that amount of biomass by iron reduction.
I suggest considering those 50% of ferrous iron oxidized by water as an absolute maximum value. The computations presented above indicate that hydrogen production from hydrolysis may actually be quite minimal in open ridge flank systems, where time-integrated fluid fluxes are large. The North Pond system in the western flank of the mid-Atlantic Ridge is such an open system, where hydrogen in basement fluids is <1 nM and oxygen is still present, although it is respired along the flow path of seawater (Orcutt et al., 2013). In those kinds of systems, iron oxidation by oxygen plays a much larger role, and at time-integrated water-to-rock ratios of >2,000 (corresponding to a ΔT of 11 °C), oxygen is not even the limiting compound (Figure 9). In other words, more oxygen is fluxed through those open ridge flanks than is required to account for the amount of Fe oxidation observed. If we assume that all iron in basaltic basement was oxidized by oxygen, the amount of biomass potentially fixed by iron oxidation in basaltic ridge flank aquifers is up to two times higher (i.e., 4 × 10¹¹ gC of biomass yr⁻¹) than suggested before. In computing the standing stock of cells (using a maintenance energy of 200 J gC⁻¹ yr⁻¹ and a cell weight of 26 fg; Whitman et al., 1997) that could be supported by the maximum energy flux related to aerobic iron oxidation (12 × 10¹³ kJ yr⁻¹), we arrive at 6 × 10¹⁴ gC or 2.4 × 10²⁸ cells. This is as much as 10% of the sedimentary biomass (Kallmeyer et al., 2012), indicating that iron oxidation in the basaltic aquifer remains one of the key potential drivers supporting microbial communities in the basaltic basement.
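The standing-stock arithmetic can be reproduced directly from the quantities given in the text; the short sketch below only verifies the multiplication, not the underlying bioenergetic assumptions.

# Standing stock supported by the maximum aerobic Fe-oxidation energy flux
# (all values as quoted in the text).
ENERGY_FLUX_KJ_PER_YR = 12e13       # kJ yr^-1
MAINTENANCE_J_PER_GC_YR = 200.0     # J per gC per yr
CELL_MASS_GC = 26e-15               # 26 fg C per cell (Whitman et al., 1997)

biomass_gC = ENERGY_FLUX_KJ_PER_YR * 1.0e3 / MAINTENANCE_J_PER_GC_YR
cells = biomass_gC / CELL_MASS_GC
print(f"supported biomass : {biomass_gC:.1e} gC")   # ~6e14 gC
print(f"standing stock    : {cells:.1e} cells")     # ~2.3e28 (2.4e28 in the text)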
Hydrogen plays a role where ultramafic and other olivine-rich rocks interact with seawater. This is the case in oceanic detachment fault settings, transform faults, bend faults, and in sedimented ridge flanks such as the eastern flank of the Juan de Fuca Ridge and the Costa Rica Rift Zone. Globally, a hydrogen-fuelled biosphere in the oceanic basement may be as large as, or larger than, the cold basaltic ridge flank system that runs primarily on Fe oxidation. In aging basaltic ridge flanks, Fe oxidation will be slowed as alteration rinds armor the fresh glass and slow down reactions. Strong enrichments of U and K in the alteration rinds, however, give rise to hydrogen production from radiolysis, which may become increasingly important relative to Fe oxidation in old seafloor (Türke et al., 2015). For a better assessment of the size of a potential hydrogenotrophic subseafloor biosphere, a more detailed understanding of the critical reaction rates and pathways is needed.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and approved it for publication.
Host tolerance and resistance to parasitic nest flies differs between two wild bird species
Abstract Hosts have developed and evolved defense strategies to limit parasite damage. Hosts can reduce the damage that parasites cause by decreasing parasite fitness (resistance) or without affecting parasite fitness (tolerance). Because a parasite species can infect multiple host species, determining the effect of the parasite on these hosts and identifying host defense strategies can have important implications for multi‐host–parasite dynamics. Over 2 years, we experimentally manipulated parasitic flies (Protocalliphora sialia) in the nests of tree swallows (Tachycineta bicolor) and eastern bluebirds (Sialia sialis). We then determined the effects of the parasites on the survival of nestlings and compared defense strategies between host species. We compared resistance between host species by quantifying parasite densities (number of parasites per gram of host) and measured nestling antibody levels as a mechanism of resistance. We quantified tolerance by determining the relationship between parasite density and nestling survival and blood loss by measuring hemoglobin levels (as a proxy of blood recovery) and nestling provisioning rates (as a proxy of parental compensation for resources lost to the parasite) as potential mechanisms of tolerance. For bluebirds, parasite density was twice as high as for swallows. Both host species were tolerant to the effects of P. sialia on nestling survival at their respective parasite loads but neither species were tolerant to the blood loss to the parasite. However, swallows were more resistant to P. sialia compared to bluebirds, which was likely related to the higher antibody‐mediated immune response in swallow nestlings. Neither blood recovery nor parental compensation were mechanisms of tolerance. Overall, these results suggest that bluebirds and swallows are both tolerant of their respective parasite loads but swallows are more resistant to the parasites. These results demonstrate that different host species have evolved similar and different defenses against the same species of parasite.
However, host defense mechanisms can be costly, and therefore hosts have to balance these investments with other important processes, including reproduction, migration, and foraging, to maximize their fitness (Graham et al., 2010, 2011; Lochmiller & Deerenberg, 2000; Sheldon & Verhulst, 1996; Van Der Most et al., 2011).
Additionally, host species body size can affect their tolerance to parasitism. Despite similar parasite densities (number of parasites per gram of host), nestling birds of larger-bodied host species are less affected by parasitic nest flies than those of small-bodied hosts, suggesting that larger hosts are better defended and more tolerant of parasites than smaller hosts (Heimpel, Hillstrom, Freund, Knutie, & Clayton, 2017; Knutie et al., 2016; McNew & Clayton, 2018). Smaller-bodied hosts have higher surface area to volume ratios and higher metabolic rates and therefore require more energy per gram of body mass than larger-bodied hosts (Schmidt-Nielsen, 1984). These traits could increase the cost of infection if the hosts are not able to find enough food resources to generate energy to allocate toward tolerance mechanisms, such as repairing damaged tissues or recovering lost resources, such as red blood cells. Although several studies have shown correlations between host fitness and parasite load (Careau, Thomas, & Humphries, 2010; Christe et al., 1996; Dudaniec, Kleindorfer, & Fessl, 2006), few field experiments directly compare host defenses between species against the same native parasite. One potential model system for studying how host defense mechanisms differ between species in response to the same parasite is the box-nesting bird-parasite system of eastern bluebirds (Sialia sialis) and tree swallows (Tachycineta bicolor) and their parasitic nest flies, Protocalliphora sialia (DeSimone, Clotfelter, Black, & Knutie, 2018; Hannam, 2006; Roby, Brink, & Wittmann, 1992). While adult flies are nonparasitic, the larvae live in the nest and feed nonsubcutaneously on the blood of nestlings (Boyd, 1951). Several studies report no detectable lethal effects of P. sialia on nestling survival of tree swallows and eastern bluebirds, while others report sublethal effects of the parasite, such as lower hemoglobin levels, lower body mass, and delayed fledging in parasitized nestlings compared to nonparasitized nestlings (Table 1). Despite these variable effects of parasitism on the two host species, parasite abundance differs between them: on average, tree swallows have 36.5 ± 6.5 parasites per nest and eastern bluebirds have 81.1 ± 11.5 parasites per nest (Table 1). However, mass of the host and clutch size can affect parasite load (Dudaniec & Kleindorfer, 2009; Dudaniec et al., 2006), and eastern bluebirds have greater body mass than tree swallows, while tree swallows generally have larger clutch sizes than bluebirds (Pinkowski, 1977b; Winkler et al., 2011). To control for clutch size and body mass differences between host species, parasite density (number of parasites per gram of host) can be calculated from previous studies (Table 1). We multiplied the average clutch size for each population by the average hatch mass of swallows (2.4 g) or bluebirds (3.8 g), which resulted in a total mass for the nest; average hatch mass was calculated from our Minnesota field site since these data are not available for most of the studies listed in the table. The average number of parasites published in each study was then divided by the total mass of the nestlings. The average parasite density in bluebirds is still higher than in swallows (Table 1; bluebirds: 4.36 ± 0.85 parasites per gram of nestling, swallows: 2.50 ± 0.49 parasites per gram of nestling). Based on these results, P. sialia either prefers bluebirds over swallows or each host species has evolved different defenses against the parasite.
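The density normalization described above is a one-line calculation; a sketch is given below in Python. The hatch masses are the values given in the text, whereas the clutch sizes are illustrative placeholders, since the per-study clutch sizes behind Table 1 are not reproduced in this excerpt.

# Parasite density = parasites per nest / (clutch size x average hatch mass).
# Hatch masses are from the text; the clutch sizes below are illustrative
# placeholders, not the per-study values used to build Table 1.
def parasite_density(parasites_per_nest, clutch_size, hatch_mass_g):
    return parasites_per_nest / (clutch_size * hatch_mass_g)

swallow  = parasite_density(36.5, clutch_size=6, hatch_mass_g=2.4)
bluebird = parasite_density(81.1, clutch_size=5, hatch_mass_g=3.8)
print(f"tree swallow    : {swallow:.2f} parasites per gram of nestling")   # ~2.5
print(f"eastern bluebird: {bluebird:.2f} parasites per gram of nestling")  # ~4.3

With these placeholder clutch sizes the values come out close to the grand means quoted above (2.50 and 4.36 parasites per gram); the exact figures depend on the clutch sizes reported by each study.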
The first goal of the study was to compare the effects of P. sialia on growth and survival of eastern bluebird and tree swallow nestlings in the same geographic location. Specifically, we experimentally manipulated P. sialia and then quantified growth metrics and fledging success of nestlings. Based on prior studies, we predicted that P. sialia would not significantly affect nestling growth and survival of bluebirds and swallows and that therefore both host species would be effectively defended against the parasite (DeSimone et al., 2018; Gentes, Whitworth, Waldner, & Fenton, 2007; Hannam, 2006; Thomas & Shutler, 2001). We then tested whether bluebirds and swallows had effective defenses against P. sialia. Previous studies found that eastern bluebirds have higher parasite densities compared to tree swallows (Table 1), and larger-bodied bird species, such as bluebirds, may be able to tolerate parasites more than smaller-bodied bird species (Heimpel et al., 2017; McNew & Clayton, 2018). Because swallows have lower parasite densities than bluebirds, we predicted that swallows would be more resistant to P. sialia than bluebirds.
For a potential mechanism of resistance, we quantified IgY antibody levels as a proxy of the immune response and then determined whether parasite abundance was related negatively to antibody levels (Owen et al., 2010). After the host is bitten, a series of immune pathways is activated by the host to induce the inflammatory response, leading to the production of IgY antibodies, which can bind to larval parasitic nest flies (DeSimone et al., 2018; Koop, Owen, Knutie, Aguilar, & Clayton, 2013; Owen et al., 2010). These immune molecules can negatively affect ectoparasites by causing edema (tissue swelling), which prevents the parasites from feeding from the capillaries, and damage to the parasite's tissue (e.g., via the release of proteolytic molecules from granulocytes). If swallows are more resistant to P. sialia than bluebirds, then we predicted that nestling tree swallows would mount a higher immune response than bluebirds and that the immune response would be negatively correlated with parasite abundance (DeSimone et al., 2018).

TABLE 1 | Relationship between Protocalliphora sp. and fledging success in eastern bluebirds and tree swallows across the United States. Only the tree swallow grand mean row is legible here: parasite abundance 36.5 ± 6.5 (9) and parasite density 2.50 ± 0.49 (9). Note: The types of studies were either experimental (E) or correlational (C) and found no relationship (0) or a negative relationship (−) between Protocalliphora spp. and fledging success. Parasite abundance is shown as the mean ± SE, with the number of nests in parentheses. Mean parasite density (number of parasites per gram of nestling) was calculated by dividing the mean parasite abundance by the average mass of nestlings in the nests from the study.

For mechanisms of tolerance, we quantified hemoglobin levels as a potential proxy for oxygenated red blood cell recovery (i.e., tissue repair), and parental provisioning rates to determine whether parents of parasitized nestlings were compensating for energy lost to the parasite. We predicted that parasitized bluebird nestlings would have hemoglobin levels similar to those of nonparasitized nestlings if they are able to recover oxygenated red blood cells as an effective tolerance mechanism.
Additionally, if parasitized bluebird nestlings could recover red blood cells, we predicted that hemoglobin levels would be similar across varying parasite densities. Alternatively, if parents increase feeding rates when nestlings are parasitized, the nestlings might be better able to tolerate the parasites, which would also be reflected in higher blood glucose levels in parasitized nestlings (Knutie et al., 2016).

Eastern bluebirds also build open-cup nests, which are made of grasses and/or pine needles, in secondary cavities (Gowaty & Plissner, 2015). The clutch size of eastern bluebirds ranges from three to seven eggs, which are incubated for about 13-14 days, and nestlings spend 16-22 days in the nest (Gowaty & Plissner, 2015; Pinkowski, 1975). As with swallows, bluebirds feed their nestlings by placing food items in the nestling's open mouth. For bluebirds, as with swallows, both parents will feed the nestlings, although the division of labor varies with geographic location.
| Experimental manipulation of parasites
Boxes were checked once a week for nesting activity. Once eggs appeared, nests were checked every other day until nestlings hatched. At hatching, the nestlings and the top liner of the nest cup (i.e., just enough material to provide a barrier between the insecticide and nestlings) were removed in order to treat the nest with either water (parasitized treatment), to allow for natural parasitism, or a 1% permethrin solution to remove all parasites (nonparasitized treatment) (DeSimone et al., 2018; Knutie et al., 2016). The treatment of the first nest for each species was determined by a coin flip, and subsequent nests were assigned by alternating treatments. In 2016, 12 nonparasitized and 11 parasitized swallow nests and six nonparasitized and seven parasitized bluebird nests were followed. In 2017, 13 nonparasitized and 16 parasitized swallow nests and nine nonparasitized and 11 parasitized bluebird nests were followed.
| Nestling growth and survival
Since swallow and bluebird eggs hatch asynchronously, we also determined the age of each nestling (0-2 days old) at the time of nest treatment by weighing them (to the nearest 0.1 g) with an Ohaus CS200-100 portable compact scale. When nestlings were ten days old, they were weighed (g) again, and tarsus length (mm), bill length (mm), and first primary feather length (mm) were measured using Avinet plastic dial calipers. They were also banded with a numbered USFWS metal band (Master's banding permit #23623). A small blood sample (<30 µl) was taken from the brachial vein of the nestlings. When nestlings were approximately 13 days old, the boxes were checked every other day from a distance (to avoid premature fledging) to determine fledging success and the age at which the nestlings fledged or died (nestlings >10 days old are not typically removed from the nest by the parents after they die; S.A.K., personal observation).
| Nestling hemoglobin and glucose
Whole-blood hemoglobin was measured using a HemoCue® Hb 201+ portable analyzer, and glucose was measured using a HemoCue® Glucose 201 portable analyzer. The rest of the blood was placed on ice for up to 3 hr until it was centrifuged for 3 min at 12,000 g at Itasca Biological Station. Plasma and red blood cells were then stored separately in a −20°C freezer.
| Parental behavior
In 2016, the amount of time that parents spent in the box and the frequency with which they fed their offspring were quantified between 0558 and 1335 hr. If more than one observation occurred in a day, the order of the nests was determined by a random number generator and/or a coin toss. Behavior was quantified when nestlings were 5 and 10 days old.
Nests were checked when the observer (K.M.G.) arrived at the nest box to make sure that the box was still occupied. Once the nests were checked, there was a 15-min waiting period before beginning the observation period, to reduce the impact of the disturbance; the observer remained at least 30 m from the nest box (Tripet & Richner, 1997). The observation periods lasted between 30 and 60 min (mean ± SE = 57.27 ± 1.17 min).
During the observation, we determined whether they held food in
| Quantifying parasites
Once nestlings died or fledged, nests were collected and stored in plastic bags. Nests were dissected and all larvae, pupae, and pupal cases were counted to determine total parasite abundance for each nest. Eclosed flies were collected and identified as P. sialia.
| Statistical analyses
Negative binomial and binomial generalized linear models (GLMs) were used to analyze the effects of parasite treatment and host species on parasite load (abundance and density) and on fledging success, respectively. For each host species, general linear mixed models (GLMMs) were used to analyze the effect of parasite treatment on nestling growth measurements, immune response, and blood glucose and hemoglobin levels, with nest as a random effect. We initially used year as a covariate for all models, but it was excluded because it did not account for a significant amount of variation. We performed log10 transformations to normalize the data distributions for first primary length, bill length, mass, hemoglobin, and glucose.
Since we had two days of behavioral observations in 2016 (when nestlings were different ages), GLMMs were used to determine the effects of treatment and nestling age on parental behavior for each species, with nest as a random effect. For the tolerance analysis, we determined the reaction norm between parasite load and host health (Simms, 2000); specifically, we used GLMs to determine the effects of parasite density and host species on fledging success and mean hemoglobin levels. Analyses were conducted in RStudio (2016).
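The analyses were run in R; purely as an illustration of the model structure (not the authors' code), equivalent models could be specified in Python with statsmodels as below. The data file and column names ("abundance", "treatment", "species", "density", "fledged") are hypothetical.

# Illustrative sketch only (the authors used R/RStudio). Negative binomial GLM
# for parasite abundance and binomial GLM for fledging success; column names
# and the input file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

nests = pd.read_csv("nests.csv")  # hypothetical: one row per nest

# parasite abundance ~ treatment + species (negative binomial errors;
# the dispersion parameter would normally be estimated rather than fixed)
abundance_glm = smf.glm(
    "abundance ~ treatment + species",
    data=nests,
    family=sm.families.NegativeBinomial(),
).fit()
print(abundance_glm.summary())

# tolerance reaction norm: fledging success ~ parasite density + species
# (treated here as a binary outcome per nest for simplicity)
tolerance_glm = smf.glm(
    "fledged ~ density + species",
    data=nests,
    family=sm.families.Binomial(),
).fit()
print(tolerance_glm.summary())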
Parasite treatment did not affect the amount of time parents spent in the box (χ² = 0.50, df = 1, p = .48). However, nestling age was correlated with the proportion of time spent in the box (χ² = 10.90, df = 1, p < .001). The effect of nestling age on the proportion of time spent in the box also varied across treatments (χ² = 9.60, df = 1, p < .01): parents spent more time in boxes when the nestlings were younger than when they were older, and they also spent more time in parasitized nests than in nonparasitized nests.
| DISCUSSION
We examined the effects of P. sialia on two different species of avian hosts across two breeding seasons. Protocalliphora sialia did not affect survival to fledging in either host species; however, bluebirds sustained twice as many parasites as swallows, which is consistent with the results of past studies (Table 1). Tree swallow nestlings produced an antibody response to P. sialia, which likely reduced parasite load (Figure 4). In contrast, bluebird nestlings did not produce a robust immune response to P. sialia. Both host species were tolerant to P. sialia at their respective parasite loads with respect to survival to fledging, since increasing parasite abundances did not result in a decrease in host fitness. However, we could not determine the mechanism of tolerance. Parasitized nestlings of both species had lower hemoglobin levels than nonparasitized nestlings and thus were not tolerant to the blood lost to the parasite (Figure 3). Furthermore, parents from parasitized nests were not provisioning their nestlings more than parents from nonparasitized nests, nor did we see differences in blood glucose levels between treatments in relation to provisioning rates.

Parasitized swallows and bluebirds did not effectively recover hemoglobin to nonparasitized levels, as found in other studies (Knutie et al., 2016; Morrison & Johnson, 2002; Råberg et al., 2007) (Table 3, Figure 3). Measuring micronuclei in red blood cells in the future would provide a proxy of whether any blood was recovered by the host. Additionally, the swallows and bluebirds might be able to quickly and effectively repair epithelial tissue damaged by the ectoparasite, subsequently reducing the potential for secondary infections or leaking of blood (Allen & Sutherland, 2014; Medzhitov et al., 2012; Uhazy & Arendt, 1986). This potential tolerance mechanism could be addressed in future studies by quantifying skin damage caused by the parasite and tracking the rate of repair.
While we did not see differences in feeding frequency between treatments in swallows, we found higher blood glucose levels in parasitized swallows compared to their nonparasitized counterparts. In bluebirds, there were no differences in feeding frequency or blood glucose levels between treatments. In other words, swallow parents were not increasing their feeding frequency to parasitized nestlings, yet the nestlings still had elevated blood glucose levels. Outside of resource provisioning, elevated blood glucose and associated glucocorticoid levels can also be a sign of stress. Studies have found evidence connecting parasite load to glucocorticoid and stress levels (Haond, Nolan, Ruane, Rotllant, & Wendelaar Bonga, 2003; Raouf, Smith, Brown, Wingfield, & Brown, 2006). Glucocorticoids can also regulate immune function; depending on their concentration, they can either enhance or inhibit processes such as inflammation (Cain & Cidlowski, 2017). Specifically, the increased blood glucose levels we detected in swallows could be a function of stress and/or another sign of an immune response. Further investigation is needed into how blood glucose levels within this system are influenced by parasites and how stress levels impact host defense mechanisms.
Parasite density was lower in swallows than bluebirds, suggesting that swallows are more resistant to the parasites than bluebirds.
Figure 4. Protocalliphora sialia-binding antibody response in bluebird and swallow nestlings from parasitized and nonparasitized nests in 2017. (a) Mean ± SE antibody response in eastern bluebirds and tree swallows for both treatments; swallows have a higher antibody response than bluebirds. (b) The relationship between parasite density and P. sialia-binding antibody response in eastern bluebirds and tree swallows within the parasitized treatment; within parasitized nests, the antibody response is negatively related to parasite abundance in swallows but not bluebirds.
Table (parental care behavior): proportion of time spent in the box was quantified from the total time observed; nestling provisioning was quantified as the number of feeding events per minute; values are mean ± SE, with the number of nests in parentheses.
Parasitized swallow nestlings produced an antibody response, which
was negatively related to parasite load, suggesting that swallow nestlings were able to resist the parasite up to a certain load. This antibody response was likely triggered by P. sialia feeding on the nestlings. After the host is bitten, tissue damage and the introduction of antigens from the parasite stimulate the release of inflammatory cytokines, which trigger innate immune cells to migrate to the damaged tissue (Owen et al., 2010). These cells then degrade the antigen with the help of the major histocompatibility complex, which activates helper T lymphocytes and the production of antigen-specific antibodies, such as IgY antibodies. Through repeated exposure, these antibodies can quickly migrate to the wound, bind, and degrade the antigens. This immune cascade can negatively affect ectoparasites by causing edema (tissue swelling), which prevents the parasites from feeding from the capillaries, and damage to the parasite's tissue (e.g., via the release of proteolytic molecules from granulocytes).
Interestingly, however, the immune response of nestling swallows did not differ between treatments, suggesting that nonparasitized birds also produced an immune response. The antibody isotype (IgY) that we quantified binds to P. sialia but is not specific to P. sialia. One possible explanation for why nonparasitized nestlings are producing an immune response is that there are other parasites in the system, such as endo- or intracellular parasites, that may not be affected by the experimental manipulation (Pedersen & Fenton, 2015; Shutler et al., 2004). Several studies have shown that a reduction of the target parasite resulted in an increase in a nontarget parasite species (Knowles et al., 2013; Pedersen & Antonovics, 2013). Swallows at other locations are infected with other parasites, such as the blood parasite Trypanosoma spp. (Shutler et al., 2004), which might not be as affected by the insecticidal treatment (Sholdt, Schreck, Mwangelwa, Nondo, & Siachinji, 1989). Host condition and resource availability may also shape antibody production. For example, food supplementation increased antibody production and parasite resistance in eastern bluebirds, which was possibly mediated by the gut microbiota of the host (Knutie, 2019).
Therefore, it is possible that swallows are feeding their nestlings food with higher nutritional value or have gut microbiota that better primes the development of the immune system, compared to bluebirds, but these ideas need to be tested in the future.
The differences in parasite density between the two species could also be influenced by the preference of the parasite.
Because swallows and bluebirds are different sizes, they differ in their metabolic rate and thus the rate of CO2 release, which might make one host more attractive than the other. Despite swallows requiring more energy per gram, bluebirds have a higher resting metabolic rate because they are larger, and they therefore release more CO2, making them more attractive to P. sialia than swallows (Chaisson & Hallem, 2012; Furness & Speakman, 2008; Lehane, 2005). Additionally, adult plumage color may also play a role in P. sialia preference for bluebirds (Lehane, 2005). Bluebirds have a different UV chroma coloration than swallows, which might serve as an attractant drawing P. sialia to the nest box (Bitton & Dawson, 2008; Liu, Siefferman, & Hill, 2007). Nest characteristics are another mechanism by which P. sialia might prefer bluebirds over swallows. For example, certain plants contain volatile compounds that may deter parasites (Dubiec, Góźdź, & Mazgajski, 2013). Both species incorporate grasses and pine needles into their nests (Gowaty & Plissner, 2015; Winkler et al., 2011), but the effect of the specific plant composition has not been explored.
Overall, our study suggests that nestling survival of bluebirds and swallows is relatively unaffected by P. sialia because the hosts can effectively defend themselves against the parasite. For example, both bluebirds and swallows can tolerate their respective parasite loads. However, swallows sustain fewer parasites per gram of body mass than bluebirds, which is likely because swallows resist the parasite with an immunological response. This resistance in swallows could arise because they are unable to tolerate parasite densities similar to those of bluebirds owing to their smaller body size, as smaller-bodied hosts probably suffer a higher cost of parasitism (Cardon, Loot, Grenouillet, & Blanchet, 2011; McNew & Clayton, 2018). Other studies have also found that bluebirds had higher parasite densities than swallows, but the effect of the parasite on the hosts differs based on location and year (Table 1). Our results suggest that different host species can defend themselves both similarly and differently against the same parasite, which is likely due to variation in host ecology and life history. Future studies are needed to determine if there is long-term interannual variation in these host-parasite relationships and whether environmental factors, such as precipitation, could affect them (Musgrave, Bartlow, & Fair, 2019).
Acknowledgments
We thank Terry Whitworth for identifying Protocalliphora sialia, Steve Knutie and Doug Thompson for building the nest boxes, and Itasca Biological Station (David Biesboer, Laura Domine, and Leslie Knoll) for logistical support. We also thank the following people for allowing us access to nest boxes on their property and their help in the field (in alphabetical order): Wade and Kelly Foy (Rock Creek
Continued efforts to translate diabetes cardiovascular outcome trials into clinical practice
Diabetic patients suffer from a high rate of cardiovascular events and such risk increases with HbA1c. However, lowering HbA1c does not appear to yield the same benefit on macrovascular endpoints, as observed for microvascular endpoints. As the number of glucose-lowering medications increases, clinicians have to consider several open questions in the management of type 2 diabetes, one of which is the cardiovascular risk profile of each regimen. Recent placebo-controlled cardiovascular outcome trials (CVOTs) have responded to some of these questions, but careful interpretation is needed. After general disappointment around CVOTs assessing safety of DPP-4 inhibitors (SAVOR, TECOS, EXAMINE) and the GLP-1 receptor agonist lixisenatide (ELIXA), the EMPA-REG Outcome trial and the LEADER trial have shown superiority of the SGLT2-I empagliflozin and the GLP-1RA liraglutide, respectively, on the 3-point MACE outcome (cardiovascular death, non-fatal myocardial infarction or stroke) and cardiovascular, as well as all-cause mortality. While available mechanistic studies largely support a cardioprotective effect of GLP-1, the ability of SGLT2 inhibitor(s) to prevent cardiovascular death was unexpected and deserves future investigation. We herein review the results of completed CVOTs of glucose-lowering medications and suggest a possible treatment algorithm based on cardiac and renal co-morbidities to translate CVOT findings into clinical practice.
Background
Type 2 diabetes is characterized by a heavy atherosclerotic burden, inadequate compensatory remodeling and accelerated plaque progression, despite extensive use of medical therapies [1]. In diabetic patients, macrovascular and microvascular disease are tightly linked: patients with proliferative retinopathy have a 25-fold higher risk for lower limb amputation and a 2-3 fold higher risk for coronary heart disease (CHD) as compared to those without [2]. These features account for increased cardiovascular disease (CVD) morbidity and mortality. CVD may be present at HbA1c values below the diagnostic threshold for diabetes [3], and in patients with overt type 2 diabetes, for every percentage point increase in HbA1c, the relative risk of CVD increases by about 18 % [4]. For these reasons, the latest ESC/EASD Guidelines on diabetes, pre-diabetes, and CVD emphasize the need for a stringent approach in patients with diabetes, underlining the importance of patient-centered care [5]. In summary, in patients with type 2 diabetes, there is an excess risk and burden of CVD, which parallels the worsening of glycemic control.
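As a rough illustration of how this figure scales (assuming, purely for illustration, that the ~18 % per-point estimate compounds multiplicatively, which is not a result stated in [4]): RR(ΔHbA1c) ≈ 1.18^ΔHbA1c, so a two-point increase would imply RR ≈ 1.18² ≈ 1.39.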
Several mechanisms are thought to be responsible for cardiovascular damage in diabetic patients, including hyperglycemia and oxidative stress [6], hypoglycemia [7], hyperinsulinemia and insulin resistance [8,9]. These mechanisms can be countered by the use of different glucose-lowering medications, which are therefore expected to reduce cardiovascular risk in patients with diabetes, in addition to lowering HbA1c below 7.0 %, which is still considered the glycemic target; this leaves wide space to improve the management of patients with diabetes and cardiovascular disease [12].
We herein aim to briefly re-analyse the results of randomized controlled trials (RCTs) reporting the effects of various glucose-lowering medications on cardiovascular outcomes in patients with type 2 diabetes. To this end, we ran a PubMed search for RCTs with the following terms: "type 2 diabetes", "randomized controlled trial", "cardiovascular", and screened for cardiovascular outcome trials (CVOTs) assessing safety or efficacy of glucose-lowering medications.
The macrovascular paradox
While there is much evidence that the risk of CVD increases along with HbA1c, intervention trials aimed at determining whether tight glycemic control is associated with a reduction in CVD have offered controversial results (meta-analyzed in [13]). This is reflected, for example, by the observation that there is no correlation between glycaemic control and coronary vascular function in diabetic patients [14]. We call this the "macrovascular paradox": i.e. the failure to reduce macrovascular complications to the same extent as the microvascular complications, despite comparable reductions in HbA1c. A number of algorithms have been proposed to ensure glycemic control whenever lifestyle measures fail to keep HbA1c at target. The choice of drugs in the treatment strategy is mainly based on efficacy, risk of hypoglycaemia, effect on body weight, other side effects and, ultimately, costs. Effect on CVD risk is an important item that may guide drug selection. Multiple reasons have been claimed to potentially account for the negative findings of CVD reduction trials, including: drug-induced hypoglycemia, weight gain, other side effects, wrong HbA1c target, and short duration of the trials. A likely explanation for the CVD paradox may lie in the multifactorial nature of the CVD risk in diabetes, as highlighted by the results of the STENO-2 study [15]. As a corollary, pharmacologic agents with effects extending beyond the glucose-lowering action may be expected to confer either negative or positive impacts on CVD.
Trials on cardiovascular effects of glucose lowering agents
Several articles have already extensively reviewed preclinical and clinical findings on the cardiovascular effects of glucose-lowering medications [16][17][18][19][20][21]. We herein focus on data coming from early trials and from so-called cardiovascular outcome trials (CVOT) requested by regulatory agencies for marketing authorization approval [22].
Metformin is widely accepted as the first-choice agent for glucose lowering, largely because of the results of the UKPDS sub-study showing a significant 39 % reduction in myocardial infarction (MI) in a limited number of overweight diabetic patients. After 10 years of the UKPDS post-trial monitoring, a significant reduction in nonfatal MI was observed in patients in the intensive arm (initially treated with insulin or sulfonylureas) [23]. However, metformin use has recently been questioned on the basis of both efficacy and outcomes [24]. In the UKPDS, sulfonylurea (SU) use, despite a significant increase in hypoglycemia, was not associated with an increase in MI fatality compared to no SU use [25]. The action in diabetes and vascular disease: preterax and diamicron modified release controlled evaluation (ADVANCE) trial showed that intensive glycemic control based on gliclazide modified release reduced the incidence of combined major macro- and microvascular events, primarily because of a reduction in the incidence of new or worsening nephropathy [26]. In the Outcome Reduction With an Initial Glargine Intervention (ORIGIN) trial, early use of basal insulin to achieve normal fasting plasma glucose levels had no effect on CVD outcomes compared with guideline-suggested glycemic control [27].
In the STOP-NIDDM trial, acarbose was initially suggested to reduce cardiovascular risk in patients with impaired glucose tolerance [28]. Furthermore, a meta-analysis of long-term studies concluded that a similar effect may be present in type 2 diabetes, but the number of patients was very small and the conclusion of the analysis was controversial and not reproducible [29][30][31].
An additional option in treatment algorithms is the use of thiazolidinediones. In the Prospective Pioglitazone Clinical Trial in Macrovascular Events (PROactive), pioglitazone, a PPAR-gamma agonist, when added to the baseline anti-hyperglycemic regimen, had no apparent benefit on a broad, combined, primary end point [32]. However, a pre-specified secondary outcome (MI, stroke, and cardiovascular mortality) was reduced by 16 %, in spite of an increase in heart failure (HF). A different tale is known for rosiglitazone, which has been implicated in an increased risk of MI [33,34], although the most recent analyses have cast doubt on the initial data interpretation [35]. Negative CVD outcomes have been reported with the use of the dual PPAR alpha-gamma agonists tesaglitazar and muraglitazar [36].
The lesson of the cardiovascular outcome trials
In 2008, following the withdrawal of rosiglitazone from the market because of its potential negative impact on CVD outcomes [33,34], the Food and Drug Administration (FDA) issued guidance on the assessment of CVD risk for all new drugs to treat type 2 diabetes [22]. Following this, a large number of patients with type 2 diabetes have been enrolled in CV outcome trials (CVOTs, summarized in Table 1). We now have the results of 6 such trials: 3 assessing the safety of dipeptidyl peptidase (DPP) 4 inhibitors [37][38][39], 2 testing the safety of a glucagon-like peptide-1 receptor agonist (GLP1-RA) [40,41], and 1 of an inhibitor of the sodium-glucose co-transporter (SGLT)-2 [42]. All DPP4-I CVOTs met the primary safety endpoint of non-inferiority versus placebo with respect to 3-point MACE (CVD mortality, non-fatal myocardial infarction, and non-fatal stroke). In the Saxagliptin Assessment of Vascular Outcomes Recorded in Patients with Diabetes Mellitus (SAVOR) trial, a statistically significant increase in hospitalization for HF was observed in the patients randomized to saxagliptin, although no increase in CVD mortality occurred in these individuals. This was not observed in the CVOTs assessing sitagliptin and alogliptin [38,39]. The issue of hospitalization for heart failure has subsequently been analyzed in several observational trials [43] and meta-analyses [44], most concluding for a neutral effect of DPP4-I on heart failure risk.
In the evaluation of lixisenatide in acute coronary syndromes (ELIXA) trial, the use of lixisenatide in diabetic patients with a recent acute coronary syndrome showed neutrality on CVD outcomes with no increase in the risk of heart failure hospitalization [40].
Both DPP4-I and GLP-1RA had been integrated in treatment algorithms before the results of these trials were published/disclosed, mainly because of their favorable efficacy-safety profile. With the question remaining open with respect to the increased risk of hospitalization for heart failure reported in the SAVOR trial (not confirmed in any other study), overall evidence is available for safe use of DPP4-I across populations with different degrees of CVD risk, including those with recent acute coronary syndrome [38]. Though these results have been welcomed as reassuring, the diabetes community continues asking whether these treatments may, under different circumstances, confer some degree of CVD protection.
EMPA-REG Outcome trial and LEADER trial
Results of the two latest CVOTs, namely the EMPA-REG Outcome trial [42] and the Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results (LEADER) trial [41], have recently stirred much enthusiasm. In these trials, as in SAVOR, TECOS and ELIXA, diabetic patients at very high CVD risk were enrolled to determine the CVD safety of the SGLT2 inhibitor empagliflozin and the GLP-1RA liraglutide.
In the EMPA-REG Outcome trial, the primary outcome was a composite of death from CV causes, nonfatal myocardial infarction, and nonfatal stroke (3-point MACE). Two daily doses of the drug were tested: 10 and 25 mg. As compared to placebo, empagliflozin (pooled analyses of 10 and 25 mg) showed non-inferiority for the 3-point MACE and for the 4-point MACE (which included hospitalization for unstable angina), and superiority for the 3-point but not for the 4-point MACE. With respect to secondary endpoints, patients randomized to empagliflozin had a significantly reduced risk of hospitalization for HF by 35 %, a reduced risk of CV death by 38 %, and a reduced risk of all-cause mortality by 32 %. The mechanisms responsible for these results are still unclear, being possibly related to pleiotropic effects on risk factors, to hemodynamic effects, and to possible direct effects on the heart and vasculature [45,46]. Furthermore, empagliflozin, compared to placebo, significantly slowed the progression of kidney disease and the loss of glomerular filtration over time in the high-risk patients of the EMPA-REG Outcome trial [47].
In the LEADER trial, the primary endpoint was the same as in the EMPA-REG Outcome trial [41]. In the primary data set and in per-protocol analyses, compared to placebo, liraglutide significantly reduced the occurrence of the 3-point MACE by 13 %, cardiovascular death by 22 %, and all-cause mortality by 15 %, without significant effects on non-fatal MI, non-fatal stroke and hospitalization for heart failure [41]. These findings appear to some extent similar to what was observed in the EMPA-REG Outcome trial, and occurred in parallel with mild reductions in body weight and systolic blood pressure [41].
In EMPA-REG, the early and unusual divergence of the mortality curves deserves attention. Unlike in LEADER, such a rapid effect suggests that the treatment has little impact on the atherosclerotic process, pointing to an effect unlikely to be mediated through the modulation of glucose or lipid metabolism. Similarly, the modest reduction in body weight is unlikely to account for the reported effect. More interesting is the effect on blood pressure, although prior trials using blood pressure-lowering drugs have shown a positive effect on CVD outcomes occurring at a later time than in EMPA-REG [48]. A minor impact on the atherosclerotic process is also supported by the lack of any significant effect on nonfatal MI and stroke. These results suggest that empagliflozin does not necessarily protect from the CV event itself, but rather reduces the mortality linked to the event.
Of note, the early separation of the mortality curves is paralleled by an even earlier divergence of the curves for hospitalization for heart failure, suggesting that at least part of the beneficial effect of empagliflozin is exerted through volume depletion: in keeping with this, a 4 % hematocrit increase was recorded in the empagliflozin-treated patients [49]. Ferrannini et al. have speculated on the mechanisms at work, suggesting that a switch to fatty acid utilization, concurrent with better oxygen delivery to the tissues, cooperates with small changes in body weight and blood pressure to achieve cardioprotection by SGLT2 inhibition [50]. In the LEADER trial, the survival curves for 3-point MACE and mortality separate later (from about 12-18 months after randomization), and there was no effect on heart failure. This observation suggests that liraglutide, differently from empagliflozin, may reduce the occurrence of cardiovascular events mainly by preventing the progression of atherosclerosis, possibly thanks to a better control of risk factors and despite a mild rise in heart rate [41]. This view is supported by a wealth of pre-clinical and pathophysiologic studies (reviewed in [17]).
While we wait for additional studies able to clarify which mechanism(s) can explain the improvement in CVD outcomes by empagliflozin and liraglutide, the clinical implications of these results need to be critically put in the perspective of current guidelines, treatment algorithms, and health care economy. With respect to this, a number of questions need to be addressed:
1. Can the results obtained with empagliflozin and liraglutide be extended to other drugs of the same class? By now, of the two GLP-1RAs evaluated in CVOTs, only liraglutide, but not lixisenatide, achieved cardioprotection. Before concluding for a drug-specific effect, differences in the patient populations and study design between the ELIXA and LEADER trials should be taken into account (Table 1), whereas results of ongoing trials with other GLP-1RAs will help establish whether this is a class effect.
2. As the proportion of patients with established CVD was high in EMPA-REG (75.6 % had CAD) and LEADER (81.3 % had CVD), can the results obtained be translated to other patient categories, i.e. patients without established CVD?
3. Is there a specific sub-population that may derive a specific benefit from treatment with SGLT2-I and GLP-1RA?
Answering these questions will require further analysis of the EMPA-REG and LEADER databases as well as more specific clinical and mechanistic studies. So far, in both the EMPA-REG Outcome trial and the LEADER trial, there is a signal indicating that patients with renal impairment are those who benefited most from treatment with empagliflozin and liraglutide, respectively, as both drugs appear to reduce kidney-related endpoints [41,47]. Yet, initiating therapy with empagliflozin is still not recommended in patients with eGFR <60 ml/min/1.73 m2. While this limitation may change in the near future, liraglutide can already be used in patients with stage III CKD, being not indicated in patients with eGFR <30 ml/min/1.73 m2.
Results of the CVOTs with canagliflozin (CANVAS) and dapagliflozin (DECLARE) will shed further light on the results obtained with empagliflozin, possibly showing whether the beneficial effects shown in the EMPA-REG Outcome trial can be extended to the SGLT2-inhibitor class and to individuals with a different CV risk profile [21].
Yet, a 22 and 38 % risk reduction of CVD mortality observed with liraglutide and empagliflozin, respectively, is too strong to be overlooked, and will likely require a favorable revision of the positioning of these drugs in the current treatment algorithm of type 2 diabetes. In order to do so, some features of the EMPA-REG and LEADER trials are worth consideration. First of all, the results of these trials pertain to a well-defined diabetic population, i.e. patients with established prior CV events. As such, they cannot be generalized to the wide spectrum of clinical diabetes; in this context, particularly striking have been the superiority for the primary outcome in patients aged ≥65 years, and for death from CV causes in the group with body mass index <30 kg/m2.
Comorbidities-driven treatment
In patients with type 2 diabetes, especially in the elderly, the presence of CVD is central, with emphasis on concomitant heart failure and chronic kidney disease (CKD).
In obese patients, treatment should aim at improving glycemic control and reducing body weight. Although evidence for the CV protection of metformin is rather limited, the drug has become quite familiar after more than 50 years of use, has advantageous cost-effectiveness and a modest lowering effect on body weight, and therefore remains the preferred background treatment.
If the patient has asymptomatic CVD or prior MACE, an eGFR ≥60 ml/min/1.73 m2, and heart failure NYHA class I-III, metformin plus empagliflozin/liraglutide should be considered, as these are the typical patients included in the EMPA-REG Outcome and LEADER trials, with liraglutide currently being usable in stage III CKD. For patients with no sign of heart failure, either pioglitazone or a DPP4-I may represent a therapeutic option. For patients with an eGFR of 30-60 ml/min/1.73 m2, liraglutide should be considered the preferred choice. Though 26 % of patients in the EMPA-REG Outcome trial fell into this eGFR category and they benefited most in terms of cardiovascular protection, initiation of empagliflozin is currently discouraged in stage III CKD, and a lower glycemic effect is expected. DPP4-I can be used even with an eGFR <30 ml/min/1.73 m2 because of their overall safety and efficacy, granted that dose adjustment is made for the compounds requiring it.
If the patient has evidence of CKD without heart failure, pioglitazone can be a reasonable option, as shown in a subanalysis of the PROactive trial [51]. Insulin and SU, if needed, should be used with caution because of the potential risk of hypoglycaemia; for the latter, gliclazide modified release may be a preferred choice because of the available data obtained in ADVANCE [26]. The ORIGIN trial has clearly demonstrated the cardiovascular safety of insulin glargine, and the risk of hypoglycemia with basal insulin is lower than during a basal-bolus regimen. However, as glargine and liraglutide show similar glycemic effects [52,53], the latter should be preferred for the lower hypoglycemia risk [54] and in view of the LEADER trial [41], except for eGFR <30 ml/min/1.73 m2. In general terms, use of insulin has been shown to be effective even in the long run in patients with acute coronary syndrome in the DIGAMI 1 trial [55,56], though superiority of insulin over standard care was not confirmed in DIGAMI 2 [57].
In patients without CVD, a SGLT2-I or a GLP-1RA should be considered if body weight loss is required, whereas a DPP4-I can be considered when weight neutrality is sought. In this context, incretin-based therapy yields a greater HbA1c reduction if patients have obesity/metabolic syndrome, with a greater effect for GLP-1RA as compared to DPP4-I [58]. Pioglitazone could be considered if obesity is associated with evidence of marked insulin resistance, as supported by the co-existence of dyslipidemia, inflammatory markers, and subclinical CVD [59,60].
In non-obese or mildly obese patients, the prevention of body weight gain may represent an important target along with glycemic control. In this case, a DPP4-I can represent a good choice due to its weight neutrality and its overall and CVD safety. In the case of leaner patients, additional treatment options may be gliclazide modified release and insulin. The former has been shown to significantly decrease new or worsening nephropathy with relatively little risk of severe hypoglycemia [26], while the CV safety of basal insulin in the early stage of diabetes is supported by the results of the ORIGIN study [27]. It must be mentioned that in many patients, triple (if not quadruple) oral or oral plus injectable treatment becomes necessary during the course of the disease: this implies subsequent decisions about drug combinations. Nowadays choices are multiple, and the many aspects of the clinical phenotype should always be carefully considered, as suggested by the ADA/EASD guidelines. This makes the proposition of a more stringent algorithm difficult (Fig. 1).
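Purely as a schematic illustration of the comorbidity-driven logic sketched above (a simplification of the algorithm in Fig. 1, for illustration only and not clinical guidance), the decision flow could be expressed as follows; the drug options and thresholds follow the text, while the function structure itself is an assumption.

```python
# Schematic sketch of the comorbidity-driven selection logic described above.
# Illustration only, not clinical guidance; options follow the text, the rest is assumed.
def suggest_add_on(cvd_or_prior_mace: bool, egfr: float,
                   heart_failure_nyha_1_3: bool, obese: bool) -> str:
    """Suggest an add-on to background metformin (where tolerated)."""
    if cvd_or_prior_mace:
        if egfr >= 60:
            if heart_failure_nyha_1_3:
                return "empagliflozin or liraglutide"
            return "empagliflozin/liraglutide; pioglitazone or DPP4-I also an option"
        if 30 <= egfr < 60:
            # initiation of empagliflozin discouraged in stage III CKD at the time of writing
            return "liraglutide preferred; DPP4-I as alternative"
        return "DPP4-I (dose-adjusted for eGFR < 30 ml/min/1.73 m2)"
    # no established CVD
    if obese:
        return "SGLT2-I or GLP-1RA (weight loss sought); pioglitazone if marked insulin resistance"
    return "DPP4-I (weight neutrality); gliclazide MR or basal insulin in leaner patients"

# Example: prior MACE, eGFR 45 ml/min/1.73 m2, NYHA class II heart failure
print(suggest_add_on(True, 45.0, True, False))
```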
Limitations
The view presented in this article is largely based on results from RCTs. It should be noted that, although such mega-trials provide the highest level of evidence, they have intrinsic characteristics that limit their generalizability and transferability to clinical practice. We have already mentioned that the study population in most CVOTs is very different from the entire population of patients who are entitled to receive the respective medication. In fact, RCTs differ from clinical practice in several instances. For this reason, there is great interest in real world evidence (RWE) on glucose-lowering medications and how it fits with results from RCTs (Fig. 2). Data from RWE studies can complement RCTs, but they provide a lower level of evidence and can yield quite different results. This has been clearly shown for the risk of hospitalization for heart failure in DPP4-I-treated patients, where meta-analyses of RCTs [44] and observational studies [43] can reach different conclusions. Nonetheless, we endorse the importance of RWE in the evaluation of glucose-lowering medications, especially to explore aspects that cannot be extracted from RCTs. For instance, placebo-controlled RCTs may be poorly informative for clinical practice because they do not provide a comparative assessment of different glucose-lowering medications against cardiovascular outcomes, whereas RWE data can evaluate complex regimens against cardiovascular risk [61].
In addition, we would like to underline that the pharmaco-centric view presented here ignores the importance of a healthy lifestyle and a comprehensive approach to preventing cardiovascular disease, as originally shown in the STENO-2 study [15] and confirmed by the analysis of treatment-dependent and -independent factors associated with cardiovascular morbidity [62], as well as by the effects of a multidisciplinary risk assessment and management program [63]. Although the Look-AHEAD study found no cardiovascular benefit of an intensive lifestyle intervention that promoted weight loss [64], there are still reasons to recommend lifestyle changes in type 2 diabetes mellitus and obesity since the early disease stages [65]. Finally, bariatric surgery, another non-pharmacologic approach, may reduce cardiovascular risk in obese people with type 2 diabetes, as shown by recent meta-analyses of observational studies [66,67].
Conclusions
The most recent CVOTs have expanded our knowledge on the potential effects of glucose-lowering agents on CVD risk. Though most of them have proven CVD safety, the EMPA-REG Outcome trial and LEADER trial have provided evidence for a significant improvement of CVD outcomes. While we do not yet have firm explanations for the mechanisms accounting for the observed beneficial effect, or know whether any specific population may benefit more (for instance patients with heart failure or CKD), it sounds legitimate to try putting these observations in the perspective of current treatment algorithms. We believe this exercise is needed to avoid inappropriate over-use of SGLT2-I and GLP-1RA before all needed information is gathered, while ensuring they are used in keeping with the available evidence. Central to our proposal is the presence of CVD. The patient's phenotype, degree of renal function, and presence of heart failure allow for a further breakdown of the patient population for more appropriate selection of pharmacologic treatment. The results of the EMPA-REG and LEADER trials mostly support the use of empagliflozin or liraglutide in patients who have established CVD or a prior MACE, with or without stage I-III CKD, but we still do not know whether similar positive effects should be extended to the other drugs of the same classes.
Fig. 2. The interplay between data derived from CVOTs and real world evidence for assessing the cardiovascular effects of glucose-lowering agents. RSG rosiglitazone, CVOTs cardiovascular outcome trials, RWE real world evidence, RR risk ratio.
The prevention of CVD complications and the safe treatment of patients who have already suffered a CVD event, especially elderly patients, remain a major task in treating type 2 diabetes. CVOTs and RWE data represent the basis for evidence-based treatment, though we must acknowledge that this is a moving target, as results of new and ongoing trials will be released, requiring a constant revision of treatment algorithms.
Abbreviations ADA: American Diabetes Association; ADVANCE: action in diabetes and vascular disease: preterax and diamicron modified release controlled evaluation; CAD: coronary artery disease; CHD: coronary heart disease; CKD: chronic kidney disease; CVD: cardiovascular disease; CVOT: cardiovascular outcome trial; DIGAMI: diabetes and insulin-glucose infusion in acute myocardial infarction; DPP4: dipeptidyl peptidase 4; EASD: European Association for the Study of Diabetes; eGFR: estimated glomerular filtration rate; ELIXA: evaluation of lixisenatide in acute coronary syndromes; EMPA-REG: empagliflozin cardiovascular outcome event trial in type 2 diabetes mellitus patients; ESC: European Society of Cardiology; GLP-1: glucagon-like peptide 1; HF: heart failure; LEADER: liraglutide effect and action in diabetes: evaluation of cardiovascular outcome results; MACE: major adverse cardiovascular events; MI: myocardial infarction; NYHA: New York Heart Association; ORIGIN: outcome reduction with an initial glargine intervention; PPAR: peroxisome proliferator activating receptor; PROactive: prospective pioglitazone clinical trial in macrovascular events; RCT: randomized clinical trial; RWE: real world evidence; SAVOR: the saxagliptin assessment of vascular outcomes recorded in patients with diabetes mellitus; SGLT2: sodium-glucose cotransporter 2; STOP-NIDDM: stop non insulin dependent diabetes mellitus; SU: sulphonylurea; TECOS: trial evaluating cardiovascular outcomes with sitagliptin; UKPDS: United Kingdom Prospective Diabetes Study.
Multifractal Model for Transient Phenomena Analysis in Laser Produced Plasmas
The pulsed laser deposition process is a complex one, encompassing a wide range of physical phenomena that act at different time scales, ranging from laser absorption to melting, plasma formation, and thin film deposition. In the past 15 years, there has been an exponential increase in theoretical models for different aspects of laser ablation phenomena, especially laser-produced plasmas. Among these, new types of models based on multifractal physics have been proposed. The models aim to describe the various dynamics reported for laser-produced plasmas in both the multifractal-type Schrodinger representation and the multifractal-type hydrodynamic representation. The two approaches are complementary, as the Schrodinger representation uses operational procedures (invariance groups, variational principles, harmonic maps, etc.) while the hydrodynamic representation translates the dynamics of the laser-produced plasma into fluid motion of a multifractal type. The aim of this review is to report on the wide range of dynamics that can be discussed within the framework of a multifractal approach to plasma dynamics.
Introduction
The development of theoretical models in a multifractal paradigm for the analysis of the intricate dynamics of complex physical systems has in recent years yielded important results related to multiparticle flow in fluids [1,2] and to the dynamics of transient plasmas generated by laser ablation [3][4][5][6][7][8]. Most deterministic approaches to laser-produced plasma (LPP) modeling do not necessarily capture the new and peculiar behaviors reported for LPPs, such as plasma oscillation, multi-structuring, and selective ablation [3][4][5][6][7][8]. In the multifractal paradigm, the analysis developed in [1][2][3][4][5][6] is based on projecting the dynamics of the LPP into a multifractal space defined by a selected number of variables that reflect the experimental reality. In such a complex representation of laser ablation plasma dynamics, the nonlinear periodicity and structuring appear automatically as a quality of the dynamics induced by the fractality of the system. The development of nonlinear analysis and the discovery of a series of laws that govern chaos offer an alternative to the reductionist analysis method on which the entirety of plasma physics was based, albeit with limited applicability. Moreover, in a multifractal paradigm, the unpredictability which sometimes characterizes the pulsed laser deposition process is not a property of laser ablation plasmas but a natural consequence of their simplification through linear analysis. It follows that nonlinearity and chaos present common behaviors, highlighting the universality of the mathematical laws that govern transient plasma dynamics.
For transient plasmas generated by laser ablation, properties such as nonlinearity or chaoticity present a dual applicability, being both structural and functional. The interactions between the plasma structural components (electrons, ions, clusters, molecules, atoms, and photons) govern micro-macro, local-global, individual-group, etc., reciprocal conditioning. In such a paradigm, the global nature of the laws describing the dynamics of transient plasmas must be implicitly or explicitly reflected by the mathematical procedures of the multifractal model. The approach is based on the concept of "holographic implementation" in the description of plasma dynamics. Generally, the existing theoretical models that are used to describe the ablation plasma dynamics are based on a differentiable-variable assumption. The impressive results of the differentiable models must be understood sequentially, regarding when and where the integrability and differentiability limits are valid. Differentiable mathematical (classical) procedures limit our understanding of some of the more complex physical phenomena, such as nonlinear scenarios for laser-produced plasma expansion, chaotic movement of the ablated particles in extreme conditions, or self-structuring of the ablated cloud in various expansion regimes. To better describe the LPP dynamics and still remain faithful to some of the classical approaches based on differential and integral mathematics, we must introduce the scale resolution in an explicit manner. Further implementation of the model implies that the scale resolution can be embedded in the expressions for the physical variables that describe the LPP, and that it implicitly exists in the fundamental equations governing the dynamics. In particular, it means that all physical variables become dependent on the spatio-temporal coordinates and the scale resolution. This means that, as an alternative to describing a physical variable by a non-differentiable/fractal mathematical function, we can implement different approximations of the respective mathematical function obtained by averaging at various scale resolutions. Thus, in the multifractal paradigm, the physical variables describing the LPP dynamics act as the limit of a family of functions whose members are non-differentiable for a null scale resolution and differentiable for a non-null scale resolution.
This approach to the investigation of LPP dynamics indicates the construction of new geometric structures [9,10], with physical theories tailored for these structures. For these new theories, movement laws that are invariant to spatio-temporal transformation can also be integrated into scale laws that are invariant to scale resolution transformations. Our group has proposed that such geometric structures can be generated by the fractal-multifractal theory of motion, either in the form of scale relativity theory (SRT) in the fractal dimension D F = 2 [11] or in the form of SRT in an arbitrary constant fractal dimension [1][2][3]. In both cases the "holographic implementation" of the particular dynamics of an LPP implies the projection of dynamics with restrictions from a Euclidian space onto a multifractal space with dynamics free from restrictions. Therefore, the movement of the ablated particles on continuous and non-differentiable curves in a multifractal space can be investigated [11]. As a direct consequence, the self-similarity property (where the part reflects the whole and vice versa) of the movement curves which define the nonlinear behavior of the laser-produced plasma dynamics allows the implementation of the holographic type.
The aim of the review is to report on the wide range of dynamics that can be discussed within the framework of a multifractal view of plasma dynamics.
Transient Plasma Dynamics in a Multifractal Paradigm
In the following, a fractal analysis will be used for the multi-structuring behavior of the ablation plasma, based on the history of the composing elements. The history of the ablated particles is given by the nature of the ablation mechanism involved. Fast particles defined by high kinetic energy are ejected via a Coulomb (electrostatic) mechanism, and longer pulse widths induce a thermal mechanism, leading to the ejection of slower atoms, molecules, or cluster structures. Our goal is to analyze the dynamics of the ejected plasma entities after the mechanism has already manifested itself, with the goal of correlating the nature of the mechanism with the fractalization of the geodesics defined by the particles. Let us consider the solutions for the fractal hydrodynamic equation system in the following form, given in [5]. In this context, the following normalization is used: The differentiable velocity, the non-differentiable velocity, and the density of states take the forms: In Equations (1)-(4), x is the fractal spatial coordinate, t is the non-fractal time coordinate with the role of the affine parameter of the motion curve, V D is the differentiable velocity, V F is the non-differentiable velocity, ρ is the state density, λ is the multifractal degree, α is given by the initial conditions [5] driven by the normalization of the position Gaussian, and V 0 , V D0 , V F0 , and ρ 0 are the normalization values of each parameter.
From (2) and (4), the current density state at differentiable scale resolution takes the form: whereas the current density state at an arbitrary fractal scale resolution is given by: Based on the classical understanding of the laser ablation process and the empirical data, the expansion of a laser-produced plasma can be characterized by three important sequences (chronologically): the Coulomb explosion, the thermal ejection (explosive boiling), and the cluster/nanoparticle formation. Each of the sequences defines a different ejection mechanism, subsequently leading to the organizing of the ablation of three independent plasma structures [12,13], based on their thermal and kinetic properties. Thus, in the multifractal model, the dynamics of a fast plasma structure related to the Coulomb explosion mechanism will be defined by relations (2), (4) and (5), while the dynamics of the slower structures related to thermal ejection mechanisms are described by relations (3), (4) and (6). This association is possible because the non-differentiable characteristics of a plasma (particularly a transient plasma) are induced through the collision process between the plasma particles or ejected particles and the background gas. In Figure 1, 3D plots of current density states for different degrees of fractalization are shown. The current density state is here identified with the partial particle density (either ionized particles or excited species). The true nature of the current density state in the LPP system can only be revealed by calibration of the model against empirical data. This process was presented in [5][6][7][8]. When the fractalization degree is increased, a variation in the slope defining the velocity of the differentiable (measurable) part of the current can be seen. This is interpreted as a reflection of the thermal velocity increase which characterizes particles ejected by the thermal mechanism. In contrast, the particle current induced by Coulomb mechanisms at a fractal resolution scale is defined by two components. This dual structure is generated, as confirmed experimentally, by a double layer that exists at the front of the plasma plume generated in the initial stages of ablation. The fractality degree makes only a small contribution to the spatio-temporal evolution of the fast component, as the number of collisions is reduced in the free-expansion regime (low pressures) and becomes dominant for expansion in high-pressure atmospheres. This means that the fractality of the system will always define the thermal movement and energy within the plasma volume. For investigations into the dynamics of a third, slower substructure that could potentially contain heavy particles (containing mainly clusters, nanoparticles, or molecules), we hypothesize that the specific momentum is null at the global scale resolution. This latter structure is formed by particles that are directly ejected from the target as the target's surface breaks down, or particles resulting from nucleation processes occurring within the plasma volume. Thus, at the differentiable scale resolution their velocity can be considered to be equal and of opposite sign to the velocity at a fractal scale resolution (For details see [5][6][7][8]).
In these conditions the conservation law of the state density: takes the form of a fractal diffusion equation: The solution of this equation has the following form [4][5][6]: where a and b are integration constants. In this context, the velocity can be written as: while the current density state is defined as follows: Calibrating the cluster-rich structure according to the dynamics of the other two structures, we can admit a normalization generated by imposing the restrictions a ≡ 1 and b ≡ 0. This results in: In Figure 2, the 3D representation of current density for different values of the fractalization degree (depicted through θ) is plotted. The fractalization degree values were chosen to reflect the number of collisions for each plasma structure, subsequently covering the full range of ablation mechanisms reported experimentally. The reasoning behind the choice for the range of fractality degrees is given in our previous work [4][5][6][7][8], where we show that the range remains the same for a wide range of materials.
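The explicit multifractal expressions referenced above (Eqs. (7)-(12)) are not reproduced in this text, so, purely as a stand-in, the sketch below integrates a generic 1-D diffusion equation of the same broad type, ∂ρ/∂t = σ ∂²ρ/∂x², with an explicit finite-difference scheme; the coefficient value, grid, and Gaussian initial profile are assumptions chosen only to illustrate how a state density spreads under such an equation.

```python
# Generic 1-D diffusion sketch standing in for the fractal diffusion equation above.
# The coefficient sigma (playing the role of the multifractal diffusion-like
# coefficient), the grid, and the initial Gaussian profile are all assumptions.
import numpy as np

sigma = 0.1                      # assumed diffusion-like coefficient
length, nx, nt = 10.0, 201, 2000
dx = length / (nx - 1)
dt = 0.4 * dx**2 / sigma         # respects the explicit-scheme stability limit (<= 0.5)
x = np.linspace(0.0, length, nx)

rho = np.exp(-((x - length / 2) ** 2) / 0.5)   # assumed initial state density

for _ in range(nt):
    # forward-Euler / central-difference update of d(rho)/dt = sigma * d2(rho)/dx2
    lap = (np.roll(rho, -1) - 2.0 * rho + np.roll(rho, 1)) / dx**2
    lap[0] = lap[-1] = 0.0       # endpoints held fixed (crude boundary treatment)
    rho += dt * sigma * lap

# Total density is approximately conserved while the packet stays off the boundaries
print("integrated density:", rho.sum() * dx)
```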
In Figure 2, the space-time evolution of the global particle current density can be seen. The contour plot representation associated with the 3D representation highlights the shift of the current maxima during expansion. This result fits the data seen experimentally via ICCD fast camera photography well, as reported in [8,13]. The shift in the current maxima associated with structures generated by different ablation mechanisms, defines individual slopes which describe the expansion velocity of each structure. The structures driven by the electrostatic mechanism are defined by a steep slope, and thus a high expansion velocity, which also corresponds to a low degree of fractalization. The interactions of these particles are mostly concentrated in the first moments of the expansion, when the plasma density is higher. For the thermal mechanism case, the analysis performed using the multifractal model shows a different slope. These structures also have a reduced expansion velocity, reflected in a longer lifetime and a bigger spatial expansion. Finally, the nanoparticles-/cluster-dominated structure has a high fractalization degree. The maximum of the particle current remains constant for a long expansion time over a small distance. This characteristic of a complex laser-produced plasma is known and was also reported by our group in [5][6][7][8].
Let us further perform some calculations using the initial conditions of our reported data from [7,8]. We can derive the expansion velocities of each plasma structure. For the first structure, we calculated a velocity of 18.7 km/s, for the second structure 2.5 km/s, and for the last structure 710 m/s. These results are in line with the empirical values reported in the literature [5][6][7][8][9][12][13][14][15]. Thus, we conclude that the fractal analysis, when implemented correctly, is a robust technique that can cover a wide range of plasmas regardless of the nature of the targets.
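As a minimal illustration of how such velocities can be read off, the sketch below fits a line to the space-time positions of a structure's current-density maximum (the shift discussed for Figure 2); the time gates and maxima positions are made-up placeholders, not the data behind the values quoted above.

```python
# Sketch: expansion velocity from the space-time shift of a current-density maximum.
# The delays and positions below are made-up placeholders, not measured data.
import numpy as np

t = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-6         # s, assumed gate delays
x_max = np.array([3.7, 7.5, 11.2, 15.0, 18.7]) * 1e-3   # m, assumed maxima positions

v, x0 = np.polyfit(t, x_max, 1)   # linear fit: x_max(t) = v * t + x0
print(f"expansion velocity ~ {v / 1e3:.1f} km/s")
```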
Insight into Plasma Plume Energy Distribution
Valuable information related to the dynamics of an LPP can be extracted from the multifractal approach by translating the dynamics defined by the ablated particles under the real experimental conditions into the multifractal theory of motion. This is explicitly given in the form of various hydrodynamic regimes that characterize different resolution scales (the multifractal hydrodynamic model [5][6][7][8]).
Thus, let us redefine new non-dimensional variables and new non-dimensional parameters where τ 0 is the specific time, and σ = λ(dt) The normalized fields of the velocities then become From (17) and (19) the non-dimensional differentiable current is From (18) and (19) the non-dimensional non-differentiable current density is Considering [2][3][4] and (19), the non-dimensional specific multifractal potential is In the above equations, f (g) is the singularity spectrum of order g, and g = g(D F ), with D F as the fractal dimension.
Let us further calibrate the multifractal model with respect to the experimental data presented in [5][6][7][8] and [16][17][18]. According to [7], (17) can be identified with the expansion velocity of the Coulomb plasma structure (V C ), while (18) is associated with the thermal plasma structure (−V T ), and (22) can be identified with the electron temperature. The identification of a non-dimensional time with the inverse of the non-dimensional temperature, i.e., η ≡ T −1 , is implied by the time-temperature correlation of special statistics models [17] and by multifractalization through stochastization. According to [5][6][7][8], the specific multifractal potential can be defined up to a non-null arbitrary constant. Under these conditions, the relations (17), (18), and (22) become: In Figure 3, the dependences given by the set of Equations (23)-(25) are plotted. By choosing adequate values of the fractal constants from the non-dimensionalization of the variables and parameters, and further identifying T with the melting point of the target material, the multifractal model can reproduce the empirical data as presented in [5][6][7][8][18] and can form a valuable tool for exploring dependences beyond the spatial and temporal evolution, as used in [4]. The nature of the multifractal model offers the flexibility to transition from the usual spatio-temporal scale to dependences on physical parameters related to the irradiated target. The differentiable to non-differentiable dynamics transition accepts a particular statistic in the case of the relationship between the functionalization of the target properties and the behavior of the ejected particles. This means that the intrinsic properties of the target can be expressed implicitly through the multifractality degree σ = λ(dt) Furthermore, the dynamics at the various resolution scales are concealed in the critical points of the velocity functions (see [5][6][7][8]) and are given by the following restrictions, which must be respected simultaneously: These restrictions imply a uniform-type movement at the differentiable resolution scale; at the non-differentiable resolution scale, they result in the following: With the constraints (29) and (15), the differentiable velocity takes the form: Admitting the functionality presented above and the relation: where T is the non-dimensional temperature and M is the non-dimensional atomic mass, (30) becomes: Furthermore, with the restriction (29) the non-differentiable velocity (18) takes the following form: Admitting the following relation: (33) becomes: In Figure 4a, the theoretical dependence from (32) is plotted. By choosing adequate non-dimensional variables and adopting a suitable interpretation of V D , Ω, and M, this dependence can describe the general empirical behavior reported in [5][6][7][8][12][13][14][15][16]. In Figure 4b, relation (35) is represented. In the context of simulating LPP dynamics, by choosing the non-dimensionalization variables and by means of a correct interpretation of the T/Ω ratio, this dependence can be used to simulate key plasma parameters such as particle expansion velocity or plasma temperature and their dependence on the history (seen here as the physical properties of the ablated target). Let us also comment on the fact that the multifractal theoretical model, with suitable calibration with respect to the empirical data from various investigation techniques, can offer general laws regarding the dynamics of laser-produced plasmas.
However, taking into account existing experimental data on single-element pure metals or complex targets, we can also extend our model to cover the mass dependence of the velocity [19] or the relationship between evaporation temperature and overall ablation density [19,20]. Some successful examples of empirical data fitted by the fractal model can be seen in [5][6][7][8][12][13][14][15][16][18][19][20][21][22][23][24][25], for multi-element targets. Other extensive empirical comparisons between the properties of the plasma and those of the irradiated target, complementary to the work presented here, can be seen in [21]. This allows us to claim that the model has a universal appeal that is not restricted by the experimental tools used to analyze the data.
Multifractal Model for Multi-Element Laser-Produced Plasma
Let us now investigate some particular dynamics of charged particles with different properties ejected from the same target under similar irradiation conditions, by postulating that the plasma particles define multifractal curves during their expansion. The mathematical procedure entails the use of a set of multifractal hydrodynamics equations, which can be seen in [5][6][7][8]. From these, we can consider the charged particle current density as follows: where Σ is the surface which the current crosses, and the other parameters have the meanings given in [1][2][3][5][6][7][8].
Under the above-mentioned conditions, (36) remains invariant to the scale resolutions transformation group as well as to the coordinates transformation group. The two groups are isomorphic, and thus various isometries result, such as compactizations of the scale resolutions, of the spatial and temporal coordinates, of the spatio-temporal coordinates and the scale resolutions, etc. We can execute a specific compactization between the temporal coordinate and the scale resolution, given by (37), where ε corresponds to the specific energy of the ablation plasma entities. Accepting such an isometry, it follows that by means of the substitutions (38), (36) takes a simpler non-dimensional form (39). In (38) and (39), we defined a series of normalized variables, where I corresponds to the state intensity, ξ to the spatial coordinate, µ to the multifractalization degree, and u to the specific energy of the ablation plasma. Furthermore, if the specific energy ε and the reference energy ε_0 are expressed through the specific temperatures T and T_0 and the specific masses M and M_0, (36) can be rewritten in a correspondingly non-dimensional form.

Some of the fundamental behavior observed in laser-produced plasmas can be assimilated with that of a non-differentiable medium. The fractality degree of the medium is reflected in collisional processes such as excitation, ionization, recombination, etc. (for other details see [4-8]). With this assumption, (36) defines the normalized state intensity and can also be a measure of the spectral emission of each plasma component, a situation for which the spatial, mass, or angular distribution is specified by our mathematical model and is well correlated with the data reported in the literature [5-8,16,18].
Some examples are given in Figure 5a,b, where it can be observed that ejected particles defined by fractality degrees < 1 are characterized by narrow distributions centered around small values of ξ (below 5). Particles defined by fractality degrees > 1 have a wider distribution centered around values about one order of magnitude higher than those of the low fractality degrees (ξ = 8, 10, 15, and 18). These data allow the development of a unique image of laser-produced plasmas: the core of the plasma contains mainly low-fractality entities with low plasma temperatures, while the front and outer edges of the plume contain highly energetic particles described by a higher fractality degree. Finally, we compared the simulated results with the classical view of the LPP. To this end, we performed a simulation of the plasma emission distribution as a function of particle mass, for a plasma with an average µ factor of 5 at an arbitrary distance (ξ = 5.5). We observed that plasma entities with a lower mass were defined by higher relative emission at a specific constant temperature. With an increase in the plasma temperature, the emission of heavier elements also increased. These results correlate well with experimental studies performed and reported in [4-8], where we assimilated the plasma temperature with the overall inner fractal energy of the plasma. The ramifications of these results can be immediately applied to industrial processes. The implementation of the model is achievable, since it reveals that for a relatively low plasma temperature, the kinetic and density distributions are strongly heterogeneous, and therefore it can prioritize particles with a high fractalization degree. This corresponds to a possible deviation from stoichiometry in the case of PLD, with the lighter elements being scattered towards the edges of the plume, while the heavier ones form the core of the plasma.
A Multifractal Theoretical Approach for Understanding the Plasma Dynamics during PLD of Complex Materials
In the framework imposed by the pulsed laser deposition of multicomponent materials with a wide range of properties in a low ambient atmosphere, the individual dynamics of the ejected particles are significantly complicated. A wide range of diagnostic techniques and theoretical models based on multiscattering effects have been employed to comprehend the effect of the small-scale interaction between the elements of the plasma and the global deposition parameters. Our model could offer an alternative to other approaches when investigating such complex dynamics. Details of the approach are presented in [5]: at a differentiable resolution scale, the dynamics of laser-produced plasmas are controlled by the specific fractal force (43), where u_F is the fractal component of the particle velocity, D_F is the fractal dimension in the Kolmogorov or Hausdorff-Besicovitch sense [11], and D^kl is a tensor of fractal type associated with a fractal to non-fractal transition. The existence of a specific fractal force manifested in an explicit manner could explain the reasoning behind the structuring of the flowing plasma plume in each component by introducing a particular velocity field. To explore this, we further accept the functionality of the differential system of Equations (44) and (45), where (44) specifies that the fractal force can become null under specific conditions related to the differential scale resolution, while (45) represents the state density conservation law at a non-differentiable scale resolution (the incompressibility of the fractal fluid at a non-differentiable resolution scale).
Generally, it is difficult to obtain an analytic solution for the presented system of equations, especially considering its nonlinear nature (through the fractal convection term u_F^l ∂_l u_F^i and the fractal-type dissipation D^kl ∂_l ∂_k u_F^i) and the fact that the fractalization type, expressed through the fractal-type tensor D^kl, is left unknown by design in this representation. In order to explore the multifractal model and its implementation for the study of laser-produced plasma dynamics under free-expansion conditions, we define an association between the expansion of a 3D plasma and that of a complex/fractal fluid. The flow of the 3D fluid has revolution symmetry around the z-axis and will be investigated via its two-dimensional projection in the (x, y) plane.
Choosing the symmetry plane (x, y), the (44)-(45) system becomes the system of Equations (46) and (47). We solve this system by selecting appropriate conditions. Let us note that the existence of a complex phase can lead to the development of a hidden temporal evolution of our complex system. The simple variation of a complex phase implicitly defines a time dependence, which means that our model can be extended to the study of both space and time evolutions. Therefore, the way D_yy is chosen allows both spatial and temporal studies of the dynamics of laser-produced plasmas.
The solution of Equations (46) and (47), for the most general form of the normalized quantities, is given according to the method from [3]. The validity of our approach was verified by performing 3D theoretical modeling (Figure 6) of a complex fluid flow, starting from the exact solution of our system of equations. The complex fluid is defined in the multifractal paradigm of our model as a weighted mixture of various particles with different physical properties. The definition has a larger scope, as parameters such as the fractal dimension, the complex phase, or the specific lengths (x_0, y_0) will encompass within their values the identifiable (unique) properties of each component. Figure 6 presents the structuring of the fluid flow for various values of the complex phase, corresponding to the formation of preferential lines of flow for Θ > 1.5.
In Figure 7, various scenarios for fluid flow are plotted in relation to the composition of the fluid, starting from a uni-particle fluid (equivalent to a pure single-element plasma) and ending with a multicomponent fluid (complex stoichiometry of the plasma). We report on the presence of a separation into multiple structures in all expansion directions (across X and Y). For smaller values of θ, which will be used as a control parameter, we can define a fluid with only one component. This is clearly seen in Figure 7, where we obtain only one fluid structure on the main expansion flow axis. Increasing the value of this parameter leads to changes in the homogeneity of the structural units of the fluid (i.e., the equivalent plasma becomes more heterogeneous in terms of dimension, mass, and energy of the plasma particles). This corresponds to the development of two symmetrical secondary structures mirrored in the plane defined by the main expansion axis. Based on our model, the structures contain mainly structural units with a small physical volume and low kinetic energy. A subsequent increase in the heterogeneity of the fluid leads to the creation of symmetrically situated fluid structures, each defining a selected set of physical properties related to the structural units.

According to our model, the structuring of the laser-produced plasma is a gradual process. In the θ = 0.4-1 range, we obtained the three main structures, while subsequent internal structuring became clear in the θ > 1 range. Let us stress that this is a reversible process, as the distribution often returns to the three-structure configuration. This is a clear representation of the often-reported breathing modes of laser-produced plasmas, which are understood as periodic changes in the shape and structure of the plasma plume based on the chemical reactions occurring in the plasma volume. In the framework of our model, these modes are understood as an attempt to produce a transition towards a completely separate flow annihilated by the interacting fractal forces between the individual fluid structures, unifying the fluid and the structural units of which it is composed.
The structuring of the complex fluid was highlighted by taking cross sections across the X-direction (Figure 8a-f). We report that in the chosen direction, the separation is better seen in the initial states of expansion. It is also worth noting that the spatial separation of the observable structures does not remain constant during the expansion. This leads to the conclusion that each structure is defined by a unique velocity, well in line with the understanding of the multicomponent plasma flow reported empirically. Complementary analyses were performed in the Y-direction. For the cross section on the Y-axis (at X = 0) we again see multiple maxima, which means that the fluid structuring is complex and occurs in both the X- and Y-directions. This corresponds to the unrestricted separation phenomena of the fluid towards a particular axis, observed in all directions. Moreover, the fractality of the system, defined here through θ and µ, can be clearly correlated with the trajectories of the fluid particles, and hence of the plasma particles. Our complex multifractal theoretical approach describes the multi-structuring of a laser-produced plasma from a dynamic point of view. We would like to point out that the developed model presents a rather abstract view of a real problem in technology. In order to validate both the conceptual and mathematical approaches, we performed a series of experiments presented at length in [4-8], where we tackled the direct application of pulsed laser deposition. The theoretical curves follow the empirical data and are encouraging for future developments of the present model as a way of anticipating the behavior of different materials used as targets in pulsed laser deposition.
Conclusions
We reported here a new approach for understanding the fundamentals of laser ablation and transient plasma dynamics through a multifractal paradigm. The monofractal dynamics defined in Nottale's scale relativity theory were extended to multifractal dynamics, developing a multifractal theory of motion. The model was focused on understanding complex phenomena such as multi-structuring and plasma heterogeneity during expansion, and on linking the properties of the generated plasma to those of the ablated material. The angular distribution of the particles in multi-element plasmas was investigated, revealing that the fractality of the system can be a factor in maintaining the stoichiometry of thin films during pulsed laser deposition. The simulations performed agreed well with the existing data in the literature and were in line with recent reports of fundamental processes in laser-produced plasma dynamics.
Throughout this manuscript, we aimed to present for the first time all the multiple branches of the multifractal NSRT model relating to laser-produced plasma, in a unique way. We aimed to provide the first comprehensive description of the model as implemented for laser-produced plasmas. Further developments of the model should be focused on the intricate dynamics of charged particles within the framework of the pulsed laser deposition process. As PLD moves closer to becoming an industrial tool, it becomes imperative to have comprehensive models that can describe the phenomena that exist and elucidate the connections between the properties of the target, laser, and plasma and those of the deposited film, as a complex puzzle that needs to be solved.
Occurrence of antibodies to Anaplasma phagocytophilum in patients with suspected tick-borne encephalitis
Introduction and objectives. Human granulocytic anaplasmosis (HGA) is an emerging tick-borne infectious disease caused by Anaplasma phagocytophilum. In Europe, the first serological evidence of HGA was described in 1995 in Switzerland, and the first clinical case was confirmed in 1997 in Slovenia. Since then, many European countries, including Slovakia, have reported the occurrence of HGA. The aim of this study was to examine the occurrence of IgG antibodies against A. phagocytophilum in blood sera of humans with suspected tick-borne encephalitis. Material and methods. 181 people were examined for the presence of anti-A. phagocytophilum IgG antibodies; 113 were patients with suspected TBE (65 males, 48 females), and 68 from the control group (18 males, 50 females). Respondents were aged 2–80 years (mean age: 31.39; STD: 17.1). Anti-A. phagocytophilum IgG antibodies were detected by the IFA IgG test. Relative risks (RR) and their 95% confidence intervals (95% CI) were estimated for the occurrence of IgG A. phagocytophilum antibodies. Results. Of the total number of 181 people examined, 32 (17.7%) were positive for IgG antibodies against A. phagocytophilum, 22 of whom were patients with suspected TBE (19.5%) and 10 of whom were from the control group (14.7%). The RR of occurrence of IgG A. phagocytophilum antibodies was 1.3-times higher in the patients with suspected TBE than in the control group. Conclusion. None of the examined patients with suspected TBE had the disease confirmed. However, as shown by the results, the relative risk of occurrence of anaplasmosis is higher in people examined for another vector-borne disease (in this case TBE). Therefore, the performance of screening examinations in patients suspected of having any tick-borne disease is very important, especially in the case of negative results, not only for TBE and Lyme borreliosis, but also for anaplasmosis.
INTRODUCTION
Human granulocytic anaplasmosis (HGA) is an emerging tick-borne infectious disease caused by Anaplasma phagocytophilum (formerly Ehrlichia phagocytophila) [1]. This gram-negative obligate intracellular pathogen was first identified in humans in 1990, when a patient from Wisconsin in the USA died of an acute febrile illness 2 weeks after a tick bite. The pathogen was isolated in 1994 by polymerase chain reaction (PCR) and its taxonomic name was changed in 2001 to the current form of A. phagocytophilum [1,2].
In Europe, the first serological evidence of HGA was described in 1995 in Switzerland [3], and the first clinical case was confirmed in 1997 in Slovenia [4]. Since then, many European countries have reported the occurrence of HGA [5,6,7,8,9,10,11]. In Slovakia, the first case of HGA was confirmed in a 54-year-old hunter from the northern part of the country [12].
A. phagocytophilum has a tropism for granulocytic cells, where the bacteria replicates within the host cell vacuoles to form microcolonies called morulae [13].
The clinical presentation of HGA is generally an acute nonspecific febrile illness that consists of a high fever (>39 °C), headache, malaise, generalized myalgia and arthralgia. Other symptoms, such as nausea, abdominal pain, diarrhea, cough and meningitis, are less commonly reported [14].
Anaplasmosis is usually transmitted to humans and animals through the bite of an infected tick. In Europe, including Slovakia, the main vector of A. phagocytophilum is the common tick, Ixodes ricinus, which can also transmit other pathogens, such as the tick-borne encephalitis (TBE) virus, the bacterium Borrelia burgdorferi, and the spotted-fever group bacteria Rickettsia [15,16]. Of these, tick-borne encephalitis is currently the most important disease transmitted by Ixodes ricinus ticks in central Europe. The disease typically takes a biphasic course. The initial symptoms are non-specific and may include fever, malaise, anorexia, muscle pain, headache and nausea or vomiting, i.e., symptoms similar to those of HGA. The second phase of the disease occurs in 20–30% of patients and involves the central nervous system, with symptoms of meningitis, encephalitis or meningoencephalitis.
OBJECTIVE
The aim of this work was to study the occurrence of IgG A. phagocytophilum antibodies in the blood sera of humans with suspected TBE, and to compare the obtained results with prevalence in the control group.
MATERIALS AND METHODS
A total of 181 human serum samples were examined for the presence of IgG antibodies against A. phagocytophilum. 113 patients (65 males, 48 females) were selected on the basis of clinical symptoms observed during differential diagnosis and were examined for tick-borne encephalitis. For comparison, 68 healthy individuals (18 males, 50 females) who denied having any contact with ticks and who showed no clinical signs of vector-borne diseases were also examined. The examined patients were between 2 and 80 years of age (mean age: 31.39, STD: 17.1).
Blood from patients suspected of having tick-borne encephalitis was taken by neurologists and infectologists and sent to a virology laboratory, where the samples were examined for TBE using a complement fixation test. Residual sera were subsequently delivered to our institute, where they were stored at -20 °C until tested in the laboratory.
Anti-A. phagocytophilum IgG antibodies were detected using an Indirect Immunofluorescence Antibody (IFA) IgG test (Focus Diagnostics, California, USA). The IFA assay is a two-stage 'sandwich' procedure: in the first stage, the patient's serum is diluted in PBS, placed on a slide in contact with the substrate, and incubated. Following incubation, the slide is washed in PBS to remove unbound serum antibodies. In the second stage, each antigen well is overlaid with fluorescein-labelled antibody to human IgG. The slide is incubated, allowing the antigen-antibody complexes to react with the fluorescein-labelled anti-human IgG. After the slide has been washed, dried, and mounted, it is examined using fluorescence microscopy.
Positive reactions appear as an apple-green fluorescence of the morulae. Semi-quantitative endpoint titers are obtained by testing serial dilutions of positive specimens. The serum screening dilution was 1:64, according to the test producer.
Statistical analysis. Basic descriptive statistics were used to analyse the obtained results. Relative risks (RR) and their 95% confidence intervals (95% CI) were estimated for the occurrence of IgG A. phagocytophilum antibodies. The contributions of gender and risk group to the prevalence of A. phagocytophilum antibodies were assessed using a logistic regression model. Statistical significance was defined as a p-value <0.05.
RESULTS
In the case of a positive immunological reaction to the presence of antibodies against A. phagocytophilum, the apple-green fluorescence of the morulae was detected. Patients whose sera reacted at the titre 1:64 and higher were considered to be positive.
Of the total number of 181 people included in the study, 32 (17.7%) showed positivity for IgG antibodies against A. phagocytophilum; 22 of them were patients with suspected TBE (19.5%) and 10 were from the control group (14.7%). The highest positivity was detected in males and females with suspected TBE (20% and 18.8%, respectively), while positivity was 18% in females and 5.6% in males from the control group (Tab. 1).

None of the 22 anti-A. phagocytophilum IgG antibody-positive patients with suspected TBE had TBE confirmed (Tab. 2).

Upon comparing the relative risk of occurrence of IgG A. phagocytophilum antibodies in the group of patients with suspected TBE and the control group, it was found that the risk of infection was almost 1.3-times higher in patients with suspected TBE than in the control group. This risk was 3.6-times higher for males with suspected TBE compared with males in the control group, whereas for females with suspected TBE and females from the control group the relative risk was approximately the same. Overall, no significant difference was observed between positive cases in the group of people with suspected TBE and the control group (Tab. 3).
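As an illustration of the statistical analysis described above, the relative risks and their 95% confidence intervals can be reproduced from the reported counts with a few lines of Python. The subgroup counts are inferred from the reported percentages (e.g., 20% of 65 males with suspected TBE corresponds to 13 positives), and the log-normal approximation for the confidence interval is an assumption, since the CI method is not stated explicitly.

```python
import math

def relative_risk(exposed_pos, exposed_n, control_pos, control_n, z=1.96):
    """Relative risk of antibody positivity with a 95% CI (log-normal approximation)."""
    rr = (exposed_pos / exposed_n) / (control_pos / control_n)
    # standard error of ln(RR) for two independent proportions
    se = math.sqrt(1 / exposed_pos - 1 / exposed_n + 1 / control_pos - 1 / control_n)
    ci = (math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se))
    return rr, ci

# Overall: 22/113 positives among patients with suspected TBE vs. 10/68 controls
print(relative_risk(22, 113, 10, 68))   # RR ~ 1.32
# Males: 20% of 65 = 13 positives vs. 5.6% of 18 = 1 positive
print(relative_risk(13, 65, 1, 18))     # RR ~ 3.6
# Females: 18.8% of 48 = 9 positives vs. 18% of 50 = 9 positives
print(relative_risk(9, 48, 9, 50))      # RR ~ 1.04
```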
DISCUSSION
Tick-borne diseases are the most common vector-borne diseases in Europe. Lyme borreliosis, tick-borne encephalitis, Crimean-Congo haemorrhagic fever and rickettsiosis are endemic in certain regions of Europe. Lyme borreliosis and tick-borne encephalitis are of primary importance in public health, but the overall burden of these tick-borne diseases in Europe remains unclear [17]. TBE is endemic across much of Central and Eastern Europe. The reported incidence of the disease is increasing, with numbers estimated to be as high as 8,755 cases per year [18]. In Slovakia, around 60-80 cases are reported annually. The main vectors of the TBE virus in Europe are ticks of the family Ixodidae, mainly Ixodes ricinus (Central, Northern and Eastern Europe) and Ixodes persulcatus (parts of the Baltic States, Finland, Russia, Siberia). Competent reservoir hosts are mainly small rodents (voles, mice), but also insectivores and carnivores. Hosts that support virus circulation indirectly by enabling tick reproduction are different species of wild and domestic mammals (foxes, bats, hares, deer, wild boar, sheep, cattle, goats, and dogs). Humans are incidental and dead-end hosts. In addition to being bitten by an infected tick, in endemic areas humans can also acquire TBE infection by consuming infected raw dairy products [19,20].
The same ticks that transmit TBE in Slovakia can also transmit other pathogens, including Anaplasma phagocytophilum. Therefore, simultaneous infection with multiple organisms is possible.
The real infection rate of HGA in Europe is still difficult to establish. Seroprevalence rates range from zero up to 28.0% [21]. No official epidemiological data on the prevalence of this infection in the human population are available in Slovakia. Only a few studies have been published relating to anaplasmosis, with reported prevalences ranging from 7% to 25% [22,23]. The total prevalence of A. phagocytophilum antibodies in the presented sample (17.7%) corresponds with the findings of these studies. Despite a higher number of positive cases in the group of patients with suspected TBE, no significant difference in the occurrence between the study and control groups was found. The most probable reason seems to be the disproportion in the gender ratio in the control group (three times more females than males), which affected the overall value of the prevalence of A. phagocytophilum antibodies in the control group.
In the presented study, the highest positivity was detected in the group of patients with suspected TBE. There are several studies that confirm the possible co-infection with multiple vector-borne pathogens. In Europe, these combinations also include infections by A. phagocytophilum and the TBE virus, but the frequency of simultaneous diseases is generally low. In the current study, none of the examined patients with suspected TBE had the disease confirmed. However, as shown by the results, the relative risk of occurrence of anaplasmosis is higher in people who are examined for some other vector-borne disease (in this case TBE). Therefore, performing screening examinations in patients suspected of having any tick-borne disease is very important, especially in the case of negative results, not only for TBE and Lyme borreliosis, but also for anaplasmosis.
Laboratory gas-phase infrared spectra of two astronomically relevant PAH cations: diindenoperylene, C$_{32}$H$_{16}$$^+$ and dicoronylene, C$_{48}$H$_{20}$$^+$
The first gas-phase infrared spectra of two isolated astronomically relevant and large PAH cations - diindenoperylene (DIP) and dicoronylene (DC) - in the 530$-$1800 cm$^{-1}$ (18.9$-$5.6 $\mu$m) range - are presented. Vibrational band positions are determined for comparison to the aromatic infrared bands (AIBs). The spectra are obtained via infrared multiphoton dissociation (IRMPD) spectroscopy of ions stored in a quadrupole ion trap (QIT) using the intense and tunable radiation of the free electron laser for infrared experiments (FELIX). DIP$^{+}$ shows its main absorption peaks at 737 (13.57), 800 (12.50), 1001 (9.99), 1070 (9.35), 1115 (8.97), 1152 (8.68), 1278 (7.83), 1420 (7.04) and 1550 (6.45) cm$^{-1}$($\mu$m), in good agreement with DFT calculations that are uniformly scaled to take anharmonicities into account. DC$^+$ has its main absorption peaks at 853 (11.72), 876 (11.42), 1032 (9.69), 1168 (8.56), 1300 (7.69), 1427 (7.01) and 1566 (6.39) cm$^{-1}$($\mu$m), that also agree well with the scaled DFT results presented here. The DIP$^+$ and DC$^+$ spectra are compared with the prominent infrared features observed towards NGC 7023. This results both in matches and clear deviations. Moreover, in the 11.0$-$14.0 $\mu$m region, specific bands can be linked to CH out-of-plane (oop) bending modes of different CH edge structures in large PAHs. The molecular origin of these findings and their astronomical relevance are discussed.
Introduction
Strong emission features at 3.3, 6.2, 7.7, 8.6, and 11.2 μm dominate the infrared (IR) spectrum of many astronomical sources. These bands are commonly known as the aromatic infrared bands (AIBs) and are generally attributed to IR fluorescence of large (roughly containing more than 40 C atoms) polycyclic aromatic hydrocarbon (PAH) molecules and their related families. These bands are emitted upon ultraviolet (UV) excitation of these species (Sellgren 1984;Allamandola et al. 1989;Puget & Leger 1989). PAHs are found to be abundant and ubiquitous, and they are expected to account for ∼10% of the cosmic carbon (Tielens 2008). They play an important role in the energy and ionization balance of the interstellar medium (ISM) and may serve as a catalyst for the formation of molecular H2 in photo-dissociation regions (PDRs; Tielens 2013, and references therein).
The AIBs have been interpreted as the cumulative spectrum of a family of PAHs and PAH cations as well as PAH derivatives (Tielens 2008, and references therein). For this reason, a variety of neutral and ionized PAH molecules with different sizes and structures have been studied, both by theory and in the laboratory, to link specific (sets of) AIB features to (modes of specific) carriers (Sloan et al. 1999;Hony et al. 2001;Malloci et al. 2004;Pathak & Rastogi 2008, for example). Experimentally, IR emission gas-phase spectra are available from studies by Cook et al. (1998) and Kim & Saykally (2002). Infrared absorption spectra of PAHs and PAH cations have been recorded in rare gas matrices (e.g., Hudgins & Allamandola 1995;Mattioda et al. 2003;Bernstein et al. 2007;Tsuge et al. 2016). In the gas phase, spectra have been measured using infrared multiphoton dissociation (IRMPD; Oomens et al. 2001), messenger atom photo-dissociation spectroscopy (Piest et al. 1999;Ricks et al. 2009), and the ion-dip technique (Maltseva et al. 2015, 2016). The majority of the available studies have focused on smaller PAHs, like naphthalene and coronene, that are commercially available and relatively easy to handle in a laboratory setting. Spectra of large and astronomically more relevant PAHs (from ∼40 C atoms upwards) are much rarer (Kokkin et al. 2008;Zhen et al. 2016). Recently, the first gas-phase IR spectra of a large PAH cation, HBC or hexa-peri-hexabenzocoronene (C42H18), and its dication have been reported by Zhen et al. (2017).
Tielens (2013) first proposed the concept of "grandPAHs" as a set of the chemically most stable PAH species that are able to survive the harsh conditions of the ISM. This idea was motivated by the highly similar AIB spectra observed toward very different interstellar sources, as well as by the limited number of bands in the 15-20 μm range, a region in which structural features of PAHs are expected to show up (Boersma et al. 2010). The "grandPAHs" concept is currently under further investigation (e.g., Andrews et al. 2015;Peeters et al. 2017). If "grandPAHs" indeed abundantly exist in the ISM, this would simplify the picture for the scientific community, as only a rather limited number of species would be important to characterize (Tielens 2013;Andrews et al. 2015;Croiset et al. 2016). The present study adds to this discussion by presenting the gas-phase IRMPD spectra of two other large PAH cations: diindenoperylene (DIP, C32H16, m/z=400.13) and dicoronylene (DC, C48H20, m/z=596.16). The molecular geometries are shown as insets in Figure 1. These spectra are compared to both density functional theory (DFT) predictions and astronomical spectra.
Experimental and Theoretical Methods
The experiments described here have been performed with iPoP, our fully mobile "instrument for Photo-dissociation of PAHs" (Zhen et al. 2014b). The set-up has been used at one of the end stations of FELIX, the Free Electron Laser for InfraRed eXperiments at Radboud University (Oepts et al. 1995). The information on the experimental procedures is available from (Zhen et al. 2017). Here, only the relevant details are provided.
The two central parts of iPoP are a quadrupole ion trap (QIT) and a time-of-flight (TOF) mass spectrometer. Gas-phase DIP and DC precursor molecules are obtained by heating commercially available powder (Kentax, purity higher than 99.5%) in an oven that can be preset to a value very close to the sublimation temperature: ∼480 K for DIP and ∼670 K for DC. The gas-phase neutrals are ionized using electron impact ionization, typically with 83eV electron impact energy. The resulting DIP + and DC + cations enter in the QIT via an ion gate and are trapped by applying a 1 MHz radiofrequency electric field (3000 and 3280 V p-p ) onto the ring electrode. Helium that is continuously introduced into the ion trap thermalizes the ion cloud.
Spectra between 530 and 1800 cm −1 (18.9−5.6 μm) are obtained by means of IRMPD, using the intense and tunable radiation of FELIX. This free electron laser delivers 5 μs long macro-pulses of light with an energy up to 100 mJ at a repetition rate of 10 Hz. Its bandwidth (full-width-at-half-maximum, FWHM) amounts to about 0.6% of the central wavelength (i.e., ranging from 3 to 10 cm −1 ). Resonant vibrational excitation through multiple photon absorptions causes the PAH cation to fragment. Subsequently, the fragments are extracted from the QIT and analyzed mass spectrometrically in a TOF mass spectrometer. All (stronger) channels resulting from the photo-fragmentation of the parent are summed up and this value is normalized to the total signal, i.e., the value found for parent plus photo-fragment ions. This results in a relative photo-fragmentation intensity. The IR spectrum of the trapped ion is then obtained by recording the fragment ion yield as a function of the wavelength.
The ion cloud is irradiated for 0.8s (8 macro-pulses) to obtain a single IRMPD mass spectrum. The average of 25 individual mass spectra is taken for a data point at a single wavelength. A spectrum is obtained by averaging mass spectra while tuning the free electron laser with 5 cm −1 steps. This value is close to the absolute wavelength accuracy of the system. In addition, blank spectra are recorded in order to subtract fragmentation signals that are present in absence of IR radiation, i.e., upon electron impact ionization only.
The recorded spectra are normalized to the pulse energy, which, in the best case, offers only a first-order correction, as IRMPD is a nonlinear process. Signal intensities, therefore, should be handled with care, as these reflect more the dissociation efficiency upon excitation of a specific vibrational mode than its IR band strength. Moreover, as high excitations are involved in the detection process, anharmonic effects may shift absorption wavelengths by as much as several cm −1 (Oomens et al. 2001). A direct comparison with astronomical data, therefore should be done with care: intensities may vary and wavelength positions may be somewhat off-set. For an indepth comparison, detailed theoretical studies on the effects of anharmonicity in highly excited PAHs (Parneix et al. 2013) are required, which is currently outside of the realm for large PAHs.
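The reduction from raw mass spectra to a relative photo-fragmentation intensity described above can be summarized in a short sketch; the function and variable names are illustrative rather than part of any released analysis code, and the division by the macro-pulse energy implements only the first-order correction mentioned in the text.

```python
import numpy as np

def photofragmentation_intensity(mass_spectra, fragment_channels, parent_channels,
                                 blank_intensity=0.0, pulse_energy_mJ=100.0):
    """Relative photo-fragmentation intensity for one FELIX wavelength setting.

    mass_spectra      : array (n_shots, n_mass_bins); here 25 spectra are averaged.
    fragment_channels : indices of the (stronger) photo-fragment mass peaks.
    parent_channels   : indices of the parent-ion mass peak(s).
    """
    mean_spectrum = np.asarray(mass_spectra).mean(axis=0)   # average of the 25 spectra
    fragments = mean_spectrum[fragment_channels].sum()
    parent = mean_spectrum[parent_channels].sum()
    # fragment signal normalized to the total (parent + fragment) ion signal
    rel = fragments / (fragments + parent)
    # subtract the fragmentation already present without IR irradiation (blank spectra)
    rel -= blank_intensity
    # first-order correction for the macro-pulse energy (IRMPD itself is nonlinear)
    return rel / pulse_energy_mJ
```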
To further guide the spectral interpretation of the experimental spectra, DFT calculations are performed with the B3LYP functional and the 6-31G** basis set, using Gaussian 09. It has been shown that multiple scaling factors are needed when a larger basis set than the standard 4-31G is used (Langhoff 1996;Buragohain et al. 2015). In this work, a uniform scaling of 0.961 is applied to all modes, which suffices to unambiguously assign the broad bands in the IRMPD spectra. For both DIP+ and DC+, only the doublet state is considered, as this state is lowest in energy (Bauschlicher & Bakes 2010a;Bauschlicher et al. 2010b).

Figure 1 shows the mass spectra for DIP+ (Figure 1(A)) and DC+ (Figure 1(B)) for fragments formed upon electron impact ionization only (upper graphs) and additional photo-excitation (lower graphs) using 1275 cm −1 photons (DIP+) and 1300 cm −1 photons (DC+). Due to the impacting electrons, the mass spectra before irradiation (labeled as background in the upper graphs) exhibit a small amount of residual fragmentation. The mass signals also comprise small contributions from fragments containing 13C isotopes. The mass spectrum in Figure 1(A) shows that the DIP+ photo-dissociation pattern preferentially follows sequential H2 (or 2H) loss channels, similarly to what was observed for HBC+ by Zhen et al. (2014b), but the hydrogen stripping is not complete and no bare C32+ (m/z=384) signal is observed. In Figure 1(A), a (much weaker) carbon loss channel is also clearly found, resulting in noisy signals around the mass of C30H8+ (m/z=368). Figure 1(B) shows the mass spectrum of DC+. The photo-induced fragmentation pattern visualizes a more complete dehydrogenation process compared to DIP+, and even the fully dehydrogenated, i.e., bare carbon (C48+, m/z=576), geometry is observed. No detectable evidence is found for a loss channel involving carbon.
Results and Discussion
As pointed out by earlier studies (Ekern et al. 1998;Zhen et al. 2014a), large PAH cations will initially fragment through rapid H-loss, leaving a bare carbon skeleton. The involved fragmentation kinetics is controlled by the molecular absorption properties of the parent species and the competition of the different processes involved; e.g., their relative rates, which are set by the energy barriers and the tightness of the transition states involved (Tielens 2008). This competition between the different fragmentation channels is known to depend on size as, in contrast to large PAHs, small PAHs already start to lose C 2 units before all H atoms are lost (Ekern et al. 1998;West et al. 2014). Molecular geometry may also affect the fragmentation behavior as the binding energy per C-atom is higher for compact PAHs than for non-compact PAHs (Ricca et al. 2012). Our spectra show differences in fragmentation behavior for these two species with DIP + losing some C 2 before H-loss is complete while DC + does not (Figure 1). Whether this reflects the difference in size or molecular geometry remains to be determined. We do conclude, though, that IRMPD provides an alternative way to study the fragmentation patterns of highly excited PAHs. In space, PAH evolution will be a competition between UV-driven fragmentation and reactions with H atoms (Vuong & Foing 2000;Le Page et al. 2003). Further IRMPD studies may serve as inputs for astronomical models for the evolution of interstellar PAHs (Andrews et al. 2015;Berné et al. 2015).
In Figure 2, the resulting IR spectra for DIP + (panel A) and DC + (panel B) between 530 and 1800 cm −1 are presented. The molecular geometries are repeated for convenience. The two spectra exhibit a number of clearly resolved vibrational bands with reasonable signal-to-noise ratios. Both relatively narrow and broader bands are found. Broadening is caused by overlapping bands or may be due to anharmonic effects. This becomes clear from the scaled harmonic DFT predictions for the vibrational band positions that are incorporated as sticks in the figure. Table 1 summarizes all vibrational and computed wavelengths. Generally, one can see a quite convincing agreement between experiment and theory. The vibrational assignment of the experimental bands, therefore, is based on the theoretical predictions. A comparison of the involved intensities is less obvious, as stated before, because the IRMPD intensities do not directly reflect absorption strengths. This is discussed in more detail below.
In Figure 2(A) and Table 1, a comparison of our experimental IRMPD DIP+ spectrum with the theoretical predictions is presented. Assignments can be made based on a smallest wavelength difference criterion. This also works for bands that are very close, resolved in the calculations, but not resolved in the measured spectra. For example, the broad experimental band around 1278 cm −1 overlaps with theoretically predicted stronger bands at 1232, 1308, and 1333 cm −1 . It is very likely that the 1278 cm −1 band also includes bands at 1248, 1287, and 1313 cm −1 . This explains why the band is both intense and broad, with a FWHM of around 120 cm −1 . Figure 2(B) and Table 1 provide similar information for DC+. The DC+ experimental and theoretical results are also rather similar, and the recorded bands are assigned by looking for (close) matches with the predicted ones. For the theoretical result of DC+, we note that in the out-of-plane (oop) modes region (11-14 μm), DC+ shows a relatively strong "solo" mode at 870 cm −1 (compared to the longer-wavelength oop modes), even though this species has only 4 "solo" Hs as compared to 16 "duo" Hs; in contrast, in Figure 2(B), we note that the "duo" oop mode in DC+ at 853 cm −1 appears to be as strong as the "solo" oop mode at 876 cm −1 . This may indicate that the nonlinear aspect of the IRMPD technique, rather than anharmonicity effects, plays the major role in setting the observed intensities, as the experimental "solo" modes are as strong as the "duo" modes.
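To make the smallest-difference assignment concrete, the following sketch scales a set of B3LYP/6-31G** harmonic frequencies by the 0.961 factor used here and pairs each experimental DIP+ band with the nearest scaled mode. The harmonic frequencies listed are placeholders only; the real values come from the Gaussian 09 output.

```python
import numpy as np

# Experimental DIP+ band centers from this work (cm^-1)
exp_bands = np.array([737, 800, 1001, 1070, 1115, 1152, 1278, 1420, 1550])

def assign_bands(harmonic_freqs_cm, exp_bands_cm, scale=0.961):
    """Scale harmonic frequencies and pair each experimental band with the
    closest scaled mode (smallest wavenumber-difference criterion)."""
    scaled = scale * np.asarray(harmonic_freqs_cm, dtype=float)
    pairs = []
    for band in exp_bands_cm:
        i = int(np.argmin(np.abs(scaled - band)))
        pairs.append((band, scaled[i], 1e4 / band))  # last entry: band position in micron
    return pairs

# placeholder harmonic frequencies; not actual DFT results
harmonic_freqs = [770, 830, 1040, 1110, 1160, 1200, 1330, 1480, 1610]
for exp, theo, micron in assign_bands(harmonic_freqs, exp_bands):
    print(f"{exp:6.0f} cm-1 ({micron:5.2f} um)  <->  scaled DFT {theo:7.1f} cm-1")
```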
A comparison between Figures 2(A) and (B) shows clear differences. DIP+ exhibits multiple CH in-plane bending vibration bands (1001, 1070, 1115, and 1152 cm −1 ) in a range (1000-1200 cm −1 ) where DC+, and also HBC+, show only two resolved bands (Zhen et al. 2017). The origin of these features can be traced back to a combination of CH in-plane and CC stretch modes involving also, but not exclusively, the two pentagons in the DIP geometry. Thus, if this is confirmed in other PAHs, these bands can offer a tool to identify PAHs with pentagon structures. Another pronounced difference is found around 1600 cm −1 ; whereas HBC+ (Zhen et al. 2017) only exhibits a relatively weak feature in this range, both DIP+ and DC+ have much stronger bands here. Further studies are required to assess whether this reflects the less symmetric structure of the species studied here, which could "activate" more CC modes than for the more highly symmetric HBC+.
As a general remark, we mention that, despite the fact that a scaling factor has been applied to correct the DFT calculations for anharmonicities, most experimentally measured band positions of DIP+ and DC+ are shifted by several wavenumbers with respect to the calculated positions. Similar shifts were reported in other studies (Oomens et al. 2003). The scaling factors are based on a comparison between DFT calculations and low-temperature matrix isolation studies (Langhoff 1996). As stated earlier, the IRMPD (absorption) process results in highly excited species and will therefore induce an additional (red) shift. The size of this shift depends on the level of anharmonicity and the temperature of the involved modes (Chen et al. 2016;Candian & Mackie 2017). With this qualifying remark in mind, the comparison of our experimental results with the theoretical predictions is very encouraging, with deviations of the order of a few percent.
Astrophysical Relevance
The IRMPD experiments described here, in combination with the theoretical predictions, yield band positions of vibrational bands that can be compared with astronomical data. The laboratory band positions suffer to some extent from anharmonic effects that, however, can be taken into account, also because the observed bands are generally rather broad, of the order of 20−40 cm −1 , comparable to typical AIB values (Tielens 2008;Bauschlicher & Bakes 2010a;Bauschlicher et al. 2010b). However, the laboratory band intensity values have to be used with care, as-unlike direct absorption spectroscopy-IRMPD is based on a nonlinear process. We also note that observed AIB intensities will be affected by the excitation temperature of the emitting interstellar PAHs, which can be quite different from those in our laboratory study (Verstraete et al. 2001;Boersma et al. 2011;Chen et al. 2016;Candian & Mackie 2017).
In Figure 3, the obtained experimental IRMPD spectra for DIP + and DC + are replotted together with the previously reported spectrum of HBC + (Zhen et al. 2017). Also included in the figure (top row) is a typical AIB emission spectrum as observed with Spitzer toward NGC 7023 (Houck et al. 2004;Werner et al. 2004). The DC + spectrum exhibits clear resemblances with the AIB spectrum; the strong bands at 6.2, 7.7, 8.6, and 9.7 μm closely match the position (and even intensity profile) observed in the AIB spectrum. The spectral patterns of DIP + and HBC + show much less agreement, even when taking into account that the laboratory intensities do not need to match.
In the geometries shown in Figure 3, the numbers of adjacent CH groups per aromatic ring are given. 6 Experimental and theoretical studies on small PAHs have shown that the pattern of the oop CH bending modes is very characteristic for the number of adjacent CH groups (Hony et al. 2001), and quantum chemical studies have extended this finding to large PAHs (Boersma et al. 2014). The number of adjacent CH groups clearly differs for the three PAH cations shown in Figure 3. Comparing the spectra to the observed bands in the range of the CH oop bending modes (11.0-14.0 μm) of NGC 7023-which shows very clear (major) emission features at 11.2, 12.0, 12.7, and 13.5 μm-we conclude that the interstellar features are close to several of the observed laboratory bands: the interstellar 11.2 μm band is very close to the 11.42 μm band due to "solo" CH-structure bending modes in DC + ; the interstellar 12.0 μm band is not far from the 11.72 μm and 12.50 μm band due to "duo" CH-structure bending modes in DC + and DIP + ; the interstellar 13.1 μm band is close to the 12.7 μm band due to "trio" CH-structure bending Notes. a Theoretical band positions have been corrected by a scaling factor of 0.961 to account for anharmonicity. b Relative intensities are indicated as w, m, s for weak, medium, strong. c Experimental band position uncertainties are estimated to be A (1∼3 cm −1 ), B (3∼6 cm −1 ) and C (15∼30 cm −1 ). d FWHM (cm −1 ). e We give approximate values. f Theoretical intensities are presented, only transitions with intensities larger than 20 km mol −1 are listed.
6 1 is indicative of aromatic rings carrying CH groups that have no neighboring CH groups (termed "non-adjacent" or "solo" CH groups); likewise, 2 is indicative of two adjacent CH groups ("doubly adjacent" or "duo" CH's); 3 is indicative of three adjacent CH groups ("triply adjacent" or "trio" CH's); and 4 is indicative of four adjacent CH groups ("quadruply adjacent" or "quartet" CH's; Hony et al. 2001). modes in HBC + ; the interstellar 13.5 μm band nearly coincides with the 13.57 μm band due to "quartet" CH-structure bending modes in DIP + . As stated earlier, the AIB spectrum is thought to be composed of the individual spectra of a family of different PAHs that will have different absorption efficiencies and that will be present in space in different abundances. A one-toone comparison of a PAH and the AIBs, for that reason, is unlikely to result in a full reproduction of the astronomical spectrum. Nevertheless, we note that DC + gets very close, a finding that is in agreement with theoretical studies that show that IR spectra of large compact PAHs provide the better agreement with astronomical observations (Ricca et al. 2012).
Is it possible to extract more general findings from these data? Typically, many different PAHs exhibit more or less similar spectral features, as similar vibrational modes are involved. Nevertheless, differences can be found, as apparent in Figure 3. Even though premature, on the basis of only the few spectra presented here, one could conclude that a larger PAH, compact and possibly better resistant against UVinduced dissociation of its carbon frame, may be a more likely carrier of the AIBs than smaller or less compact PAHs. A preliminary conclusion based on the small data set presented here would be that "GrandPAHs" must be compact (Tielens 2013; Andrews et al. 2015;Croiset et al. 2016). This would make sense, but definitely needs to be tested by investigating other PAHs.
Conclusions
The IR spectra of DIP+ and DC+, in the 530−1800 cm −1 (18.9−5.6 μm) IR fingerprint region, have been measured and compared to DFT-calculated spectra and to measured interstellar spectra. The applied IRMPD technique provides useful band positions for IR active modes, despite inherent inaccuracies such as band shifts or nonlinear band intensities. A comparison of the recorded spectra with the AIBs supports, but is not conclusive evidence for, the possibility that large compact PAHs act as AIB carriers. The spectra also demonstrate that the general rules for the CH oop bending modes derived for small PAHs also apply to large PAHs and allow us to interpret the AIB bands in the 11.0-14.0 μm range as being due to CH oop bending modes associated with "solo", "duo", "trio", or "quartet" structures on the PAH edges. Finally, we note that IRMPD also provides a tool for the study of the fragmentation behavior of PAHs and that even large PAHs are amenable to study using an IR laser facility.
The authors gratefully acknowledge the FELIX staff for their technical assistance during preparation and the actual beamline shifts. Studies of interstellar chemistry at Leiden Observatory are supported through advanced-ERC grant 246976 from the European Research Council, through a grant by the Netherlands Organisation for Scientific Research (NWO) as part of the Dutch Astrochemistry Network, and through the Spinoza premie. J.Z. acknowledges financial support from the National Science Foundation of China (NSFC, grant No. 11743004).

Figure 3. IR spectra of DC+, HBC+, and DIP+ (lower three panels) compared to one representative AIB emission spectrum toward NGC 7023 (upper panel). The latter has been recorded using Spitzer; the mid-IR spectra were taken with the short-low (SL) module covering the wavelength range 5.1-14.2 μm (AORs 3871488 and 3871744) (Houck et al. 2004;Werner et al. 2004). The numbers in the molecular geometries, shown on the right, indicate the number of adjacent CH groups per aromatic ring.
Toward Diverse Precondition Generation
A typical goal for language understanding is to logically connect the events of a discourse, but often connective events are not described due to their commonsense nature. In order to address this deficit, we focus here on generating precondition events. Precondition generation can be framed as a sequence-to-sequence problem: given a target event, generate a possible precondition. However, in most real-world scenarios, an event can have several preconditions, which is not always suitable for standard seq2seq frameworks. We propose DiP, the Diverse Precondition generation system that can generate unique and diverse preconditions. DiP consists of three stages of the generative process – an event sampler, a candidate generator, and a post-processor. The event sampler provides control codes (precondition triggers) which the candidate generator uses to focus its generation. Post-processing further improves the results through re-ranking and filtering. Unlike other conditional generation systems, DiP automatically generates control codes without training on diverse examples. Analysis reveals that DiP improves the diversity of preconditions significantly compared to a beam search baseline. Also, manual evaluation shows that DiP generates more preconditions than a strong nucleus sampling baseline.
Introduction
Preconditions are an important part of language understanding with numerous applications, ranging from event understanding to story generation. They provide the semantic glue to understand (or generate) the chains of events common in narrative text. How can we build intelligent systems to fill in these chains, or to identify semantically related events in context? Kwon et al. (2020) took a first step by introducing a precondition generation task, where given a target event mention the goal is to generate text that describes a precondition for the target. They released the 'PeKo' dataset for training, and showed that a GPT-2 model can be fine-tuned on input/output sequence pairs. While PeKo is useful, it is constrained by annotating a single relation for each target event. This is in contrast to the real world, where most events have many preconditions. For example, "opening a door" has several preconditions like approaching the door, turning a key in the door, and pushing the door. PeKo's annotation limits the ability of models to learn to generate multiple and diverse preconditions. In this work, we address the challenge of generating more preconditions for each target event while still maintaining quality.
Generating non-repetitive, diverse outputs is a challenge for any conditional language generation system. Our analysis of the GPT-2 based model shows that this is also the case for preconditions. Table 1 shows the top-ranked preconditions for an example event. Standard sampling techniques produce high levels of lexical and semantic redundancy. In the absence of any explicit mechanisms to force diversity, the model just produces minor variations of the same event as preconditions. To obtain diverse candidate preconditions, we have to start looking lower in the model's ranked lists of probable preconditions, thereby sacrificing quality.
How can we induce a model to generate diverse outputs without losing quality? Context sensitivity might help with quality, but it also hinders diversity. To address this, we introduce a three-stage generative process, which we call DiP. In the first stage, DiP uses an event sampler whose only goal is to generate event trigger words as precondition candidates. In the second stage, DiP forces the generative model to use the candidate triggers from the first stage to produce the full description of the precondition event. In the third stage, DiP re-ranks and filters the generated descriptions using a precondition classifier (also trained on the same training data); we will release the source code upon acceptance. The three-stage process is described in detail below.

Experiments on the PeKo dataset show that DiP produces more diverse and better-quality preconditions compared to standard beam decoding, as well as to an iterative filtering extension that applies a standard repetition penalty in a sampling strategy. Analyses show that DiP is able to better balance the need for diversity against quality. While the iterative repetition penalty method generates lexically diverse outputs, it often introduces irrelevant information rather than producing distinct types of preconditions. Our human evaluation shows that DiP, on the other hand, is able to produce text that is more likely to describe preconditions.
Related Work
Most work on logical preconditions has focused on identification/extraction from text. For example, Sil et al. (2010) identified preconditions using a SVM-based score function with hand-crafted PMI and WordNet based features. Branavan et al. (2012) extracted domain-specific precondition relations from instructions for the game of Minecraft. This paper is instead focused on generating novel preconditions. To the best of our knowledge, only the prior PeKo work (Kwon et al., 2020) has attempted this. We are building on those initial ideas.
There has been research on diverse generation using control codes or latent variables. Some works use explicit cues to control text generation. Huang et al. (2018) used emotion embeddings to generate dialogue responses in a specific mood. Keskar et al. (2019) trained a LM with human-readable control codes, which describe domain, style, or topics. The model then learns to generate text conditioned on a given code. It requires manually predefined control codes and a corresponding training corpus for each code.
Other diverse generation works learn latent representations or codes from input text, and then generate text conditioned on those codes. Shu et al. (2019) applied a sentence embedding to generate syntactically diverse translations. They find that syntax-based encoding with TreeLSTM (Socher et al., 2011) yields better diversity than a contextual encoding using BERT (Devlin et al., 2019) or FastText (Bojanowski et al., 2017). Bao et al. (2020) used K categorical latent variables to control the generation context of dialogue responses and picked the highest-probability response from the responses generated using the latent variables. COD3S (Weir et al., 2020) is designed to generate diverse causal relations. It uses locality-sensitive hashing (LSH) (Indyk and Motwani, 1998) on representations from Sentence-BERT (Reimers and Gurevych, 2019). Conditioning on these 16-bit LSH signatures, it generates cause/effect sentences using a Transformer architecture (Vaswani et al., 2017) but with a limited vocabulary size of 10K.
These previous approaches have some drawbacks: they either require explicit control codes and training examples, or their codes have low interpretability. Our approach addresses these two limitations: control codes are learned from non-diverse input text, and the codes are human-readable events. Moreover, these approaches are not directly comparable to our method without substantial modification, which would make for unfair comparisons. We therefore present our own baselines for evaluation; these baselines also serve as a proxy for ablation studies.
Diverse Precondition Generation
This section describes our diverse precondition generation task and our methodology for solving it. Our proposed approach does not require additional diverse training examples.
Generation Task
This paper follows the precondition definitions from Kwon et al. (2020): Precondition Definition -"Given a target event mention t and a candidate event mention p, we assert p is a precondition event for t if p is necessary for t to happen i.e., t likely would not have occurred without p, in the current text context." Precondition Generation -"Given a target event t, generate an event p that is a precondition for t." The precondition generation task is defined over sentences that contain both a target and a precondition event. The precondition part is masked and a model is asked to reconstruct the sentence including its precondition. For masking, the syntactic subtree of a precondition is replaced with [BLANK]. In order to indicate the events of interest -target and precondition -we use special tokens <event> .. </event> and <pre> .. </pre>.
For our new task, instead of generating the entire sentence, we only generate a precondition clause that would fit into the input's [BLANK]. Since a precondition could be stated in either a preceding or a succeeding position relative to its target event, we modeled this as a text infilling task. This approach is inspired by Donahue et al. (2020), and the modification allows the model to focus solely on generating preconditions because it does not need to copy over its input text. Thus, the model can learn faster and more efficiently.
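To make the setup concrete, the sketch below shows one way such an infilling training pair could be assembled. The example sentence, the helper name, and the exact placement of the markers around [BLANK] are illustrative assumptions, not the released PeKo preprocessing code.

```python
# Minimal sketch (not the released PeKo code): build an infilling pair where
# the precondition subtree is replaced by [BLANK] and the model must emit
# only the precondition clause.
def build_infilling_pair(tokens, target_span, precondition_span):
    t0, t1 = target_span            # indices of the target event trigger
    p0, p1 = precondition_span      # indices of the precondition subtree
    precondition = tokens[p0:p1]

    marked = list(tokens)
    marked[t0:t1] = ["<event>"] + tokens[t0:t1] + ["</event>"]
    shift = 2 if t1 <= p0 else 0    # markers inserted before the precondition shift it
    marked[p0 + shift:p1 + shift] = ["[BLANK]"]

    model_input = " ".join(marked)
    model_output = " ".join(["<pre>"] + precondition + ["</pre>"])
    return model_input, model_output

# Hypothetical example: precondition "he unlocked the door", target "opened".
sent = "After he unlocked the door , he opened it".split()
inp, out = build_infilling_pair(sent, target_span=(7, 8), precondition_span=(1, 5))
print(inp)   # After [BLANK] , he <event> opened </event> it
print(out)   # <pre> he unlocked the door </pre>
```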
Diverse Precondition Generator
Generating preconditions is a difficult task even for a single output setting (Kwon et al., 2020). With the training data derived from existing news articles, generative models only get to see one possible precondition for each target event. Not surprisingly the top candidates in beam search tend to be focused towards a specific type of precondition event with minor variations. This suggests that we need to provide explicit guidance to the model to explore diverse candidates.
How can we get such diverse guidance? A main strength of large generative language models is that they learn to generate text that fits with the input context. If we can get the input context to be less specific then we can aim to get more general outputs. We can exploit this behavior by training a separate event sampler that is fed a reduced version of the target event description. For example, we can denote the target event by just the event trigger and its arguments. The event sampler learns to predict possible precondition event triggers based on this reduced context. This task forces the sampler to learn a more general mapping between target and precondition events that can produce a diverse set of starting points for generating the precondition events. We can then train another generative model to condition on the precondition trigger in addition to the input sentence. This gives us a model whose outputs we can control by providing different possible precondition triggers. Not all precondition triggers may yield high quality preconditions. To further assist the model, we also devise a precondition re-ranker.
Our overall system, shown in Figure 1, consists of three components -an event sampler, a candidate generator, and a post processor (Precondition re-ranker and Similarity filter). The first two stages are used for generation -they use two separate generation models, and the last is employed to improve the quality of generated preconditions. We refer to this system as DiP, short for Diverse Preconditions.
Event Sampler
The event sampler provides possible precondition event triggers given a target event. This can be formulated as a sequence to sequence problem where the input sequence is a target event and the output sequence is a precondition event trigger. Since our goal here is to get diverse precondition events, we can experiment with input contexts of different levels of detail. To get more general precondition events, we use just the target event triggers as the input. To get more specific preconditions, we can use larger contexts surrounding the target event trigger as the input. During inference, we sample top n event triggers based on their probability.
Formally, the sampler takes as input a reduced version of the full description x of the target event. The sampler can be seen as a generative model that outputs event triggers e for the preconditions of the target event. The generative model is trained to maximize the probability of the correct precondition trigger, and during inference it is used to sample a candidate set of the top n precondition event triggers.
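One way to write the sampler's training and inference steps, in our own notation rather than the paper's, is:

```latex
% Notation is ours: \tilde{x} is the reduced target description, e^{*} the observed
% precondition trigger, and E_n the candidate set passed to the next stage.
\mathcal{L}_{\mathrm{sampler}}(\phi) = -\log p_\phi\!\left(e^{*} \mid \tilde{x}\right),
\qquad
E_n = \{ e^{(1)}, \dots, e^{(n)} \}, \;\text{the } n \text{ most probable triggers under } p_\phi(\,\cdot \mid \tilde{x}).
```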
Candidate Generator
The candidate generator, as the name suggests, is a language model that we fine-tune for generating precondition candidates. We want this model to generate preconditions corresponding to the triggers from the event sampler. To this end, in addition to the full target event description x, we also provide a precondition trigger marked by a special token -<E> precondition_event -at the end of the input. This can be seen as a form of a control code similar to those used in Keskar et al. (2019); Weir et al. (2020). The crucial difference, however, is that the codes in our case are dynamically generated conditioned on the input and not restricted to a predefined set.
Formally, the candidate generator is a language model that generates a description of the precondition event c_i, conditioning on the full description of the target event x and a given precondition trigger e_i from the event sampler.
The model is trained to maximize the probability of the observed precondition text for the target event when provided with the correct precondition trigger. Note that during training, the precondition trigger provided as input always appears in the correct precondition description output (ĉ_i). This encourages the model to learn to incorporate the trigger provided at the end of the input as part of its output. During inference, the model generates a set of preconditions, one for each of the top n triggers obtained from the event sampler.
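A compact sketch of how the two generation stages could be chained with off-the-shelf Hugging Face GPT-2 models is shown below. The checkpoint names, decoding settings, and the <E> prompt format are stand-ins for the fine-tuned models described above, not the authors' released implementation.

```python
# Illustrative two-stage generation: event sampler -> candidate generator.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
sampler = GPT2LMHeadModel.from_pretrained("gpt2")     # stand-in for the fine-tuned event sampler
generator = GPT2LMHeadModel.from_pretrained("gpt2")   # stand-in for the fine-tuned candidate generator

def generate_preconditions(target_text, n_triggers=10):
    # Stage 1: sample candidate precondition triggers from the (reduced) context.
    ids = tok.encode(target_text, return_tensors="pt")
    trigger_out = sampler.generate(
        ids, do_sample=True, top_p=0.9,
        max_length=ids.shape[1] + 3, num_return_sequences=n_triggers,
        pad_token_id=tok.eos_token_id,
    )
    triggers = {tok.decode(o[ids.shape[1]:]).strip().split()[0]
                for o in trigger_out if tok.decode(o[ids.shape[1]:]).strip()}

    # Stage 2: condition the candidate generator on each trigger (control code).
    candidates = []
    for trig in triggers:
        prompt = f"{target_text} <E> {trig}"
        pids = tok.encode(prompt, return_tensors="pt")
        out = generator.generate(pids, do_sample=True, top_p=0.9,
                                 max_length=pids.shape[1] + 30,
                                 pad_token_id=tok.eos_token_id)
        candidates.append(tok.decode(out[0][pids.shape[1]:]).strip())
    return candidates
```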
Post Processor
Precondition Re-ranker We use a precondition re-ranker to reorder the generated candidates based on how likely they are to be preconditions of the target event. Note that the generative model is implicitly trained for a similar objective. However, the model is also forced to include the input precondition trigger, which could make it harder to focus on ensuring the result is indeed a precondition. Therefore, we introduce a separate precondition classifier that scores the generated candidates. Note that the original PeKo dataset is already set up for training such a classifier (Kwon et al., 2020). Each instance in this dataset consists of an input text that includes a pair of marked event triggers (target, candidate) and a label that indicates whether the candidate is a precondition of the event denoted by the target trigger. The output from the precondition generator is essentially equivalent to an instance from this dataset. We build a classifier that scores a pair of events in text, and we use this score as an indicator of the precondition quality of the generated candidates and re-rank them based on this score.

Iterative Redundancy Filtering The resulting candidates are a mix of candidate precondition events from different triggers. To further avoid redundancy we also include an explicit filtering step, where we post-process the generated text based on pairwise similarity. Specifically, we start with the highest ranked instance in the output set, and iteratively walk down the ranked list, adding an instance to the output if the highest similarity score it has with any member of the current output set is lower than a certain threshold.
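A minimal sketch of that greedy filtering loop is given below. The embedding function and the threshold are left as parameters; both are assumptions standing in for the classifier's [CLS] representation and the per-instance threshold described later.

```python
# Greedy redundancy filter over the re-ranked candidate list.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def redundancy_filter(ranked_candidates, embed, threshold):
    """ranked_candidates: strings sorted by the precondition re-ranker (best first).
    embed: callable mapping a string to a vector.
    threshold: keep a candidate only if its max similarity to the kept set is below it."""
    kept, kept_vecs = [], []
    for cand in ranked_candidates:
        v = embed(cand)
        if all(cosine(v, u) < threshold for u in kept_vecs):
            kept.append(cand)
            kept_vecs.append(v)
    return kept
```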
Evaluation
Our goal is to investigate the impact of our DiP approach for generating diverse and high-quality preconditions. We closely follow Kwon et al. (2020) for the experimental setup and the GPT-2 based generation system for our evaluation.
Datasets
For the fine-tuning task, we use the precondition generation instances in the PeKo dataset. In addition, we also create a large additional pre-training dataset that includes temporal generation instances. With this additional dataset we can perform a form of domain adaptive pre-training (DAPT) introduced by Gururangan et al. (2020). The main idea here is to create generation instances where the model gets to see a target event but now is required to produce an event that temporally precedes the target event. Since preconditions are supposed to temporally precede their targets, this temporal generation task can be seen as a more permissive yet related generation task, which is subsequently restricted to only preconditions in the fine-tuning stage. We use CAEVO (Chambers et al., 2014) to obtain temporally related event pairs from the NYT corpus (Sandhaus, 2008). This yields 1.1 million instances and each instance contains one temporal relation (BEFORE/AFTER). Note that all systems are trained using the same pre-training and fine-tuning strategy using both datasets.
Baselines
Beam Search As a baseline, we use a text-infilling GPT-2 system (inspired by Donahue et al. (2020)) with a standard beam search decoding strategy. This beam search decoder can provide multiple responses up to its beam size. We expect this simple baseline to contain high levels of redundancy in its outputs.
Repetition Penalized Sampling (RPS) For a stronger baseline, we use a decoding strategy that can generate diverse preconditions by penalizing previously generated precondition event triggers. This is done by an iterative decoding process applied to the same GPT-2 generation model. Given a target event, the model generates k preconditions in an iterative manner. When the model generates a precondition trigger -after the <pre> token -a repetition penalty is applied to deter the model from selecting previously generated precondition events. We adopt the penalized sampling from Keskar et al. (2019). Instead of using a list of all generated tokens, we use a list of precondition event triggers that were generated in the previous iterations. Given the list of generated precondition events t and the logit x_i for the next trigger token, the probability p_i of that token is defined as

p_i = exp(x_i / (T · I(i ∈ t))) / Σ_j exp(x_j / (T · I(j ∈ t))), with I(c) = λ if c ∈ t and 1 otherwise,

where T is the sampling temperature. We set λ = 1.2 as in Keskar et al. (2019). For decoding, we use Nucleus Sampling (Holtzman et al., 2020), which has been claimed to generate higher quality text. Finally, we test the RPS model with the post-processor from DiP, to confirm that the major gain of DiP is from the Event Sampler.
DiP Model
DiP has three modules -Event sampler, Candidate generator, and Precondition re-ranker. We train each module separately.
Event sampler
We use the GPT-2 model for the event sampler. The model is trained on the same data instances described in Section 4.1, but instead of using the entire target-precondition pairs, we use target-precondition event trigger pairs. We train three event samplers with different levels of context -trigger only, 3 neighboring tokens, and 5 neighboring tokens -to understand how different contexts affect candidate precondition sampling. As Table 2 shows, adding more context helps the model generate more specific events tied to the described situation, while the model produces more general events if only a trigger is given.

Candidate generator The GPT-2 model is also used for the candidate generator. For training, as described in Section 3.2, we add <E> precondition_event at the end of the input so that the model can learn how to utilize the provided event trigger as a control code. Table 3 shows the training examples for the candidate generator.

Table 4: Model diversity evaluation (columns: Model, Self-BLEU, Self-BLEURT).
Post-processor We train a precondition re-ranker using BERT (Devlin et al., 2019). The F1 score of the classifier is 71.91, with a precision of 64.65.
To remove possibly redundant preconditions using iterative redundancy filtering, we need to compute the cosine similarity between the generated preconditions. We take the precondition classifier's [CLS] token representation as the embedding for preconditions. Since the similarity score distributions are different from instance to instance, instead of using a fixed value as the threshold, we set the threshold to µ + σ for each instance (the mean plus the standard deviation of its pairwise similarity scores). This filters out ∼16% of the most similar generated preconditions. For comparison with the baselines, we take the top 10 preconditions from the remaining outputs.
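As a small illustration, the sketch below computes the per-instance µ + σ threshold over the pairwise cosine similarities of the generated candidates; the embed callable is a placeholder for the classifier's [CLS] vector, and the returned value can be passed as the threshold of the filtering loop sketched earlier.

```python
# Per-instance threshold: mean + std of pairwise cosine similarities.
import itertools
import numpy as np

def instance_threshold(candidates, embed):
    vecs = [embed(c) for c in candidates]
    sims = [float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
            for a, b in itertools.combinations(vecs, 2)]
    return float(np.mean(sims) + np.std(sims))
```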
Automatic Evaluation Metrics
We use Self-BLEU (Zhu et al., 2018) and a Self-BLEURT score to measure the diversity of generated preconditions. Self-BLEU measures how similar a set of sentences is to each other using BLEU score -the average of BLEU scores over all pairs of sentences in the set. In addition to direct lexical overlap, we also measure semantic overlap using BLEURT (Sellam et al., 2020), which is a BERT-based learned evaluation metric that is trained on human ratings of sentence pairs. We refer to this metric as Self-BLEURT. For both metrics, a lower score implies more diverse preconditions.
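For concreteness, a minimal Self-BLEU computation over a set of generated preconditions might look as follows; this is an illustrative re-implementation using NLTK's sentence-level BLEU with smoothing, not necessarily the exact script of Zhu et al. (2018).

```python
# Self-BLEU: score each output against all other outputs and average.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(outputs):
    """outputs: list of at least two generated strings. Lower means more diverse."""
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(outputs):
        refs = [o.split() for j, o in enumerate(outputs) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)
```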
Results
We compare the models on both diversity and quality. For diversity, we use an automatic evaluation, and for quality we use human annotators.
Automatic Diversity Evaluation: Table 4 shows the diversity metrics for all methods. We evaluated 5,000 preconditions generated for 500 target events. Since the post-processor shows little effect when comparing RPS+Post-proc to RPS, we compare just RPS to DiP in the rest of the evaluations (see the Appendix for more details on RPS vs. RPS+Post-proc).
In both metrics, DiP and RPS generate more diverse output than the beam search decoder. DiP is comparable to RPS on shorter preconditions, and RPS produces more diverse outputs as the generated text gets longer, as shown in Figure 2.
Manual Quality Evaluation:
The automatic evaluation only measures diversity. To see if the models generate legitimate preconditions, we conducted a manual evaluation of quality. We evaluated 960 generated outputs covering 96 distinct target events for both DiP and the RPS baseline. For each instance the annotators were presented with the top ten generated outputs from the two systems. For each output the annotators provided a rating on a scale from 0 to 2, where 0 means not a precondition, 1 is a maybe, and 2 is definitely a precondition. We split the 96 instances across 8 different annotators (computer science graduate students). Table 5 shows the results in terms of two metrics: one is the average score across all 96 instances, and the other is the number of "Wins", where a model gets +1 point if the sum of its 10 precondition scores is higher than the other's. In both metrics, DiP outperforms RPS. Moreover, as shown in Figure 3, DiP produces better preconditions across most output lengths and is best on longer outputs.
Analysis
Examples: Table 8 shows the top 5 generations from our three main systems. The beam search's failure on the diversity metric is easy to see from its repetitive output: most verbs are the same. Both RPS and DiP are notably better in terms of diversity, but RPS introduces a lot of irrelevant information that may have artificially increased its diversity score. Long irrelevant phrases are easy to spot, and verb synonyms are common. In contrast, DiP generates more succinct and general preconditions, as well as fewer direct synonyms.
Context Specificity: Table 7 shows the diversity scores when different levels of context are provided to the event sampler. Diversity gets slightly worse with more context. This aligns with our observation from Table 2 that the event sampler with more context generates more context-specific precondition events, which now appear to be closer to each other semantically. This makes intuitive sense if one views added context as narrowing the model's view of broader options.
Errors: We categorize 4 types of frequently observed errors from DiP. 1) Rare target events: some target event triggers are polysemous, and some senses are rarely used in newswire. Thus, the event sampler is biased toward generating precondition events based on the more frequent usages of triggers. 2) Conflict with sentence structure: the candidate generator has no ability to modify the trigger for its syntactic context because it is trained only to copy over the event trigger. 3) Re-ranking issues: the re-ranker can induce errors due to its own performance (64.65 precision). 4) Limited available context: when the provided context of a target event is too limited, the model often fails to generate preconditions. Table 6 shows examples for each type of error.

Table 6: Examples of each type of error. There are 4 types of frequently observed errors; the first 3 types are caused by the stages of DiP, and the last one is due to the data instances.

Rare target events. Target: "She died of cancer in 1998, after her husband was killed by an electrician who had been working with him." The word lost is usually used in the context of war, elections, or obituaries in newswire. That may lead the event sampler to fail to generate proper precondition event triggers.

Conflict with sentence structure. Target: "The proposal from Mr. Levy that the Justice Department struck down, she added, would have violated nondiscrimination provisions of the consent decree and the Civil Rights Act of 1964 by [BLANK]." Event created -> "created a new state law requiring that a judge be present on the court"; event called -> "called for a "state level ban on political activity" in New York City". The candidate generator is not able to modify the provided triggers according to the context.

Re-ranking issue. Target: "China's markets nearly disintegrated in 2005, and one 2003 poll found [BLANK]." Event became (#4) -> "that the market became more popular than in 2000" (#2); event named (>#10) -> "that the country was named the world's largest economy" (#4). The generated sentences are not preconditions but are ranked high after re-ranking (#4 → #2 and >#10 → #4).

Limited available context. Target: "[BLANK] to hit a short forehand and guided it wide." Event batted -> "In the third round, he batted the ball with his left hand"; event pitched -> "In the first inning, the Mets pitched three consecutive hits". The target context is related to tennis, but preconditions are generated in the context of baseball because the provided context is too limited.
Table 7: Diversity (Self-BLEU, Self-BLEURT) of precondition candidates by context level. Precondition candidates are generated from the event samplers given the input with a trigger only, a trigger with 3 neighboring tokens, or a trigger with 5 neighboring tokens.

Conclusion

Real-world events often have multiple preconditions, but today's datasets do not, including the latest PeKo, presenting a challenge for text-driven models. Vanilla generative models have high levels of redundancy in their outputs and are thus not well suited for diverse generation. This work introduced an event sampler that overcomes the issue of target context specificity to provide diverse guidance to the generator. Coupled with a precondition re-ranker and a similarity filter, this multi-stage generation setup yields more diverse and higher quality preconditions. Further, a new training corpus was not required. More generally, this approach can be seen as an instance of controllable diverse output generation for conditional language models.
A.2 Comparison between RPS and RPS+Post-processing

Table 9 shows the comparison between RPS and RPS+Post-processing. The effect of the post-processor on the RPS system is considered neutral: there are some cases where the generation quality is improved, but also other cases where it is compromised.
A.3 Manual Evaluation
Evaluation Instruction Figure 4 shows the evaluation instruction that we provided to annotators.
Evaluation rating distribution Table 10 shows the distribution of ratings voted by the annotators. On average, DiP got higher ratings than RPS, and RPS got the most votes in "Not a Precondition."

Table 9 (excerpt): example generations from RPS and RPS+Post-processor.

RPS | RPS+Post-processor: Wilhelmina's stock dropped 8 percent in early 2005 after a lull, but more actively revived the strong dollar, lending to an expected influx of funds Mr. Berg last year signed a two-year contract with the firm The City Council's 10-member City Planning Committee voted 11 to 6 yesterday Another landlord is assembling a plan property values are rising, rising as the vacancy rate is expected Ms. Cianci, 32, is building an intercom tower In less than a year, such improvements have increased through acquisitions and capitalization at New York City police stations, legal firms and cruise ships, suggesting that housing can be bought Her workers hit a brick floor a few years ago Matthew Hallico, president of the General Electric Company in Manhattan, and Robert Chrisin, a sales vice partner at Ira G. Albrecht His comments about the incentive package raised many concerns about how it works, as well as what shareholders might do Last week, Lloyd Pound, the influential Wall Street analyst, gave 75 percent shares of his money

Target Event: By about 10 p.m., the proposals appeared dead for now [BLANK].
RPS
RPS+Post-processor after the judge, Col Richard Kultura of Thailand, signaled the end of his sentence after negotiators from both parties reconvened in Davis Park to talk things out after city Hall learned it would begin public comment on ways it could add 27,000 new jobs to the island after the researchers analyzed DEMIC data on children and early adults whose ages began at 8 or 15 as they were rejected by legislative leaders after Google released its pie-in-pie template during an extensive public presentation when Mayor Mark Meehan heeded all of his smaller complaints after the Council passed it on Monday after the State Senate voted yes on key issues after the developer, Trivata Films of New Orleans, agreed to pay up to $14 million over seven years Target Event: [BLANK] to scout potential recruits.
RPS
RPS+Post-processor The pending replacement of Carl Crawford has enticed some intelligence officials and top Qaeda leaders About the same time, Mr. Booker elicited state financing for another program that provided some of the funds through the Police Department's National Guard to help workers find mental illness or Employees are giving them the opportunity In 2005 , Mr. SCAD sent students from Iowa and Ohio to visit Johns Hopkins Most Somalis want a law that would enable them In championing the elite classes last week, public school teachers mounted an extensive publicity campaign to persuade parents Shortly after Katrina , Post servicemen were chasing selectors after the storm 's onset Joel Packer, a Detroit Pistons and assistant coach with Brigham captured a larger campus and invited the scouts Ever since college opened in 1983 , he shopped for school assignments As the trend forward moves into next season , larger colleges are beginning with faculty members from 75 sites on an extensive bioharker scholarship site
|
2021-06-15T01:16:15.167Z
|
2021-06-14T00:00:00.000
|
{
"year": 2021,
"sha1": "39b8c3a864b00e42921ec10b043f54aae5da1cd8",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.starsem-1.15.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "26677f086148c85556b1b4dbb259d998ae1967f5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
235810780
|
pes2o/s2orc
|
v3-fos-license
|
Predicting enzymatic reactions with a molecular transformer
The use of enzymes for organic synthesis allows for simplified, more economical and selective synthetic routes not accessible to conventional reagents. However, predicting whether a particular molecule might undergo a specific enzyme transformation is very difficult. Here we used multi-task transfer learning to train the molecular transformer, a sequence-to-sequence machine learning model, with one million reactions from the US Patent Office (USPTO) database combined with 32 181 enzymatic transformations annotated with a text description of the enzyme. The resulting enzymatic transformer model predicts the structure and stereochemistry of enzyme-catalyzed reaction products with remarkable accuracy. One of the key novelties is that we combined the reaction SMILES language of only 405 atomic tokens with thousands of human language tokens describing the enzymes, such that our enzymatic transformer not only learned to interpret SMILES, but also the natural language as used by human experts to describe enzymes and their mutations.
Introduction
The use of enzymes for organic synthesis, commonly referred to as the field of biocatalysis, greatly contributes to organic synthesis methodology by providing the possibility to carry out highly chemo-, regio-, stereo- and enantio-selective transformations under mild and environmentally friendly conditions, often allowing the redesign and simplification of synthetic routes by enabling reactions that are not possible with conventional chemical reagents. 1,2 The advent of directed enzyme evolution as a tool to increase enzyme performance has also greatly contributed to improving the range and efficiency of enzyme-catalyzed reactions for organic synthesis. 3 However, the implementation of biocatalytic steps in synthetic processes remains challenging because it is very difficult to predict whether a particular substrate might actually be converted by an enzyme to the desired product.
Here we asked the question whether computer-assisted synthesis planning (CASP) 4-12 might be exploited to predict the outcome of enzymatic reactions for organic synthesis. Recent efforts in predicting enzymatic reactions focused on metabolic reactions from the KEGG enzymatic reaction database and predictions of drug metabolism, 13-15 as well as retrosynthetic planning with enzymatic reactions using a template-based approach. 16 Here we considered the molecular transformer, 17-19 which is a sequence-to-sequence prediction model operating on text representations of reactions as reaction SMILES (Simplified Molecular Input Line Entry System) 20 including stereochemistry. We set out to use multi-task transfer learning combining the USPTO dataset 21 as a source of general chemistry knowledge with a few thousand enzymatic reactions collected from the scientific literature as a source of specialized knowledge (Fig. 1).
We used transfer learning previously to enable the molecular transformer to predict complex regio- and stereo-selective reactions at the example of carbohydrates. 22 In this former study transfer learning was performed on a dataset of reactions described as SMILES, which are based on a vocabulary of only a few hundred atomic tokens identical to the vocabulary describing the general USPTO dataset used for primary training. One of the novelties of the present work on enzyme reactions is that we combine SMILES language for the substrates with human language for the enzyme descriptions. Those more diverse inputs result in an increase from 405 atomic tokens for SMILES only to a few thousand atomic and language tokens when describing enzyme reactions, implying that our transformer model had to learn to interpret not only the SMILES language but also natural language, as used by human experts to describe enzymes and their mutations.
Reaction datasets
As a general chemistry dataset, we used the previously reported "USPTO stereo augmented" dataset derived from the patent mining work of Lowe, which contains, for each of the one million reactions in the USPTO dataset, the original reaction SMILES and a randomized SMILES version, both conserving stereochemical information. 23,24 To compose a specialized dataset of enzymatic reactions, we extracted 70 096 reactions labeled as "enzymatic reactions" from the Reaxys database. 25 We collected the data columns corresponding to reactant SMILES, product SMILES, and enzyme description ("reaction", "reagent" and "catalyst"). Canonicalizing all SMILES and removing reactions lacking either reactants or products as well as duplicate entries (identical reactants, products and enzyme description) left 32 181 unique enzymatic reactions, each annotated with an enzyme description, referred to here as the ENZR dataset.
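A sketch of this cleaning step with RDKit is shown below; the field names of the Reaxys export are hypothetical, but the canonicalization and de-duplication logic follows the description above.

```python
# Illustrative ENZR cleaning: canonicalize SMILES, drop incomplete entries,
# and de-duplicate on (reactants, products, enzyme description).
from rdkit import Chem

def canonicalize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def clean_enzr(rows):
    seen, cleaned = set(), []
    for row in rows:  # row: dict with hypothetical keys 'reactants', 'products', 'enzyme'
        r = canonicalize(row.get("reactants", ""))
        p = canonicalize(row.get("products", ""))
        if not r or not p:
            continue
        key = (r, p, row.get("enzyme", "").strip())
        if key in seen:
            continue
        seen.add(key)
        cleaned.append({"reactants": r, "products": p, "enzyme": key[2]})
    return cleaned
```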
Although Reaxys does not cover the full spectrum of scientific literature about enzymes, the ENZR dataset contains a broad range of enzymes covering diverse reaction types, including not only highly specific enzymes such as glucose oxidases and dehydrogenases used in glucose monitoring devices, 26 but also enzymes with a documented broad substrate scope for organic synthesis including mechanistically promiscuous enzymes, 27 such as lipases used to promote aldol and Michael addition reactions, 28 or ene-reductases capable of reducing oximes, 29 thus providing a broad basis for training our model about the scope and specificity of different enzymes. We did not consider the enzyme databases KEGG 30 or BRENDA 31 because their data format is not homogeneous and many of the listed reactions are template-based and not assigned to documented examples.
To better understand our ENZR dataset, we analyzed enzyme reactions in terms of the frequency of occurrence of words with the suffix "-ase", which are the enzyme names, in the enzyme description. Across all enzyme reactions, 81.9% (26 348) contained a single "-ase" word, and 98.4% (31 663) contained one, two, or three "-ase" words (Fig. 2a). The largest group of single "-ase" word reactions involved a lipase (17%), a type of enzyme which is almost exclusively used alone. By contrast, dehydrogenases and reductases were most frequent in reactions involving two or more "-ase" words, reflecting that such enzymes are often used in processes involving enzyme-coupled cofactor regeneration systems. The ten most frequent "-ase" words corresponded to well-known enzyme families and together covered 50.3% of all enzyme reactions (the 15 most frequent "-ase" words covered 57.0% of all reactions, Fig. 2b). A finer analysis of enzyme families considering the complete enzyme description, which typically includes the enzyme source and the substrate type, showed that each enzyme family comprised a number of different enzymes (Fig. S1†).
To visualize our ENZR dataset, we used our recently reported TMAP (tree-map) algorithm, a powerful tool to represent very large high-dimensional datasets containing up to millions of datapoints as connected trees in two dimensions. 32 In a first TMAP, we connected enzymatic reactions, each represented as a point, according to their similarity measured by the reaction fingerprint RXNFP, a recently reported reaction fingerprint derived from a neural network trained to classify patent chemical reactions. 33 This analysis considered the transformation of substrates into product molecules but not the enzyme description in each ENZR entry. Color-coding the TMAP by the 10 most frequent "-ase" words mentioned above, corresponding to the most abundant enzyme families in the ENZR dataset, showed that these enzyme families formed relatively well separated clusters of reactions, illustrating that, similarly to organic reagents, enzymes carry out well-defined functional group transformations (Fig. 2c).
In a second color-coded version of the TMAP we labeled all enantioselective and kinetic resolution reactions, identified as reaction SMILES with no "@" characters in the reactants, indicating either the absence of chiral centers or an undefined stereochemistry at chiral centers, but the presence of at least one "@" character in the products SMILES, indicating a specific absolute configuration for chiral centers. 34 This color-code showed that enantioselective and kinetic resolution reactions also formed defined clusters corresponding to biotransformations with mostly dehydrogenases, lipases and reductases (Fig. 2c, inset lower right).
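The labeling rule is simple to express on reaction SMILES; a minimal sketch (our own formulation of the rule described above) is:

```python
# Flag enantioselective / kinetic resolution reactions: no stereocentre
# markers ('@') on the reactant side, at least one on the product side.
def is_enantioselective(rxn_smiles):
    """rxn_smiles: 'reactants>reagents>products' reaction SMILES."""
    parts = rxn_smiles.split(">")
    reactants, products = parts[0], parts[-1]
    return "@" not in reactants and "@" in products

print(is_enantioselective("CC(=O)c1ccccc1>>C[C@H](O)c1ccccc1"))  # True
```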
The different enzymes also formed identifiable clusters in a different TMAP grouping reactions by substructure similarity of the reacting substrates using the extended connectivity fingerprint MHFP6 (Fig. S2†). 35 This illustrated that enzymatic reactions in the ENZR dataset followed the well-known trend that enzymes only react with certain types of substrates, in contrast to chemical reagents which are usually only specific for functional groups. The range of substrates utilized by the enzymes covered a broad range of sizes from very small molecules such as pyruvate up to relatively large peptides (Fig. S2,† inset).
Taken together, the analysis above indicated that the ENZR dataset contained a diverse set of enzymatic reactions, with the expected biases towards the most frequently used enzymes in the field of biocatalysis such as lipases and dehydrogenases.
Training and evaluation of transformer models for enzymatic reactions
Training a transformer model first requires tokenizing the input and output character strings to allow the model to learn which series of input tokens produces which series of output tokens. For the reaction SMILES in both USPTO and ENZR datasets, we used the approach reported previously for the general molecular transformer, which considers each character of the reaction SMILES as a separate token except Cl, Br, and character strings in square brackets, which denote special elements. 18 The set of tokens necessary for describing reaction SMILES in the USPTO amounted to 405 so-called atomic tokens, and did not increase for describing the reaction SMILES portion of our ENZR dataset, which we first canonicalized using RDKit. 36 To incorporate the enzyme information into our model, we tokenized the sentences describing the enzymes in the ENZR dataset using the Hugging Face Tokenizers library, 37 which after preprocessing resulted in a vocabulary of 3004 atomic and language tokens to describe the ENZR dataset.
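To illustrate the two tokenization schemes, the sketch below pairs a regex-based SMILES tokenizer in the spirit of the molecular transformer with a byte-pair-encoding tokenizer trained on the enzyme sentences using the Hugging Face Tokenizers library; the regex and the trainer settings are illustrative choices, not the exact published configuration.

```python
# Illustrative sketch: SMILES tokenization plus a BPE tokenizer for the
# enzyme description sentences. Regex and trainer settings are assumptions.
import re
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

SMI_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    """Split a (reaction) SMILES into atomic tokens separated by spaces."""
    return " ".join(SMI_REGEX.findall(smiles))

def train_enzyme_tokenizer(enzyme_sentences, vocab_size=3000):
    """Train a BPE tokenizer on the enzyme description sentences."""
    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
    trainer = trainers.BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]"])
    tokenizer.train_from_iterator(enzyme_sentences, trainer)
    return tokenizer

print(tokenize_smiles("C[C@H](O)c1ccccc1"))  # C [C@H] ( O ) c 1 c c c c c 1
```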
In view of evaluating transformer models, we split the USPTO stereo augmented dataset randomly into a training set (900 000 reactions, 90%; 1.8 million reactions after adding for each canonical training reaction a duplicate using non-canonical precursor SMILES), a validation and a test set (each 50 000 reactions, 5%). 24 For the ENZR dataset, we first grouped reactions having the same product in different groups, and then split these groups into a training set (25 700 reactions, 80%), a validation and a test set (each 3200 reactions, 10%). Distributing these reaction groups rather than individual reactions into the different sets ensured that products which must be predicted in the validation or test sets have not been seen by the transformer during training or validation, respectively.
We then trained various models using OpenNMT 38 and PyTorch, 39 and evaluated them by presenting them with substrate SMILES, optionally together with the partial or full description of the enzyme, for each of the 3200 reactions in the test set. In each case, the model was challenged to write out the SMILES of the reaction product, including the correct stereochemistry, none of which had been seen by the model in the training or validation set. We analyzed whether the correct product was written out within the first one or first two solutions proposed by the model, as well as the percentage of invalid product SMILES, detected using RDKit, appearing among the first one or two solutions (top 1 and top 2 accuracy, blue and cyan bars; top 1 and top 2 invalid SMILES, red and orange bars, Fig. 3A).
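A minimal sketch of such an evaluation with RDKit (our illustration of the procedure, not the authors' script) could look like this:

```python
# A prediction counts as correct if its canonical SMILES matches the recorded
# product (stereochemistry included); it counts as invalid if RDKit cannot parse it.
from rdkit import Chem

def evaluate(predictions, reference):
    """predictions: ranked list of predicted product SMILES (top-1 first).
    reference: recorded product SMILES, assumed to be valid."""
    ref = Chem.MolToSmiles(Chem.MolFromSmiles(reference))
    stats = {"top1": False, "top2": False, "invalid_top2": 0}
    for rank, pred in enumerate(predictions[:2], start=1):
        mol = Chem.MolFromSmiles(pred)
        if mol is None:
            stats["invalid_top2"] += 1
            continue
        if Chem.MolToSmiles(mol) == ref:
            stats["top2"] = True
            if rank == 1:
                stats["top1"] = True
    return stats
```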
We first evaluated if transformer models could be trained to predict reaction products from only the substrate by omitting any enzyme information during training. The USPTO only model showed approximately 10% accuracy but a very low percentage of invalid SMILES, indicating that this model understood chemistry but lacked expertise in biotransformations (Fig. 3A, entry (a)). The ENZR only model also performed poorly (20% accuracy) and produced 10% invalid SMILES, reflecting that general chemistry training was insufficient with this relatively small dataset (Fig. 3A, entry (b)). Nevertheless, training with both datasets using sequential transfer learning (STL) or multi-task transfer learning (MTL) reached 50% accuracy, indicating that substrate structure was partially predictive of the outcome of enzymatic reactions even in the absence of any enzyme information (Fig. 3A, entries (c) and (d)). This partial prediction based on only the substrate reflects the fact that certain types of substrate molecules are only documented to react with specific enzymes in the ENZR dataset. For example, many alcohols are only documented to react with alcohol dehydrogenases to produce the corresponding ketone, such that a transformer model trained with the reaction SMILES learns to predict the ketone as the most likely product even without enzyme information, a prediction which is most of the time the correct one.
Adding enzyme information in the form of "-ase" words alone did not significantly increase prediction performance when using only ENZR; however, combining the data with the USPTO by transfer learning increased top 1 accuracy to 51.7% with STL and 54.0% with MTL (Fig. 3A, entries (e)-(g)). Top 1 prediction accuracy increased further up to 59.5% with STL and 62.2% with MTL when using the complete enzyme information as a full sentence (Fig. 3A, entry (j)). Note that the model trained with ENZR alone only reached 34.3% top 1 accuracy with full enzyme names and produced 10% invalid SMILES, showing that the general chemistry training learned from USPTO was essential even with full enzyme information (Fig. 3A, entry (h)). Furthermore, testing the MTL model with a test set in which the enzyme information was scrambled between reactions resulted in poor results (15% accuracy), indicating that the true enzyme information was required rather than the presence of random text information (Fig. 3A, entry (k)). Examples of the added value of enzyme information for predicting the outcome of an enzyme reaction are provided with the cases of linoleic acid conversion with various oxygenases and dehydrogenases, and the conversion of L-tyrosine by a lyase and a tyrosinase. These examples are taken from the test set and reflect true predictions since they have not been seen by the model during training or validation (Fig. 4).
Analyzing the prediction performance of the enzymatic transformer
The comparisons above showed that an excellent prediction performance was reached by the transformer trained using MTL combining the USPTO and the ENZR dataset using full enzyme names as enzyme information. Retraining this model with different splits of training, validation and test sets gave indistinguishable results in terms of prediction accuracy. This model was selected for further investigation and is referred to as the "enzymatic transformer".
Considering that many reactions in the ENZR dataset contain multiple enzymes, we wondered if our transformer might be confused in such situations because the main enzyme and the cofactor regeneration enzyme are not labeled as such. Indeed, the prediction accuracy of the enzymatic transformer was lower for reactions with multiple enzymes compared to reactions with a single enzyme (Fig. 3B). However, in many cases of multi-enzyme reactions including cofactor regeneration, the transformer provided the correct prediction when omitting the cofactor regenerating enzyme or swapping it for an equivalent one (glucose dehydrogenase to phosphite dehydrogenase, Fig. S3†).
Since transformer models require a large number of examples for good performance, we also tested prediction accuracy as a function of the number of occurrences of the enzyme name in the training set. Indeed, a prediction accuracy of almost 80% was reached for lipases, which were the most abundant in the training set (Fig. 3C). Nevertheless, prediction accuracy reached a good level (60%) as soon as more than five examples of a particular enzyme were present in the training set.
In the best transformer model using MTL on full sentences, there was a clear association of the prediction confidence score with accuracy, as observed with other transformer models (Fig. 3D). 22 Overall, 85.5% of the predictions with confidence score > 98% were true and 75.6% of the predictions with confidence score < 80% were false, suggesting the use of confidence score values > 98% or < 80% as indicators for a true (the reaction is worth testing) or false (the reaction outcome is uncertain) prediction.
Since the subset of the test set containing the word "lipase" performed best (Fig. 3C), we evaluated this subset exhaustively with all models (Fig. 3E). While models trained on the USPTO or ENZR dataset without enzyme information performed poorly (Fig. 3E, entries (a) and (b)), combining both sets with STL (entry (c)) or MTL (entry (d)) reached an excellent accuracy (>70%), indicating that the presence of an ester functional group is sufficient for the model to recognize a lipase biotransformation even in the absence of the enzyme name. However, models trained with ENZR alone using only the "-ase" word or the full sentence performed poorly (Fig. 3E, entries (e) and (h)), showing that this relatively small dataset contained insufficient general chemistry knowledge for training even for the relatively simple lipase reaction. Overall, the model trained on both datasets using STL and the full enzyme description performed best for lipases, as observed in the entire dataset (Fig. 3E, entry (j)). However, scrambling the enzyme information between different reactions in the lipase only test set did not decrease prediction accuracy as dramatically as for the full set, reflecting the fact that all lipases catalyze very similar reactions. In addition, 36.89% of the lipase test set cases were reactions with Candida antarctica lipase B, the most frequently used lipase in biotransformations, in which case swapping the enzyme information does not induce any change.
Enzymatic reactions are often used to perform kinetic resolutions, typically using hydrolase enzymes such as lipases, or to transform achiral substrates into chiral products, typically to produce chiral alcohols or amines from achiral ketone precursors. To evaluate the performance of the transformer on such reactions, we defined enantiomeric resolutions as enzymatic reactions containing chiral centers, identified by the presence of at least one "@" character in the SMILES, in the reaction products only, which corresponded to 6495 reactions in the entire ENZR dataset (20.18%), and 687 reactions in the test set (21.35%). The relative performance of the different transformer models in this subset was comparable to that of the entire dataset, indicating that the transformer model was able to learn the enantiomeric preference of enantioselective enzymes as successfully as the overall enzymatic transformation (Fig. 3E).
Examples of correct and incorrect predictions by the enzymatic transformer
The types of enzymatic reactions predicted correctly by the enzymatic transformer are well illustrated by selected cases (Fig. 5). These include the correct product prediction including chirality for kinetic resolutions using lipases (reactions (1) 40 and (2)), 41 two enantioselective reductions of ketones using alcohol dehydrogenases (reactions (3) 42 and (4)), 43 an enantioselective imine reduction (reaction (5)) 44 and reductive amination with a transaminase (reaction (6)). 45 Considering that none of the products of these reactions have been seen by the model during training, the ability of the enzymatic transformer to predict not only the correct reaction product but also the correct stereochemical outcome of the enantiomeric resolution reactions is remarkable. It must be pointed out that the prediction is always done by analogy to examples, including cases of engineered enzymes. For instance, in reaction (1) with a mutant CALB enzyme, the transformer has learned from the training set that this triple mutant has an altered stereospecificity, and listing the mutation is sufficient for the model to make the correct prediction in the example from the test set. The product structure prediction is still correct but the stereoselectivity is lost when using simply "Candida antarctica lipase B" as enzyme description, which corresponds to the experimental result (Fig. S4†).
Cytochrome P450 mediated regioselective demethylation (reaction (7)) 46 or hydroxylations (reactions (8) 47 and (9)) 48 further illustrate the predictive power of the enzymatic transformer. Of the 405 cytochrome P450 mediated reactions in ENZR, 316 were used in the training set and 46 in the validation set. The resulting enzymatic transformer correctly predicted the product structure of 17 (40%) of the 43 cytochrome P450 reactions in the test set considering the top 1 predictions and 22 (51%) considering the top 2 predictions. The numbers increased to 21 (49%) correct predictions for the top 1 and 25 (58%) for the top 2 predictions when ignoring stereochemistry. These prediction accuracies are far from perfect but still very remarkable considering that the reaction site and the type of transformation of cytochrome P450 reactions are difficult to predict for a chemist (Fig. S5 and S6†).
In the above examples, a shorter description of the enzyme often reduces the confidence score and may induce errors in the predicted stereochemistry or product structure (red labels in Fig. 5 and S4†). Such errors when using short enzyme names are not surprising considering that models trained with only "-ase" words performed worse than models trained with the full enzyme description (Fig. 3A).
Analyzing unsuccessful predictions by the enzymatic transformer in a random sample of 200 reactions from the test set selected to cover various reaction types and enzymes provides further insights (Fig. 6). Inaccurate predictions may sometimes simply reflect errors in database entries. For instance, the enzymatic transformer correctly predicts, with a high confidence score, the formation of thymine from the hydrolysis of a thymidine nucleoside analog by uridine phosphorylase, however the database entry wrongly recorded the isomeric 6-methyl-uracil as the product (reaction (10)). 49 The model also correctly predicts with a high confidence score the alcohol hydrolysis product in the hydrolysis of a β-hydroxysulfone by porcine liver esterase. However, this product is unstable and spontaneously eliminates to form a styrene, which is the product isolated and recorded in the database (reaction (11)). 50 Furthermore, the model correctly predicts that 5-deoxy-β-D-ribofuranose is the product formed by the action of a nucleosidase on the parent adenosine nucleoside, which it writes down in the cyclic hemi-acetal form, while the database entry recorded the open-chain aldehyde form (reaction (12)). 51 Other examples reflect true limitations of our model, for example errors in the regioselectivity of hydroxylation of 7-methoxy-3,4-dihydronaphthalen-1(2H)-one (reaction (13)) 52 and α-naphthol (reaction (17)) 53 by cytochrome P450. In the case of the formation of (+)-δ-cadinene from geranyl pyrophosphate by (+)-δ-cadinene synthase, our model predicts the correct product structure and stereochemistry, however the deuterium label, which is lost during cyclization, is wrongly incorporated into the predicted product (reaction (14)). 54 The model may also predict the correct product structure but the opposite enantiomer, as illustrated for the benzylic hydroxylation of ethylbenzene by cytochrome P450 (reaction (15)), 55 or with missing stereochemistry, as illustrated for the biotransformation of 4-methylcyclohexanol by a sequence of an alcohol dehydrogenase and a cyclohexanone monooxygenase to produce an enantiomerically pure lactone (reaction (16)). 56 Note that the enzymatic transformer can only predict the structure of reaction products based on what it has learned from examples in the ENZR source database. For example, the reaction rates of 49 different alcohol substrates with a wild-type choline oxidase (WT) and an engineered version with an expanded substrate scope (M) have been reported with a broad range of values. 57 However, the Reaxys entry used for ENZR attributed each reaction only to one of the two enzymes, which was in each case the faster reacting enzyme, even if the rates were almost equal. The enzymatic transformer was trained with a random subset of 32 reactions attributed to M and five reactions attributed to WT (Fig. S7†) and validated with five M and two WT cases (Fig. S8†). The model then correctly predicts the two WT and three M reactions in the test set, however in each case the same product is predicted with very high confidence for both WT and M enzymes (Fig. S9†). This prediction is correct for the two WT cases where the reported rates are almost equal for WT and M, but inaccurate for the three M cases where the activity of WT is much lower, including one case where even the M rate is impractically low, reflecting the fact that the training data does not consider reaction rate information.
How to use the enzymatic transformer
The examples discussed above belong to the ENZR test set, for which the product molecules have never been seen by the enzymatic transformer during training and validation, but they are recorded cases for which a look-up in the scientific literature will give the answer. In a possible application, one might use the enzymatic transformer to select which enzyme might be best suited for a given biotransformation not yet recorded in the dataset. To carry out such a prediction, one would analyze the product structures and confidence scores returned by the model when presented with a given substrate and various enzymes.
As a theoretical example, we consider the reduction of levulinic anilide to either enantiomer of the corresponding chiral alcohol, a reaction which is not present in the training set. We used the enzymatic transformer to predict which product would be formed by exposing this ketone to 163 alcohol dehydrogenases and 60 ketoreductases in the ENZR dataset. In this case, the transformer model predicts with high confidence two experimentally verified cases of two different ketoreductases in the test set forming either the (S) or the (R) enantiomeric alcohol enantioselectively. In addition, the transformer also proposes high confidence reactions to either enantiomer involving other ketoreductase and alcohol dehydrogenase enzymes, which could be considered for experimental testing (Fig. 7).
One might also use the enzymatic transformer to predict which substrates might be converted by a given enzyme. To illustrate this point, we considered the enzyme "D-glucose dehydrogenase alcohol dehydrogenase ymr226c from Saccharomyces cerevisiae", which is documented in six reactions of the training set to reduce various acetophenones enantioselectively; the model correctly predicts the product structure and stereochemistry for the two examples in the test set (Fig. S10,† substrates D1 and D2). One can then challenge the enzymatic transformer to predict which product might be formed with further ketone substrates and the same enzyme. The transformer predicts the probably correct alcohol products with high confidence scores for ketones that are structurally related to the database examples (Fig. S10,† substrates D3-D15). Among further analogs that are less similar, three cases are predicted with high confidence (Fig. S10,† substrates D16-D18), and the remaining five cases have much lower confidence scores as well as sometimes unlikely product structures, indicating that the model is uncertain about the possible outcome of these reactions (Fig. S10,† substrates D19-D22).
Conclusion
We had previously shown the principle of transfer learning to specialize the general USPTO transformer model at the example of carbohydrate reactions, however this approach used SMILES information only and a limited set of 405 tokens. 22 Here we showed for the first time that the general USPTO transformer model can be used as a basis for transfer learning using more complex language information, here an extended vocabulary of several thousand language and atomic tokens describing enzymatic reactions in text format. Despite the relatively small size of the ENZR dataset of enzymatic reactions used here, the resulting enzymatic transformer model predicted the outcome of enzymatic transformations including enantioselective reactions with excellent accuracy. This type of approach might be extended in the future to incorporate additional information such as reaction conditions and experimental procedures.
It should be noted that the text descriptions of enzymes used in our ENZR dataset most often represent a rather plain description of the reaction and substrate involved, e.g. "tyrosine decarboxylase", which provides a direct hint for the enzymatic transformer for proposing a product structure. Nevertheless, other descriptions of enzymes such as their EC number, 14 their amino acid sequence or a representation of the sequence produced by an auto-encoder, 58,59 might also be exploitable for the enzymatic transformer if these were available, since these descriptions in principle contain the same information, even if in a more indirect manner. 62 Here we demonstrated the feasibility of using a text description of an enzyme to train a transformer model to predict product structure given a substrate and the enzyme. The same data type might be suitable to train a transformer to predict the substrate structure given a product and an enzyme (retro-synthesis) or to predict an enzyme name given a substrate and a product, however to succeed such models might require much larger datasets than the relatively small ENZR dataset used here.
In this study, we obtained the best prediction accuracies when using multi-task transfer learning based on the full description of the enzymes. However, model performance was limited by database size and was lower for enzymes for which only a few examples were available. Furthermore, analysis of successes and failures showed that model performance is also limited by the occurrence of database entry errors. Model performance can probably be increased by using larger and higher quality training datasets. Furthermore, the performance of our enzymatic transformer model was highest with the enzymes that are most represented in the ENZR dataset, which were lipases and dehydrogenases due to the historical nature of the data source, reflecting which enzymes have been mostly used in the literature. Considering that transformer models learn from examples, increasing the performance for other types of biotransformations such as keto-reductases and monooxygenases will critically depend on acquiring training data for such types of enzymes. Provided the availability of experimental training data, the transfer learning approach demonstrated here should be optimally suited to integrate this data into predictive models capable of assisting chemists in implementing biotransformations for chemical synthesis.
Data collection
The USPTO data was downloaded from the patent mining work of Lowe. 24 The ENZR data set was downloaded from Reaxys. 25 Enzymatic reactions were found by querying the "enzymatic reaction" keywords directly in the search field.
Transformer training
The enzymatic transformer model was trained based on the molecular transformer work of Schwaller et al. 18 Version 1.1.1 of OpenNMT, 38 freely available on GitHub, 60 was used to preprocess, train and test the models. Minor changes were made relative to the version of Schwaller et al. 18 SMILES were tokenized using the same tokenizer as Schwaller et al. 18 The ENZR description sentences were tokenized by the Hugging Face Tokenizers 37 using byte pair encoding, 61 resulting in a vocabulary of 6139 language tokens (top 40 most frequent tokens in Fig. S11†) whose occurrence frequencies follow a power-law distribution shown in Fig. S12.†
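For illustration, the following is a minimal sketch of this sentence-tokenization step using the Hugging Face Tokenizers library; the vocabulary size mirrors the one reported above, while the tiny corpus, variable names and special-token choice are placeholders rather than the exact settings used in this work.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Placeholder corpus standing in for the ENZR enzyme description sentences
enzyme_sentences = ["tyrosine decarboxylase", "lipase from Candida antarctica", "alcohol dehydrogenase"]

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=6139, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(enzyme_sentences, trainer=trainer)

# Encode one enzyme description into language tokens for the transformer source sequence
print(tokenizer.encode("tyrosine decarboxylase").tokens)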
Fig. 1 General concept of the enzymatic transformer training. The USPTO data set contains reaction SMILES describing reactants, reagents and products. The ENZR data set contains reaction SMILES as well as an additional text component.
Fig. 2 Analysis of the ENZR dataset. (a) Number of reactions depending on how many "-ase" words are present in the sentence. (b) Frequency of the top 15 "-ase" words depending on the count of enzyme names per reaction. (c) TMAP of reaction similarity, color-coded by the 10 most frequent "-ase" word combinations as listed in (b). The "other" category groups reactions with "-ase" words other than the top 10 "-ase" words as well as reactions containing more than one "-ase" word. Inset lower right: TMAP highlighting enantioselective and kinetic resolution reactions.
Fig. 3 (A) Top prediction accuracy and invalid SMILES on the enzyme reaction test set for various models. (a) USPTO model from Schwaller et al. trained without any enzymatic transfer learning and tested without enzyme sentence. (b) Enzymatic DB without USPTO data set. (c) USPTO model transfer learned (sequential) to enzymatic DB trained without any enzyme description part. (d) USPTO model transfer learned (multi-task) to enzymatic DB trained without any enzyme description part. (e) Enzymatic DB without USPTO data set trained with '-ase' words only. (f) USPTO model transfer learned (sequential) to enzymatic DB trained with '-ase' words only. (g) USPTO model transfer learned (multi-task) to enzymatic DB trained with '-ase' words only. (h) Enzymatic DB without USPTO data set trained with enzyme full sentences. (i) USPTO model transfer learned (sequential) to enzymatic DB trained with enzyme full sentences. (j) USPTO model transfer learned (multi-task) to enzymatic DB trained with enzyme full sentences. (k) Best multi-task model tested by swapping enzyme full sentences between reactions of the test set. (B) Accuracy on the test set depending on how many "-ase" words are present in the sentence. (C) Accuracy on the test set depending on how frequently the "-ase" word combination from the sentences appears in the training set. (D) True prediction rate against confidence scores; bins were adjusted to obtain an equal distribution of predictions over the bins. Vertical red bars represent our limits to indicate true or false predictions. (E) Top prediction accuracy and invalid SMILES on lipase reactions of the test set only. (F) Top prediction accuracy and invalid SMILES on enantiomeric resolution reactions of the test set only.
Fig. 4 Examples of substrates applied to various enzymes using the MTL transformer with full sentences, which illustrate predictions of reactions from the test set not seen by the model during training. The color code indicates high confidence predictions (score > 98%, black), uncertain predictions (score 80-98%, blue), and low confidence predictions (score < 80%); see Fig. 3D for discussion of confidence scores. All enzymatic reactions are predicted correctly; however, the confidence score varies. The predictions of the MTL no-text model are shown to illustrate what the transformer predicts when the enzyme information is missing.
Fig. 5 Examples of successful predictions by the enzymatic transformer.
Fig. 6 Examples of unsuccessful predictions by the enzymatic transformer.
Fig. 7 Examples of usage of the enzymatic prediction model to find suitable enzymes leading to different enantiomers. Screening sentences were extracted from the entire dataset. Filtering was applied for dehydrogenases and ketoreductases from single-enzyme systems and for simple sentences (fewer than 5 words), resulting in a total of 223 sentences (163 dehydrogenases and 60 ketoreductases). Shown are the top 5 confidence-score sentences leading to each enantiomer. Red-colored sentences were present in the test set, providing experimental proof.
The influence of motivation in recruitment and retention of rural and remote allied health professionals: a literature review
Introduction: Recruitment and retention of allied health professionals (AHPs) to remote and rural Australia is challenging and correlates with poorer health status of remote and rural residents. While much has been written about the recruitment and retention problem, this study took a new approach by reviewing the literature describing the motivation of AHPs to work in remote and rural areas and then analyzing the findings from the perspective of motivation theory using Herzberg’s extrinsic and intrinsic classification. Intrinsic motivation incentives are known to contribute to job satisfaction and come from within the individual, for example the pleasure derived from autonomy or challenge at work. In contrast, extrinsic motivation incentives are provided by the job and include such factors as salary and professional development provisions. Extrinsic incentives are important because they prevent job dissatisfaction. Job satisfaction has been shown to be linked with increased retention. Method: Thirty-five articles, including 26 from Australia, met the inclusion criteria. The key findings related to motivation from each article are outlined and the results classified into the extrinsic–intrinsic framework. The incentives are then further analyzed as having a positive or a negative influence. Results: In total, 38 different incentives were described a total of 246 times. Of the total, almost half (n=115) comprised extrinsic incentives with a negative influence, with poor access to professional development, professional isolation and insufficient supervision the most frequently reported. Rural lifestyle and diverse caseloads were the most frequently mentioned positive extrinsic incentives,
Introduction
The recruitment and retention of allied health professionals (AHPs) to remote and rural areas in Australia, and elsewhere in the world, has long been problematic [1][2][3]. Maldistribution of the health workforce has been widely acknowledged, with urban populations having the greatest access to health care, including that provided by AHPs. Increasing remoteness correlates with increasing workforce shortages and a higher burden of disease [3][4][5][6][7]. The reasons professionals come, what makes them stay and why they leave remote and rural areas have been the subject of discussion and debate in both the academic literature and government policy [2][3][4][5][8][9][10][11][12].
This review analysed the literature describing AHP motivation to work or not work in remote and rural areas. Motivation is defined as the reasons, beyond personal traits, that drive an individual towards a goal 13. In this article the 'individual' is an AHP and the 'goal' is working in remote or rural workplaces. The aims of this review were to address two research questions: (i) what does the literature describe as the incentives that motivate AHPs to work in remote and rural areas; and (ii) are these incentives classifiable into a framework that is useful in addressing the workforce maldistribution? An analysis of this type can assist in policy design and organisational strategies which focus on the recruitment and retention problem. This analysis is underpinned by Frederick Herzberg's seminal framework of motivation at work, which classified worker motivation into two types: intrinsic and extrinsic 14. Herzberg's research provided insights into motivation that are integral to current understandings of job satisfaction, which is associated with workforce retention 15,16.
Extrinsic motivation incentives are provided by the workplace. Examples include salary, work status and security, leave allowances, and professional development. These types of incentives have been termed 'hygiene factors' because they prevent job dissatisfaction 14 rather than providing job satisfaction. Perceived restriction of extrinsic incentives (eg inadequate salary for the responsibilities expected) has been linked to reduced job satisfaction 14,17,18.
Intrinsic motivation incentives are inherent in work, that is, the pleasure derived from the work itself 19. They make a person 'feel good' about their work and their performance of it. These incentives provide a reason above and beyond the extrinsic incentives to engage in the work and contribute directly to job satisfaction 14. Examples include challenge, autonomy, and perceived significance of the work.
Studies of health professions, including nursing, medicine and allied health, have confirmed the relative importance of intrinsic incentives over extrinsic incentives in providing job satisfaction 17,18,[20][21][22][23]. For example, Randolph, in a study of 328 rehabilitation professionals in North America, concluded that extrinsic incentives were weaker in significance for predicting job satisfaction and intent to stay compared with intrinsic incentives, such as professional growth and a work environment in line with personal values 18. Likewise, a study by Lyons of 787 American healthcare professionals found that three out of four of the top factors for predicting job satisfaction were intrinsic incentives, including 'worthwhile accomplishment and opportunities for growth and recognition' 17. Kamien, in a longitudinal study of the Australian rural medical workforce, defined three incentives contributing to professional job satisfaction: variety in work, autonomy of practice and the feeling that one is doing an important job 16.
For any individual at any point in time, there is a dynamic balance of extrinsic and intrinsic incentives. Unique environmental, professional and personal factors interact with motivation to impact job satisfaction and recruitment and retention outcomes 5,24-29. Age and experience interplay with background and overall goals. Younger professionals rank career opportunities (extrinsic) more highly than older professionals 30,31. The intrinsic reward of job challenge and autonomy can offset the extrinsic disincentive of personal and professional isolation associated with remote and rural work 31,32. Additionally, factors that operate as incentives for some may be viewed as disincentives by others (eg rural lifestyle) 33.
The factors that motivate AHPs to work in remote and rural areas are of considerable interest to both service providers and policymakers because recruitment and retention of AHPs to these areas is problematic 11. One model showed that the risk of an AHP leaving a remote or rural position is twice that of a nurse or doctor 11. There is a body of descriptive research that has examined the contributors to recruitment and retention; however, almost no literature has looked at the motivation of remote and rural AHPs from the perspective of extrinsic and intrinsic motivation. This literature review examined the incentives affecting motivation in AHPs working in remote and rural areas, offering new approaches to improving recruitment and retention. In particular, it looked for the balance of extrinsic and intrinsic incentives and asked how this new insight can assist in the development of recruitment and retention strategies and practices.
Method
Literature was retrieved for this investigation (October 2010) using the PubMed and CINAHL electronic databases as well as Google Scholar (Fig 1). Key words included allied health professional/personnel, motivation, job satisfaction, rural, remote, recruitment, retention, workforce, career decision, vocation, in addition to specific allied health professional titles (eg occupational therapist, audiologist). Variants on professional titles were used where appropriate (eg speech pathologist, speech-language pathologist, speech therapist). Snowballing techniques were then applied whereby the reference lists of retrieved publications were searched for other relevant citations. Initial inclusion criteria included Australian-based research studies or systematic reviews published between 1990 and October 2010; however, the search was broadened to include other English-language studies or reviews in developed countries. If studies clearly included AHPs plus other health professionals, they were included; however, articles specifically focussed only on doctors or nurses were excluded. Studies of AHP student perceptions of remote and rural work were included.
A variety of definitions for 'rural' and 'remote' was found in the retrieved literature, so for the purposes of this article the terms are used in a common-sense fashion to refer to the range of communities beyond major metropolitan areas. Similarly, the term 'allied health' is one that currently has no agreed definition; however, it is generally understood to be inclusive of health professionals with a tertiary qualification, eligible for registration with a recognised professional body or board, who 'apply their skills to restore optimal physical, sensory, psychological, cognitive and social function' (p4) 34. This excludes health professionals with a specific medical or nursing qualification. Included allied health professions are listed (Table 1).
Results
The results of the search are detailed (Fig 1). More than 1000 articles were retrieved initially by the first author; however, full papers merited scrutiny by all authors when they met the inclusion criteria. Thirty-five studies (26 Australian, five Canadian, three American and one comparing American and Canadian AHPs) were included. When an article sought to answer research questions that were broader than the motivation of AHPs to work in remote and rural areas, only those findings specific to motivation have been included. The methodology in the eligible articles was predominantly survey or focus group studies using a mix of qualitative and quantitative analyses. An overview of each study is provided, as are the major findings related to motivation (Table 2).
Australian research was most prominent in the retrieved articles. The reasons for this could include that Australia has a historical context rooted in the importance of rural areas.
Additionally, it has the third lowest population density in the world (after Namibia and Mongolia; Canada is eighth, the USA is 53rd and the UK is ranked 161 of 193 countries counted) 35, yet as a developed country Australia has the resources to invest in research capacity and infrastructure to address the very significant needs of its remote area populations. All studies reporting demographic characteristics had a predominance of Caucasian female respondents, which reflects the feminised nature of the allied health workforce. Additionally, respondents were largely trained in their country of residence, which reflects the current barriers to pathways for recognition of international allied health graduates. Some professions, particularly those with small workforces such as podiatry, were only represented in studies that included a range of allied health professions because there are comparatively few of these professionals working in remote areas.
The specific findings of each article were analysed using Herzberg's extrinsic and intrinsic classification. The results of this further analysis are shown graphically according to the number of articles reporting each extrinsic incentive (Fig 2), and the number of articles reporting each intrinsic incentive (Fig 3). The vertical axis in each figure presents the incentives reported in the articles while the horizontal axis shows the number of articles reporting each incentive. Note that the horizontal axis has a positive and a negative side, depending on the influence of the incentive. An incentive could create a positive effect by its presence, for example 'family nearby'. Conversely, an incentive could create a negative effect by its presence (eg 'large caseloads') or by its absence (eg 'lack of work resources').
Twenty different extrinsic and 18 different intrinsic incentives were mentioned a total of 246 times. Of these, a comparison of Figures 2 and 3 demonstrates that extrinsic factors with a negative influence (n=115) comprise almost half of all incentives mentioned, while intrinsic incentives with a negative effect are the least frequently reported (n=18). Incentives that influence positively were reported at similar frequencies for both extrinsic (n=54) and intrinsic (n=59) incentives.
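As an illustration of this tallying step, the following sketch (Python, with entirely hypothetical rows standing in for the coded incentive mentions) cross-tabulates mentions into the extrinsic/intrinsic by positive/negative framework:

import pandas as pd

# Hypothetical records: one row per incentive mention coded from an article
mentions = pd.DataFrame([
    {"incentive": "poor access to professional development", "type": "extrinsic", "influence": "negative"},
    {"incentive": "rural lifestyle", "type": "extrinsic", "influence": "positive"},
    {"incentive": "autonomy", "type": "intrinsic", "influence": "positive"},
    {"incentive": "feeling overwhelmed by caseload", "type": "intrinsic", "influence": "negative"},
])

# Counts per cell of the extrinsic/intrinsic x positive/negative framework
print(pd.crosstab(mentions["type"], mentions["influence"]))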
Discussion
Individually, each research article has a small sample size, but combined the data represent strikingly similar views on remote and rural recruitment and retention incentives from more than 3000 AHPs. The factors described in the allied health literature are consistent with those described in the literature relating to nurses, doctors and teachers in underserved areas 16,17,61.
[Table 2 fragment, example row: Bent (1999) [32], qualitative (semi-structured interviews); investigated support, education and training needs; incentives with a negative influence dominated findings, with more extrinsic than intrinsic incentives highlighted. Table abbreviations: PT, physiotherapists (physical therapists in North America); SP, speech pathologists (speech-language pathologists in North America).]
Extrinsic incentives or hygiene factors
Extrinsic incentives are provided by the job. In the literature reviewed, they included both tangible rewards such as salary and less tangible ones such as 'lack of community knowledge of role' (Fig 2). The power of extrinsic rewards lies less in what the rewards provide and more in what they prevent. They act to prevent job dissatisfaction. The absence of adequate extrinsic rewards reduces job satisfaction; however, the presence of extrinsic incentives does not increase job satisfaction 14.
Overall, the results in Figure 2 show that extrinsic motivation incentives characterised by a negative influence predominated in two ways: more disincentives were reported, and more articles reported each disincentive, compared with incentives characterised by a positive influence. Herzberg's hygiene theory, where lack of extrinsic reward reduces job satisfaction, flags the possibility that AHPs who work in remote and rural areas are likely to have reduced job satisfaction with correspondingly increased rates of turnover. This is supported by authors who demonstrated an increased rate of turnover for AHPs in remote areas compared with other health professionals in remote areas 11,32.
Many of the extrinsic incentives are linked. Poor access to professional development 8,28,41,48 and insufficient supervision 8,52 could both exacerbate the sense of professional isolation. For example, occupational therapists reported difficulty accessing feedback on their performance 45. Services that implemented strategies to improve supervision and professional development opportunities reported positive outcomes 47,53; however, it should be noted that one study 62 failed to find a significant association between access to professional development and intention to stay. There were mixed findings on the financial implications of remote and rural work. Private practitioners tended to report a stronger financial motivation to work rurally, particularly pharmacists, physiotherapists and dentists 39,42,63. Interestingly, a study of the Northern Territory (Australia) dental workforce reported that financial incentives did not have lasting effects on retention because dentists recruited on the basis of salary tended to be retained for 5 years or less 42. In contrast, young graduates who believed their responsibilities were greater than their financial compensation were dissatisfied 31.
A frequently mentioned positive extrinsic incentive for AHPs to work in rural or remote areas is the perception that these positions facilitate rapid development of professional and administrative skills due to the broad experience and diverse presenting caseloads 24,39,44. New graduates find this particularly attractive as it provides a career fast-track, a growth experience, in order to secure a subsequent position in an urban area 31,54. However, the resultant turnover disadvantages the region and has led to the labelling of remote and rural areas as 'professional nurseries' 64. In contrast to the desirable career fast-track potential is the anxiety expressed by some AHPs over losing professional skills when working in an isolated remote or rural position 9,31. Perceptions that metropolitan peers do not respect skills obtained in remote or rural practice were reported 57 but were often found to be untrue when returning to urban positions 31. Anxiety concerning the maintenance of professional skills is associated with frustration that specialisation and a career pathway for rural generalist clinical AHPs do not exist 44,55,56,60. Creative professional development opportunities and two-way collaboration with metropolitan centres could assist; however, policies and strategies to fund and backfill positions during leave must be implemented 57. Interestingly, a recent study of rural Canadian occupational therapists reported more reward than challenge in rural work and theorised that access to professional development and support using telehealth and other distance technologies may be making a positive difference 37.
A number of extrinsic incentives were related to the characteristics of rural living. Rural lifestyle 24,48, the presence of family locally 42,46, and positive financial affordability make some rural areas more attractive than urban areas 24,50. Conversely, it was clear that for some AHPs the cost of living and the cost of accessing professional and social networks are higher in remote and rural areas 32,45. Thus personal views on rural living, the desire to locate near family and the financial implications of the particular remote or rural location are unique to individuals. Selection of rural-background students into training programs and adequate provision of work and educational opportunities in rural areas for extended families 48 could positively influence retention at critical life-stage decision points.
A large number of studies highlighted the stress resulting from managing large caseloads in remote areas 45,59. In stark contrast, only two studies reported manageable caseloads and concomitant increased job satisfaction 53,55. Lack of locum support combined with long working hours 39,40, lack of access to air travel 56 (which would reduce the time required to travel to clients), excessive travel 8,44 as well as the idea that 'there is nobody else' (ie positions are left unfilled 32) add to the stress.
Overall, the studies demonstrated that hygiene factors or positive extrinsic incentives were lacking, underscoring the risk of continued high turnover of AHPs working in rural and remote areas, in part from reduced job satisfaction.
Intrinsic incentives
Intrinsic incentives are what make a person feel good about engaging in their job 14,65 and contribute directly to job satisfaction. In the literature reviewed this included themes such as challenge and autonomy. A predominance of intrinsic incentives with a positive effect is shown (Fig 3).
Autonomy was the most frequently mentioned intrinsic reward, which aligns with other literature 14,66 emphasising the importance of autonomy in building motivation. Autonomy was valued in regard to decision-making about caseload management and schedules 39, when it allowed work to be structured in a way that suited the professional 45, and because it allowed professionals to respond creatively to community needs rather than being locked into a bureaucratic system of service delivery 51.
There was a strong theme of connectedness operating as an intrinsic incentive. While the presence of extended family enhanced connectedness to communities 24, professionals also reported an emotional commitment to rural life 39. Rural communities can enhance or reduce the motivation of AHPs to remain in their community by facilitating friendships and support networks 42; the isolation experienced by young professionals can be alleviated by access to peers 54.
Educational 48 and employment provision 28 for the AHP's family also enhanced feelings of belonging.
Challenge, as an intrinsic motivator, exists on a continuum. Intrinsic motivation will flourish when the level of challenge contributes to a sense of achievement and fulfilment but does not overwhelm 67. The AHPs who felt they could not meet the needs of the community reported feeling overwhelmed 56. For example, new graduates enjoyed the challenge of diverse responsibilities found in remote and rural work and felt that the long-term career benefits outweighed the disadvantages 54, while experienced professionals looking for a new challenge were attracted by the novelty of working with Indigenous communities 32. The drawback to this finding is that novelty alone appears insufficient to influence long-term retention. Hall et al demonstrated statistically that retention of dentists in remote areas who were motivated by the novelty of cultural challenge was 5 years or less 42. Workplaces seeking to reduce high recruitment costs 11 should look beyond novelty as an inducement and ensure that new professionals are well oriented and supported for the challenges they will encounter.
Retention is negatively impacted when a professional does not feel their work is significant or respected. Therefore it is of concern that some studies found AHPs felt their work was not valued by the community 50. This was more likely to be an issue for professionals such as occupational therapists, whose role can be diverse. Supportive management practices such as timely recruitment to vacancies 32, locum provision, teamwork 54 and a critical mass of professionals 50, strong liaison with other local health providers and engagement with the community 51 to ascertain their needs could alleviate this feeling. Furthermore, when positions are vacant for long periods, as is often the case in remote areas, it is much harder for incoming professionals to build networks and establish credibility and trust 31.
Job satisfaction and the imbalance of extrinsic and intrinsic incentives
The findings of this literature review have demonstrated that the extrinsic/intrinsic classification is a useful framework from which to consider the motivation of AHPs to work in remote and rural areas. Figures 2 and 3 demonstrate the overwhelming predominance of extrinsic disincentives compared with all other incentives. Given Herzberg's contention that extrinsic disincentives reduce job satisfaction, the burden of negative extrinsic factors would suggest the likelihood of a dissatisfied workforce.
Few studies formally assessed job satisfaction. Those that did frequently commented that AHPs were dissatisfied with aspects such as conditions of employment 31. Despite these frustrations, summary comments often referred to AHPs being satisfied with remote and rural work 58. These kinds of statements are likely to refer to the intrinsic factors, which provide a measure of satisfaction but cannot fully mediate the extrinsic disincentives. Congruent with literature on urban AHPs 68 and nursing 69, it could be hypothesized that high turnover is related to job dissatisfaction from extrinsic incentives.
Policy implications
Evidence from the medical workforce suggests that policy aimed at addressing the extrinsic disincentives may make a difference to AHP workforce recruitment and retention. Australian government policy provides significant extrinsic incentives to the rural and remote medical workforce as a successful strategy to reduce shortages 5, with the literature showing remote and rural doctors to also have a lower risk of turnover than AHPs 11. Finally, the literature on GP job satisfaction demonstrates reasonable job satisfaction regardless of location 70. Given the complementary relationship between extrinsic and intrinsic incentives, fortifying existing intrinsic incentives might provide a parallel strategy for addressing recruitment and retention challenges.
Limitations and further directions
This analysis was unable to capture data on the interaction between incentives and life stage because this workforce demographic was not always examined in the literature. It appears that new graduates are more willing to cope with an absence of extrinsic reward because of the potential benefits they gain. Looking at life stage could be an important avenue for future research 71, with the potential to reduce retention problems further by implementing life-stage-appropriate strategies.
Conclusion
Extrinsic motivation factors for remote and rural AHPs are clearly deficient. Recruitment and retention strategies must address this need urgently. While it is clear that the intrinsic incentives which contribute to job satisfaction are present, they appear insufficient to mediate the burden of extrinsic disincentives, which contributes to excessively high turnover. The absence of positive extrinsic incentives is eroding the job satisfaction that could improve retention. As demonstrated in the literature 53,57, remote and rural healthcare organisations that encourage and foster the fulfilment of a professional's intrinsic motivation, as well as providing the extrinsic elements of motivation, have the most potential to recruit and then enhance the longevity of the AHP in that position.
Figure 2: Number of articles reporting each extrinsic incentive.
Figure 3: Number of articles reporting each intrinsic incentive.
Table 1: Allied health professions eligible for inclusion in this study.
Efficacy and Safety of Tunneled Pleural Catheters in Adults with Malignant Pleural Effusions: A Systematic Review
Background Malignant pleural effusions (MPE) are a frequent cause of dyspnea and discomfort at the end of cancer patients' lives. The tunneled indwelling pleural catheter (TIPC) was approved by the FDA in 1997 and has been investigated as a treatment for MPE. Objective To systematically review published data on the efficacy and safety of the TIPC for treatment of MPE. Design We searched the MEDLINE, EMBASE, and ISI Web of Science databases to identify studies published through October 2009 that reported outcomes in adult patients with MPE treated with a TIPC. Data were aggregated using summary statistics when outcomes were described in the same way among multiple primary studies. Main Measures Symptomatic improvement and complications associated with use of the TIPC. Key Results Nineteen studies with a total of 1,370 patients met criteria for inclusion in the review. Only one randomized study directly compared the TIPC with the current gold standard treatment, pleurodesis. All other studies were case series. Symptomatic improvement was reported in 628/657 patients (95.6%). Quality of life measurements were infrequently reported. Spontaneous pleurodesis occurred in 430/943 patients (45.6%). Serious complications were rare and included empyema in 33/1168 patients (2.8%), pneumothorax requiring a chest tube in 3/51 (5.9%), and unspecified pneumothorax in 17/439 (3.9%). Minor complications included cellulitis in 32/935 (3.4%), obstruction/clogging in 33/895 (3.7%) and unspecified malfunction of the catheter in 11/121 (9.1%). The use of the TIPC was without complication in 517/591 patients (87.5%). Conclusions Based on low-quality evidence in the form of case series, the TIPC may improve symptoms for patients with MPE and does not appear to be associated with major complications. Prospective randomized studies comparing the TIPC to pleurodesis are needed before the TIPC can be definitively recommended as a first-line treatment of MPE.
INTRODUCTION
Malignant pleural effusions (MPE) occur in up to 15% of patients with advanced malignancies and contribute significantly to dyspnea and discomfort at the end of cancer patients' lives. 1 The current standard of care for MPEs that reaccumulate after percutaneous drainage is pleurodesis. Pleurodesis, using chemical and physical agents, has been the mainstay of treatment for decades, but questions have been raised about its safety. Talc, the most effective and commonly used pleurodesis agent, has been shown to have multiple complications, including respiratory failure, pneumonia, and treatment-related death. 2,3 There is evidence that these serious risks are reduced through the use of more carefully regulated talc particles. 4 However, pleurodesis is also associated with fever, pain, and often necessitates general anesthesia and a several-day hospitalization.
Since its approval by the FDA in 1997, the tunneled indwelling pleural catheter (TIPC) has increasingly been used as an alternate therapy. The indwelling catheter is inserted percutaneously and allows for intermittent drainage at home through an easily managed tube. 5 Shortly after FDA approval, Putnam and colleagues published the only randomized trial to date comparing pleurodesis to TIPC for MPE. 6 They found that TIPC patients experienced fewer in-hospital complications and a similar improvement in quality of life when compared with pleurodesis, but a lower rate of spontaneous pleurodesis and higher rates of late complications. Since Putnam's publication, the use of these drainage systems has become more widespread, and multiple large and small observational studies using the TIPC to treat MPE have been published.
The goal of our study is to locate, evaluate, and summarize existing clinical studies examining the safety and efficacy of the TIPC for MPE in order to help clinicians understand the risks and benefits of using these devices. We also hope to provide a foundation to encourage future randomized studies directly comparing pleurodesis to TIPC. To accomplish this, we conducted a systematic review of published studies to explore the efficacy and safety of tunneled indwelling pleural catheters in the treatment of MPE.
Data Sources and Searches
We searched the MEDLINE, EMBASE, and ISI Web of Science databases for all relevant articles published through October 2009. The search strategy was created together with a research librarian and designed to capture all terms that may be used to describe malignant pleural effusions and pleural catheters. Complete search terms for each database are included in Table 1. We also manually searched the reference lists of included studies and relevant reviews for additional studies not detected by the electronic literature searches.
Study Selection
Studies with and without a control group were eligible if they included at least eight consecutive adult patients with an effusion in the setting of malignancy who received an indwelling, tunneled pleural catheter. Criteria for exclusion were studies of nonmalignant effusions, empyema, or chylothorax; studies of large bore chest tubes or non-tunneled catheters; studies in which all patients underwent thoracoscopy, video-assisted thoracoscopic surgery (VATS), or pleurodesis; studies not published in the English language; and studies without primary data. If data appeared to be duplicated in multiple publications, only the study with the most complete data set was included, and studies excluded for this reason are explicitly noted. If a study was otherwise eligible but contained an identifiable subset of patients with excluded types of effusions or surgical procedures, the study was included and analyzed for the subset of interest.
Data Abstraction and Quality Assessment
Data were abstracted independently by two authors (MVM, KYM); discrepancies were resolved by discussion and consultation with the third author (RJK). Abstractors were not blinded to any details regarding the included studies. From each study, data were abstracted on participant age, sex, cancer type, prior treatments for effusion, length of time catheter was in place, amount of fluid drained, occurrence of spontaneous pleurodesis, symptomatic relief, quality of life, complications, hospital days, and survival days. For every outcome described in a particular study, the authors determined the numerator (i.e., the number of subjects that experienced that outcome) and denominator (i.e., the number of total subjects). Very few studies reported data on all outcomes of interest; if an outcome was not reported, that study was not included in either the numerator or the denominator. If more than one interpretation of outcomes data was possible, such as due to a discrepancy within a paper, the authors of the primary study were contacted for clarification. If the authors could not be reached, the abstractors opted towards the interpretation that would not bias in favor of the TIPC. The quality of evidence presented in the primary studies was evaluated using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group system. 7
Data Synthesis and Analysis
In review of the studies, patient populations and interventions were felt to be relatively homogeneous, as described further in the Results. There was, however, significant variation in outcome reporting. Outcomes that were described in a similar way among multiple primary studies were aggregated by summing the numerator and denominator for each given outcome to yield the total number of events per total number of subjects. Outcomes that were not the same or were not described in the same way (such as "obstructed catheter" and "malfunction of catheter") were not combined. For outcomes that were combined, means were calculated using summary statistics. To give readers a measure of the uncertainty around the calculated mean values, we included the range of values reported in the primary studies. The overall mean was calculated for patient survival and catheter permanence time from studies that reported this information. All analyses were performed using Microsoft Office Access and Excel 2007.
RESULTS
As detailed in Figure 1, the literature search identified 1,011 potentially eligible reports. After review of the title, abstract, or complete manuscript, 986 reports were excluded based on the inclusion and exclusion criteria. Only two studies 8,9 that appeared to meet other criteria were excluded due to non-English language; the authors of one of these 8 later published their experience in English, and this study is included. 10 Of the remaining 25 eligible reports, 6 were excluded due to overlap with patients reported in other studies. [11][12][13][14][15][16] The 19 included reports (Table 2) were all consecutive case series of patients with MPE who received a TIPC, except one randomized controlled trial of doxycycline pleurodesis versus TIPC; only the group receiving the TIPC is included in this analysis. 6 Three meeting abstracts 10,17,18 and one letter 19 were eligible and presented sufficient data to be included in the analysis. Eight studies were conducted outside the USA. All except one study 20 employed the Pleurx® tunneled pleural catheter system initially manufactured by Denver Biomedical (Golden, Colorado), a company that was acquired by Cardinal Health (Dublin, Ohio) in 2006. The oldest study, published in 1994, used a tube known as the Tenckhoff catheter, which is also a small-bore flexible catheter that was tunneled into the pleural space. 20 One study employed a reusable plastic pump to drain pleural fluid instead of the standard vacuum drainage bottles designed for use with the Pleurx® catheter. 21 Funding source was noted for only two reports, and in those cases the study was supported by the Pleurx® manufacturer. 6,22 Quality of evidence, as rated using the GRADE system, was very low for all of the case series and moderate for the one randomized controlled trial. 6 A total of 1,370 patients were included in the analysis, with an average age of 63.0 years and 50.5% women. Of the 1,370 patients, 1,348 patients had malignant pleural effusions; two studies included a minority of patients with pleural effusions of benign etiology, such as heart failure and hepatic hydrothorax. 23,24 Because data were not reported separately for the patients with non-MPE, these 22 patients were included in the overall analysis, but are felt to contribute insignificantly.
Most patients had recurrent effusions that had failed previous thoracentesis or other treatments; details of the individual patient populations are included in Table 2. In the 17 studies that reported distribution of cancer type among 1,236 participants, lung cancer, breast cancer, and mesothelioma together comprised approximately 70% of the included patients (33.5%, 25.9%, and 10.5%, respectively). A variety of other hematologic malignancies and solid tumors were represented in smaller numbers, as described in Table 2.
Survival and catheter permanence times were reported in 10 and 11 studies, respectively, either as a median or a mean, with or without a range. These times were defined starting from the day of TIPC placement. Survival varied from 3 to 1,240 days, with a combined mean of 87 and reported medians ranging from 59.5 to 144 days. Catheter permanence time varied from 2 to 434 days, with a combined mean of 51 and reported medians ranging from 44 to 60 days.
Studies differed significantly in how they reported outcomes, but results were pooled when possible, as described in the Methods section. Combined outcomes are presented in Figure 2. Note that denominators in this figure are different because not all studies reported all complications, and if an outcome was not recorded, the study was removed from both the numerator and denominator.
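As a minimal illustration of this pooling rule (with hypothetical per-study counts, not the actual study data), the combined rate for any outcome is simply the sum of events divided by the sum of subjects across the studies that reported it:

def pooled_rate(per_study):
    """Sum events and subjects across the studies reporting a given outcome."""
    events = sum(e for e, _ in per_study)
    subjects = sum(n for _, n in per_study)
    return events, subjects, 100.0 * events / subjects

# Illustrative (hypothetical) per-study (events, patients) counts for one complication
example = [(3, 100), (1, 250), (5, 400)]
print(pooled_rate(example))  # (9, 750, 1.2)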
Of the reported complications, malfunction of the catheter (11/121, 9.1%), catheter clogging (33/895, 3.7%), and unspecified pain (8/142, 5.6%) were the most common. One study reported that mild procedure-related pain occurred in "most" patients, 23 but pain that persisted beyond the immediate postprocedural period was documented formally in only 18/558 patients (3.2%) from five studies. Infectious complications such as empyema (33/1168, 2.8%), cellulitis (32/935, 3.4%), and unspecified infection (7/346, 2.0%) were similarly uncommon. The TIPC was removed before death because of a complication in 54/633 patients (8.5%). Tumor metastasis along the catheter tract, although initially a strong theoretical concern, was well documented in these studies and occurred in less than 1% of patients. One procedure-related death was reported in the earliest published study and was related to respiratory arrest following intravenous sedation in a patient who had previously requested no resuscitation. 20 Ten studies reported that TIPC devices were used without any complication in a total of 517/591 patients (87.5%).
Symptomatic improvement was reported in a variety of ways. Some studies simply stated that patients experienced "symptomatic improvement" without further delineation. 10,20,25 One study rated dyspnea improvement on a 3-point scale as complete, partial, or absent. 26 The remaining studies reported symptomatic improvement as "relief of dyspnea," "improvement in respiratory performance status," "increased exercise tolerance," "improvement of pain," and "catheter was useful." In sum, these studies showed that 95.6% of patients experienced symptomatic improvement after placement of the TIPC. Putnam et al. 6 used Borg scores and the Guyatt Chronic Respiratory Questionnaire to assess symptomatic improvement and found similar improvements after treatment with both the TIPC and doxycycline pleurodesis; the number of patients with improvement was not reported and therefore could not be included in this combined outcome statistic. Quality of life assessments were infrequently included, but 46/46 patients from two studies cited improvement from the TIPC. 27,28 Spontaneous pleurodesis was also defined in a variety of ways. In general, these studies used the term spontaneous pleurodesis to describe enduring or long-term achievement of pleural symphysis allowing for TIPC removal without recurrence of the pleural effusion, although some patients chose to keep the catheters in place despite achievement of pleurodesis. Spontaneous pleurodesis occurred in 430/943 patients, for an overall rate of 45.6%, with an average time to pleurodesis of 52 days. The catheter was removed due to pleurodesis in 381/808 patients (47.1%). Recurrence of effusion after initial control was reported in 50/651 patients (7.7%), and 33/652 patients (5.1%)
DISCUSSION
The presence of a MPE portends a poor prognosis for cancer patients, with a median survival of 4.0 months; 34 the American Joint Committee on Cancer (AJCC) staging system for lung cancer has recently been revised to reflect this. 35 Given the limited life expectancy, treatment for MPE is often individualized to each patient's clinical characteristics and goals of care. While repeated therapeutic thoracentesis may provide sufficient symptomatic relief in some patients, others have a pleural effusion that is so large and rapidly accumulating that a more permanent intervention is required. Due to his high operative risk, the patient with metastatic lung cancer described in the case presentation was not a good candidate for chemical or mechanical pleurodesis, the traditional standard of care for MPE. 1,36 Data on the use of the TIPC for the treatment of MPE have been accumulating over the past 15 years. Prior reviews of the TIPC 37-40 have included only a few select articles and focus on the data from the largest centers. To our knowledge, this report, which draws studies from multiple databases and includes a total of 1,370 patients, is the most rigorous and complete review published on this topic.
Although the results are limited by the low quality of the available evidence, as discussed below, our systematic review suggests that the TIPC may improve symptoms for patients with MPE and does not appear to be associated with major complications. Symptom relief was variably defined in the studies that reported it, and only 1 study 6 used validated scales; when combined, 628/657 patients (95.6%) experienced some degree of improvement in their symptoms, although the magnitude of improvement cannot be determined. Spontaneous pleurodesis occurred in 430/943 (45.6%); this is not the primary goal of TIPC placement, but does allow removal of the catheter and may contribute to symptom relief.
[Table 2 footnotes: *TIPC inserted under local anesthesia unless otherwise noted. †Trapped lung is characterized by a thick, fibrous peel on the visceral pleura that prevents full reexpansion and pleural apposition after thoracentesis, leading to rapid reaccumulation of fluid and chronic pleural effusions. Abbreviations: MPE, malignant pleural effusion; NS, not specified; RCC, renal cell carcinoma; TIPC, tunneled indwelling pleural catheter; VATS, video-assisted thoracoscopic surgery.]
Interpretation and application of these results must be done with caution given the limitations of our study. Most importantly, the data come almost exclusively from uncontrolled, often retrospective, case series. In the widely used GRADE system, case series are categorized as very low quality evidence. 7 These studies were not blinded, patients were not randomized to treatment, and outcomes were inconsistently reported. Even among the outcomes that were reported with some regularity, degree or severity was often not delineated. There is also potential for publication bias, especially at less-experienced centers that have used the TIPC without as much success. Furthermore, although most of the largest studies in this review explicitly stated that they included consecutive patients who received the TIPC, 9 of the 19 studies did not state this, thus raising the possibility that some patients may have not been included in their series. Potential confounding factors, such as concurrent treatment with opioids or chemotherapy, were not reported. Finally, there is potential for bias in the way our systematic review was conducted. We attempted to minimize this by using a protocol for study inclusion and exclusion and by having two authors separately abstract the data.
While this study provides a suggestion that the TIPC may be useful in the treatment of patients with MPE, there is little evidence comparing the TIPC to traditional pleurodesis, and the patient population in which the TIPC could be best utilized remains undefined. For example, it is possible that patients with a longer survival may experience more TIPC complications due to longer catheter permanence time and would benefit instead from early mechanical or chemical pleurodesis. It is also unknown whether the TIPC may be more efficacious in effusions due to certain cancers. Warren et al. found that tumor type, the presence of malignant cells on pleural fluid cytology, and trapped lung are all predictive of pleural symphysis. 33 Interestingly, there may also be a role for pleurodesis agents delivered through the TIPC if catheter drainage alone does not initially lead to spontaneous pleurodesis and the patient's clinical situation and goals warrant this additional treatment. [41][42][43] Few studies have included analysis of the cost effectiveness of the TIPC. One recent analysis by Olden et al. found the TIPC to be slightly more expensive than talc pleurodesis ($9,012 versus $8,171 USD) with similar effectiveness. 44 However, sensitivity analysis suggested that TIPC may be cost effective for patients with a prognosis of 6 weeks or less. Putnam et al. found that early mean hospital charges were significantly lower in patients receiving a TIPC placed in the outpatient setting as compared to those who were treated as an inpatient with either chest tube or TIPC. 22 As the biggest long-term cost of the TIPC is that of the disposable vacuum drainage bottles, some of this expense could be reduced with use of alternate drainage equipment. For example, Al Halfawy and Light reported the feasibility of using a reusable plastic pump for draining pleural fluid, with a cost of only $3 compared to $400 per month for the disposable vacuum bottles. 21 In summary, although experience with using the TIPC for MPE is becoming more widespread, there is little existing high-quality evidence comparing the efficacy and safety of the TIPC to other available treatments. Prospective randomized studies comparing the TIPC to pleurodesis are needed before the TIPC can be definitively recommended as a first-line treatment of MPE. To best aid providers in decision making regarding the use of the TIPC, future studies should include validated measures of symptom control and quality of life, and should address characteristics of patients most likely to benefit from the TIPC.
The coal miner presented above underwent TIPC placement as a palliative option for management of his recurrent pleural effusion. The TIPC provided relief of his chest pain and shortness of breath. He continued to live at home in the care of his family, without additional hospitalizations. He required one additional outpatient visit to unclog the catheter. As was his wish, the patient was able to attend his granddaughter's college graduation, and he died 11 weeks after TIPC placement with the catheter still in place.
Fault Detection in Induction Motors using Functional Dimensionality Reduction Methods
The implementation of strategies for fault detection and diagnosis on rotating electrical machines is crucial for the reliability and safety of modern industrial systems. The contribution of this work is a methodology that combines the conventional strategy of Motor Current Signature Analysis with functional dimensionality reduction methods, namely Functional Principal Components Analysis and Functional Diffusion Maps, for detecting and classifying fault conditions in induction motors. The results obtained from the proposed scheme are very encouraging, revealing potential future use not only for real-time detection of the presence of a fault in an induction motor, but also for the identification of a greater number of fault types through offline analysis.
Introduction
The implementation of strategies for incipient fault detection and diagnosis on Rotating Electrical Machines (REM) is very important for the reliability and safety of modern industrial systems. It allows interruptions of continuous production processes to be planned as scheduled stops, thus reducing maintenance time and the associated economic losses [1,2].
The diagnosis of faults present in a REM comprises the detection, identification and isolation of an anomaly, which can be achieved by using the information obtained about the operating state of the equipment or drive [3].
As a result, it is possible to consider fault diagnosis as a pattern recognition problem with respect to the condition of a REM [4]. To effectively diagnose faults in a REM, it is essential to distinguish between failures originating from the machine itself, whether electrical or mechanical, and those corresponding to the associated load [5].
In recent decades, with the advancement of communication technologies and the inclusion of control devices in REM, non-invasive fault detection and diagnosis techniques based on the use of electrical variables have been studied more than those that use acoustic emissions, lubrication analysis, thermography and vibrations. The latter have been the most widely used techniques for some time, in which different methods are used for analysis, the most common being the Fast Fourier Transform (FFT) in the frequency domain, and wavelet analysis and empirical mode decomposition in the time-frequency domain [6].
The techniques based on electrical variables have focused mainly on methods based on Motor Current Signature Analysis (MCSA), instantaneous power analysis, and Park's vector analysis, among others [1]. In this way it is possible to detect a large number of failures in induction motors, which are associated with the presence of certain components of the frequency spectrum. It should be noted that, when performing these types of analyses, an expert in the area is required to carry out the task according to the information contained in the processed signals.
Its increasing use is due to the fact that the monitoring of the electrical variables to be analyzed is carried out without modifying the state or structure of the electrical machine [3]. In particular, the sensors are placed on a control panel, thus avoiding problems with difficult access to the equipment to be analyzed and even reducing the risks for the operator in dangerous environments.
In an ideal scenario, all this process should be done in an automatic way.
Although machine learning methods fit this purpose nicely, we should take into account that it is very difficult to obtain a labeled sample in this context. Thus, supervised models should be discarded. As an alternative, dimensionality reduction and clustering methods could help to analyze and group the available electrical motor signals.
It is worth mentioning that electrical variables are functions, and hence they can be studied from a functional perspective using Functional Data Analysis (FDA; [7]) techniques. The novelty of this article resides precisely in applying functional dimensionality reduction methods, namely Functional Principal Component Analysis (FPCA; [8]) and Functional Diffusion Maps (FDM; [9]), to detect and identify faults due to broken bars and low-frequency load oscillations in induction motors (IMs). Moreover, a complete methodology is proposed, covering everything from data processing to fault detection and fault diagnosis.
The rest of the paper is organized as follows. In Section 2 the state of the art in fault detection applying machine learning techniques is presented. Section 3 briefly reviews the theory under the functional dimensionality reduction techniques used in this work. Section 4 explains the data collection and the applied experimental methodology, and shows the obtained results. Finally, Section 5 presents some conclusions of this work, as well as possible research lines to further extend it.
Machine learning and induction motor fault detection
In order to automate fault detection and diagnosis, significant progress has been made in the use of data processing, specifically data mining based on Artificial Intelligence methods. This has been achieved through the implementation and integration of different machine learning techniques and computational statistics, as reported in recent literature [5,6,10,11,12,13,14]. These advancements enable non-experts in the field to analyze systems without detailed knowledge of the model being studied, resulting in simpler diagnoses [3].
In [11], promising diagnostic techniques based on machine learning are presented, with a focus on their attributes. In [12], an analysis of the advantages and disadvantages of various intelligent diagnostic techniques used in REM is presented, including decision trees, support vector machines, principal component analysis, and genetic algorithms. This analysis was carried out based on the most common mechanical and electrical failures observed in these machines.
As explained in Section 1, we will focus on unsupervised machine learning methods, specifically on dimensionality reduction techniques. In this field, [14] identifies Principal Component Analysis (PCA) as one of the most promising machine learning techniques and highlights it as a method that provides interesting results. Its use allows the identification of the most significant failure characteristics and the extraction of underlying patterns, all while reducing the data dimension.
Several case studies have focused on the use of PCA together with various machine learning techniques for diagnosing REM failures, with particular emphasis on the diagnosis of broken bars. Such is the case of [15], where an advanced signal processing method based on wavelet analysis, PCA, and multilayer neural networks is presented. This technique enables the extraction of suitable characteristics, reduces the correlation between the extracted features, and determines the magnitude of a failure due to broken bars in an IM.
On the other hand, [16] provides a comparative analysis of three methods for detecting broken bars in an induction motor based on electrical signal analysis, particularly MCSA, MSCSA and PCA. Additionally, [17] presents a method for the detection of broken bars through the use of PCA on the three stator currents, later used in the calculation of the Q statistic that determines the presence or absence of the fault.
As mentioned before, due to their nature, electrical variables can be studied from a functional perspective using FDA techniques which, to the best of our knowledge, have not previously been applied to this problem.
Functional dimensionality reduction methods
Dimensionality reduction methods are statistical techniques in which high-dimensional data are represented in lower-dimensional spaces, for example by capturing most of the variance of the original data in the new space, as PCA does, or by reflecting lower-dimensional underlying manifolds on which the original data lie, as manifold learning techniques intend [18].
When data are functions, they live in infinite-dimensional spaces, and hence finding low-dimensional representations becomes essential. Reliable low-dimensional representations of the data, specifically in two or three dimensions, are beneficial in real-world problems for visualization, description and general exploration purposes [19]. Moreover, these representations can be used as feature vectors in supervised machine learning algorithms which require multivariate inputs [20].
The most popular technique is FPCA. However, non-linear dimensionality reduction methods such as FDM and Isomap [21] have been gaining popularity in recent years and have outperformed FPCA in some data applications.
In the next subsections, we will briefly introduce FDA, and present the theoretical framework for FPCA and FDM.
Functional data analysis
Functional Data Analysis [7] studies samples of functions $x_1(t), \dots, x_N(t)$, where $t \in J$, $J$ is a compact interval, and each $x_i(t)$ is the observation of an independent functional variable $X_i$ identically distributed as $X$. It is usual to assume that the functional variable $X$ is a second-order stochastic process taking values in $L^2(J)$, the space of square-integrable functions with inner product $\langle x, y \rangle = \int_J x(t)\,y(t)\,dt$. The inner product allows the notion of distance between functions to be introduced through the $L^2$-norm $\|x\|^2 = \langle x, x \rangle = \int_J x^2(t)\,dt$. Therefore, in the $L^2$ space, the distance between two functions can be calculated as the norm of their difference, $\|x - y\|$.
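As a minimal illustration of how these $L^2$ quantities are computed in practice for functions observed on a common grid, the inner product and distance can be approximated with a quadrature rule. The sketch below uses the trapezoidal rule; the function names and the example curves are illustrative assumptions, not taken from the original study.

```python
import numpy as np

def l2_inner(x, y, t):
    """Approximate the L2 inner product <x, y> = integral of x(t) y(t) dt
    for functions observed on a common grid t, using the trapezoidal rule."""
    return np.trapz(x * y, t)

def l2_distance(x, y, t):
    """Approximate the L2 distance ||x - y|| between two discretized functions."""
    return np.sqrt(l2_inner(x - y, x - y, t))

# Hypothetical example: two curves sampled on [0, 1]
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * t)
y = np.sin(2 * np.pi * t + 0.1)
print(l2_distance(x, y, t))
```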
Functional PCA
Functional Principal Component Analysis [8] is a linear functional dimensionality reduction method that generalizes multivariate Principal Component Analysis [22] to the case where data are functions. In FPCA, the infinite-dimensional random functions are projected onto the lower-dimensional subspace generated by the eigenfunctions of the covariance operator.
Let $x_1(t), \dots, x_N(t)$ be the realizations of a stochastic process over a compact domain. The sample variability is characterized by the spectral decomposition of the sample covariance operator $\Gamma$, defined by $(\Gamma \xi)(s) = \int_J \gamma(s, t)\,\xi(t)\,dt$, where $\gamma(s, t) = \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i(s) - \bar{x}(s)\bigr)\bigl(x_i(t) - \bar{x}(t)\bigr)$ is the sample covariance function. The directions $\xi_l$ of the FPCA projection onto an $L$-dimensional subspace are chosen such that they maximize the variance of the projection; more specifically, they are the solution of the following problem:
$$\max_{\xi_l} \; \langle \Gamma \xi_l, \xi_l \rangle \quad \text{s.t.} \quad \langle \xi_l, \xi_k \rangle = \delta_{lk}, \; k \leq l, \; l = 1, \dots, L.$$
The solutions of this problem are obtained by solving the eigenequation
$$\Gamma \xi_l = \lambda_l \xi_l, \qquad (1)$$
where $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ are the eigenvalues and $\xi_1, \xi_2, \dots$ are the eigenfunctions, which form an orthonormal basis for the space of functions being analyzed. To apply this method, there are two possible strategies to approach the eigenanalysis problem (1): discretizing the functions or expressing the functions in a known basis. In both cases, we convert the continuous functional eigenanalysis problem into an approximately equivalent matrix eigenanalysis task. The whole procedure of the first strategy, which will be the one used in the experiments, is shown in Algorithm 1. Here, FPCA is equivalent to a standard multivariate PCA with the metric defined by the quadrature weight matrix. For more information about the other strategy, please refer to [23].
Algorithm 1 — FPCA over discretized functions.
Input: the sample of functions evaluated on a common discretization grid.
2: Compute the weight matrix W from quadrature weights using some numeric integration rule.
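A minimal sketch of this discretization strategy is given below, assuming the $N$ functions are observed on a common grid and using trapezoidal quadrature weights for the metric; the variable names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def fpca_discretized(X, t, L=2):
    """FPCA of N discretized functions (rows of X) observed on a common grid t.
    Returns the first L scores, eigenfunctions (on the grid) and eigenvalues."""
    N, M = X.shape
    # Trapezoidal quadrature weights defining the L2 metric on the grid
    w = np.zeros(M)
    dt = np.diff(t)
    w[:-1] += dt / 2
    w[1:] += dt / 2
    w_half = np.sqrt(w)

    Xc = X - X.mean(axis=0)                 # center the sample
    Y = Xc * w_half                         # metric-adjusted data: standard PCA now applies
    _, S, Vt = np.linalg.svd(Y / np.sqrt(N), full_matrices=False)
    eigvals = S[:L] ** 2
    eigenfunctions = Vt[:L] / w_half        # map eigenvectors back to the unweighted grid
    scores = Xc @ (eigenfunctions * w).T    # <x_i - mean, xi_l> approximated by quadrature
    return scores, eigenfunctions, eigvals
```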
Functional DM
Functional Diffusion Maps [9] is a nonlinear dimensionality reduction algorithm applied to functional data that extends multivariate Diffusion Maps (DM; [24,25]) to the functional domain. FDM seeks to identify low-dimensional representations of $L^2$ functions on nonlinear functional manifolds after defining a diffusion process on a normalized graph based on pairwise similarities between functional data.
In more detail, let $\mathcal{X} = \{x_1(t), \dots, x_N(t)\}$ be the realizations of a stochastic process over a compact domain. In this context, $\mathcal{X}$ is assumed to lie on a low-dimensional manifold embedded in $L^2$. To identify the underlying manifold, the initial stage of FDM involves building a weighted graph $G = (\mathcal{X}, K)$, where the graph vertices are the functions $x_i(t)$ and the weights $k_{ij}$ are obtained from a symmetric and pointwise positive $N \times N$ kernel matrix $K$. This kernel matrix defines a local measure of similarity within a certain neighborhood, so that outside the neighborhood the kernel quickly goes to zero. The standard kernel used to compute the similarity between functional data is the Gaussian kernel, $k_{ij} = \exp\bigl(-\|x_i - x_j\|^2 / (2\sigma^2)\bigr)$, where the size of the local neighborhood is determined by the hyperparameter $\sigma$. Alternatively, the Laplacian kernel can be used, defined as $k_{ij} = \exp\bigl(-\|x_i - x_j\| / \sigma\bigr)$. These types of kernels define a connected graph.

Once the matrix $K$ is obtained, the connected graph is normalized using a density parameter $\alpha \in [0, 1]$. This results in a new graph $G' = (\mathcal{X}, K^{(\alpha)})$ with weights $k^{(\alpha)}_{ij} = k_{ij} / (d_i^{\alpha}\, d_j^{\alpha})$. Here, $d_i = \sum_{j=1}^{N} k_{ij}$ is the degree of vertex $i$, and the power $d_i^{\alpha}$ approximates the density of each vertex. Now we can create a Markov chain on the normalized graph whose transition matrix $P$ is defined by $p_{ij} = k^{(\alpha)}_{ij} / d^{(\alpha)}_i$, with $d^{(\alpha)}_i = \sum_{j=1}^{N} k^{(\alpha)}_{ij}$. The transition matrix $P$ provides the probabilities of moving from node $i$ to node $j$ in a single step.

By taking powers of the matrix $P$, we can increase the number of steps taken in the random walk. This defines a diffusion process that reveals the global geometric structure of $\mathcal{X}$ at different scales. We can then define a diffusion distance $D_T$ based on the geometric structure of the obtained diffusion process,
$$D_T^2\bigl(x_i(t), x_j(t)\bigr) = \sum_{k=1}^{N} \frac{\bigl(p^T_{ik} - p^T_{jk}\bigr)^2}{\pi_k},$$
where $p^T_{ik}$ denotes the $T$-step transition probability and $\pi$ is the stationary distribution of the chain. This metric measures the similarity between data points as the connectivity, or probability of transition, between them. Therefore, $T$ plays the role of a scale parameter, and $D^2_T(x_i(t), x_j(t))$ will be small if there exist many paths of length $T$ that connect $x_i(t)$ and $x_j(t)$.

Spectral analysis of the Markov chain allows an alternative formulation of the diffusion distance that uses the eigenvalues and eigenvectors of $P$. As detailed in [24], even though $P$ is not symmetric, it makes sense to perform its spectral decomposition using its left and right eigenvectors, $\phi_l$ and $\psi_l$, such that $P = \sum_{l \geq 0} \lambda_l \psi_l \phi_l^{\top}$. The eigenvalue $\lambda_0 = 1$ of $P$ is discarded, since $\psi_0$ is a vector with all its values equal to one. The other eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_{N-1}$ tend to 0 and satisfy $\lambda_l < 1$ for all $l \geq 1$. Thus, the diffusion distance can be approximated by the first $L$ eigenvalues and eigenvectors of this representation of $P$,
$$D_T^2\bigl(x_i(t), x_j(t)\bigr) \approx \sum_{l=1}^{L} \lambda_l^{2T} \bigl(\psi_l(i) - \psi_l(j)\bigr)^2.$$
Finally, the diffusion map is given by
$$\Psi_T\bigl(x_i(t)\bigr) = \bigl(\lambda_1^T \psi_1(i), \dots, \lambda_L^T \psi_L(i)\bigr),$$
satisfying that the diffusion distance in the original space can be approximated by the Euclidean distance of the $\Psi_T$ projections in $\mathbb{R}^L$:
$$D_T\bigl(x_i(t), x_j(t)\bigr) \approx \bigl\|\Psi_T(x_i(t)) - \Psi_T(x_j(t))\bigr\|.$$
The complete methodology of the method is presented in Algorithm 2.
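The following is a compact sketch of the diffusion-map construction described above, applied to functions discretized on a common grid, with the Gaussian kernel and the $L^2$ distance approximated by quadrature. The parameter names mirror Algorithm 2, but the implementation details (kernel scaling, use of the symmetric conjugate matrix for the eigendecomposition) are assumptions rather than the authors' code.

```python
import numpy as np

def functional_diffusion_maps(X, t, L=2, T=1, alpha=0.0, sigma=1.0):
    """Embed N discretized functions (rows of X, observed on grid t) in R^L."""
    N = X.shape[0]
    # Pairwise squared L2 distances approximated by the trapezoidal rule
    D2 = np.array([[np.trapz((X[i] - X[j]) ** 2, t) for j in range(N)]
                   for i in range(N)])
    K = np.exp(-D2 / (2.0 * sigma ** 2))            # Gaussian kernel (assumed form)

    # Density normalization with parameter alpha
    d = K.sum(axis=1)
    K_alpha = K / np.outer(d ** alpha, d ** alpha)
    d_alpha = K_alpha.sum(axis=1)

    # Symmetric conjugate of the Markov transition matrix P = diag(d_alpha)^-1 K_alpha
    S = K_alpha / np.sqrt(np.outer(d_alpha, d_alpha))
    eigvals, V = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    eigvals, V = eigvals[order], V[:, order]
    psi = V / np.sqrt(d_alpha)[:, None]             # right eigenvectors of P

    # Diffusion map coordinates, skipping the trivial eigenvalue lambda_0 = 1
    return (eigvals[1:L + 1] ** T) * psi[:, 1:L + 1]
```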
Data collection
For this particular research, several tests were carried out in an experimental test bench made up of two induction motors mechanically coupled through a Gummi type flexible coupling. There it was possible to reproduce the failure conditions due to broken bars and low-frequency load oscillation.
The motor under test is connected directly to the grid, while the motor used as load is connected to a variable speed drive with torque control; the characteristics of the experimental setup are summarized in Table 1.

Algorithm 2 — FDM.
Input: L (embedding dimension), T (steps in the random walk), α (density parameter).
Output: Ψ_T(X), the embedded functional data.
The algorithm constructs G = (X, K), where K is a positive and symmetric kernel matrix with entries k_ij, normalizes the graph with the density parameter α, forms the transition matrix P, and obtains its eigenvalues {λ_l} and right eigenvectors {ψ_l} to build the diffusion map coordinates.

The recording and storage of electrical variables were carried out using a digital recorder with 4 channels, with data storage capacity through an internal memory in the recorder itself. In Figure 1, a schematic diagram of the laboratory assembly is shown (test bench and measurement equipment).
The analyzed dataset is made up of 10 measurements done with the IM without anomalies (Healthy Motor, HM) per load percentage. Each measurement was repeated twice per experimental setup, so a total of 50 HM measurements were compiled. To these we add 10 measurements for the motor with 1 broken bar (1BB), 10 measurements with 2 broken bars (2BB), 10 measurements with 3 broken bars (3BB), and 40 measurements with low-frequency sinusoidal load oscillations. In this way, a total of 120 current measurements are available for data analysis using the proposed functional dimensionality reduction techniques.
From these current signals, together with two line voltage signals, instantaneous active power (IAP) signals are obtained for the cases of motors with faults, as in [26]. Therefore, 70 IAP measurements are also available.
To summarize, Table 2 shows the labels used for each type of induction motor diagnostic throughout our experiments and their frequencies.
Experimental methodology
This section presents the experimental framework developed in this research, which uses functional dimensionality reduction methods to detect and diagnose faults in induction motors.
The current and instantaneous active power data will be analyzed, including the raw signals, their derivatives and their signatures, and the results obtained by each technique will be compared. Finally, a scheme for detecting and diagnosing failures in IMs will be proposed based on the obtained results. The low-frequency load oscillation configurations in Table 2 are: sinusoidal signal load at 1 Hz, 1 mV (SS 1 A); at 1 Hz, 1.5 mV (SS 1 B); at 2 Hz, 1 mV (SS 2 A); and at 2 Hz, 1.5 mV (SS 2 B), each with 10 current and 10 IAP measurements.
Data preprocessing
First of all, we align the data using the first x-axis cut-off point. Then, the datum corresponding to one broken bar with load at 20% was dropped, as it was identified as an outlier by expert knowledge. This implies working with a total of 119 current signals and 69 IAP signals. We scale the data to the range [−1, 1]; by normalizing the signals in this way, it is possible to compare them independently of the associated load percentage. Since the managed data are periodic, we just consider the first 750 steps as a representative sample. Thus, we deal with the curves in the first 9.3625 ms.
Next, we estimate the derivatives of both current and IAP signals by finite differences [27] and the current and active power signatures by applying the Fast Fourier Transform (FFT; [28]) method to the normalized data.
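A hedged sketch of this preprocessing chain is shown below. The helper names and the sampling step are illustrative assumptions, not the exact values or code used in the study.

```python
import numpy as np

def preprocess_signal(x, n_keep=750):
    """Scale a periodic signal to [-1, 1] and keep the first n_keep samples."""
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    return x[:n_keep]

def derivative(x, dt):
    """Finite-difference estimate of the signal derivative."""
    return np.gradient(x, dt)

def signature(x, dt):
    """Amplitude spectrum ('signature') of the signal via the FFT."""
    freqs = np.fft.rfftfreq(len(x), d=dt)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    return freqs, amp

# Hypothetical usage on one current measurement sampled with step dt
# x = preprocess_signal(raw_current); dx = derivative(x, dt); f, sig = signature(x, dt)
```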
In Figure 2, examples of current data and instantaneous active power data obtained from motors with faults due to broken bars and from motors with low-frequency load oscillations are shown. Specifically, these data were obtained from the motor with two broken bars and from the motor with 1 Hz, 1.5 mV load oscillations, with the load at 80%. The figure shows the preprocessed signals, as well as the derivatives of the signals and the FFTs. In addition, for the current data, signals obtained from the healthy motor at the same load percentage are also shown.
By visually analyzing the current data, we observe differences between data from faulty motors and data from the healthy motor. These differences are clearer in the case of FFTs, which show peaks around the fundamental frequency, equal to 50 Hz, when motors have faults.
Analyzing the IAP data, we barely find differences in the preprocessed signals and derivatives that allow us to distinguish the type of motor fault. However, considerable magnitude peaks are observed at low frequencies in the FFTs of the sinusoidal signals, which do not appear in the FFTs of motors with broken bars.
It is worth noting that, when visually discriminating healthy motors from faulty motors using the current FFTs, and when discriminating motors with low-frequency load oscillation faults using the FFTs of the IAP, it is not necessary to resort to analyzing the instantaneous reactive power (IRP).
Experimental configuration
FDM requires an initial analysis to identify suitable parameters that reveal patterns of interest or clusters in the data. Table 3 shows the hyperparameters obtained after a visual analysis of different parameter configurations. Based on the results, we can conclude that the Gaussian kernel gives better outcomes for current data, while the Laplacian kernel appears to be more appropriate for signatures.
Experimental results
In these experiments, the embeddings obtained from FPCA and FDM are analyzed for both current and instantaneous active power data in the time and frequency domains. Initially, we will apply the functional dimensionality reduction techniques discussed previously to the raw current signal and its derivative.
Following this, we will apply these methods to the current signatures, and the same analysis will be repeated for the instantaneous active power.
Analysis of the current signal and its derivative
The goal of the first experiment is to detect motors with faults using dimension reduction methods over the current signals and their derivatives. Figure 3 shows the scatterplots of FPCA and FDM scores over them.
While the FPCA embedding over the current signals fails for data with high load percentages, the FPCA embedding obtained for the current signal derivatives groups data from the healthy motor and data from the motors with faults into separable clusters. Therefore, there is more discriminatory information in the growth of the current data than in its amplitude. The influence of load on the embeddings can also be observed, as the scores are grouped by load instead of by type of failure. Even so, it is worth highlighting the scores corresponding to motors with 3 broken bars, which appear at the edges of the embedding.
In summary, we have obtained an embedding that centralizes all the components from the healthy motor and excludes those from faulty motors. This is a novel contribution to the state of the art in induction motor fault detection, since it uses the current signal itself instead of its signature and is obtained in an automated way.
This suggests that, by using an unsupervised analysis, it is possible to identify data coming from faulty motors. However, accurately identifying the type of malfunction remains a challenging task that requires additional information to be achieved. In the subsequent experiments, we will only use data from faulty motors in order to distinguish the type of fault.
Analysis of the current signature
In the second experiment, we will verify whether the proposed dimensionality reduction techniques allow us to determine the type of fault present in the motors. We will examine whether these techniques can replace the visual analysis of the current signature that is common in the induction motor fault detection literature [29]. However, none of the obtained embeddings makes it possible to separate faults due to broken bars from faults due to low-frequency load oscillations.
In conclusion, dimensionality reduction methods applied to the current signature are not sufficient to diagnose the type of failure in induction motors. We need to use more information and we resort to the instantaneous active power.
In the same way as in the previous experiments, we will analyze the power signal, its derivative and finally, its signature.
Analysis of the instantaneous active power signal and its derivative
In the third experiment, the instantaneous active power signals and their derivatives are analyzed. Figure 5 shows the scatterplots of FPCA and FDM scores over the IAP signals and their derivatives.
None of the obtained embeddings allows the type of fault to be diagnosed. However, the embedding obtained by FPCA on the IAP signal derivatives stands out as the one presenting the most separated data, with data from motors with faults due to broken bars grouped at the bottom, data from faulty motors with load oscillations at 2 Hz in the middle, and data from faulty motors with load oscillations at 1 Hz at the extremes. Nevertheless, the clusters appear very close and even overlap.
Analysis of the instantaneous active power signature
After the last experiment, we will try to diagnose the type of motor fault by analyzing the IAP signatures. The FPCA and FDM embeddings obtained are displayed in Figure 6.
FPCA allows motors with failures due to broken bars to be differentiated from motors with failures due to load oscillations, also distinguishing oscillations at 1 Hz from those at 2 Hz. However, identifying the number of broken bars of the motors remains a challenge, as those motors appear grouped together in the embedding. Another important contribution is that, by using the active power signature, the embedding obtained gives much less importance to the motor load percentage than the rest of the embeddings. By comparison, the FDM embedding also keeps the above classes separated, but the separation distance between classes is smaller.
Detection and diagnosis algorithm
Based on the previous results, it can be concluded that functional dimensionality reduction methods perform well on electrical signals from induction motors, which enables the proposal of an algorithm for detecting and diagnosing induction motor faults. Figure 7 depicts the proposed fault detection algorithm. First of all, current signals are collected from an induction motor, along with line voltage signals, which allow the computation of instantaneous active power data. Then, a preprocessing step is performed, consisting of aligning the data, scaling to the range [−1, 1], and keeping only the first 750 steps as a representative sample. Next, FPCA is applied to the time-domain derivative of a single stator current in order to distinguish healthy motors from motors with faults. Finally, in the event that the motor exhibits faults, FPCA is applied in the frequency domain, specifically to the frequency spectrum of the instantaneous active power signal, to distinguish between motors with broken bar faults and those with low-frequency load oscillation faults, as well as their various subtypes.
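As an illustration only, the two-stage scheme of Figure 7 could be organized along the following lines. The nearest-centroid decision rule, the precomputed FPCA bases and the thresholds are assumptions introduced for the sketch; the paper itself relies on the unsupervised embeddings and their visual analysis rather than on a fixed classifier.

```python
import numpy as np

def diagnose_motor(current_deriv, iap_signature,
                   basis1, healthy_centroid, healthy_radius,
                   basis2, fault_centroids, fault_labels):
    """Two-stage nearest-centroid sketch of the scheme in Figure 7.

    basis1 / basis2: FPCA eigenfunction matrices obtained offline (stage 1 on
    current derivatives, stage 2 on IAP signatures). Centroids, radius and labels
    summarize the clusters found by the unsupervised analysis; all are assumed
    to be precomputed. Scores are taken as plain dot products for simplicity.
    """
    s1 = basis1 @ current_deriv                        # stage-1 FPCA scores
    if np.linalg.norm(s1 - healthy_centroid) <= healthy_radius:
        return "healthy"
    s2 = basis2 @ iap_signature                        # stage-2 FPCA scores
    idx = int(np.argmin(np.linalg.norm(fault_centroids - s2, axis=1)))
    return fault_labels[idx]                           # e.g. "broken bars", "SS 1 A", ...
```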
Conclusions
Implementing strategies to detect and diagnose early faults in rotating electrical machines online is crucial for ensuring the reliability and safety of modern industrial systems. Their execution allows interruptions of continuous production processes to be planned as scheduled stops, thus reducing maintenance time and associated economic losses [1,2]. The diagnosis of faults present in a REM comprises the detection, identification and isolation of an anomaly, which can be achieved by using the information obtained on the operating state of the equipment or drive [3]. As a result, it is possible to consider fault diagnosis as a pattern recognition problem with respect to the condition of a REM [4].
The proposed unsupervised scheme consists of using functional dimensionality reduction techniques, specifically FPCA and FDM, to detect and identify the presence of faults due to broken bars and low-frequency load oscillations in induction motors. An analysis of a single stator current and an analysis of the instantaneous active power of the IM are carried out. In this analysis, the raw data, their derivatives and their signatures are used.
The results obtained from the proposed scheme are very encouraging, revealing a potential use in the future not only for real-time detection of the presence of a fault in an IM, but also for the identification of a greater number of fault types through an offline analysis. Both FPCA and FDM give very similar results, although FPCA is more competitive for these data. We have seen that FPCA on the derivative of the current signal allows motors with faults to be distinguished from healthy motors, and that applying this technique to the instantaneous active power signature makes it possible to diagnose the type of motor failure: broken bars or low-frequency load oscillations.
Successful management of SARS-CoV-2 acute respiratory distress syndrome and newly diagnosed acute lymphoblastic leukemia
Key Points: Standard chemotherapy can still be used for a new diagnosis of acute lymphoblastic leukemia in patients with SARS-CoV-2. Corticosteroids can be given safely to patients with SARS-CoV-2 presenting with acute respiratory distress syndrome and ALL.
Introduction
Although recommendations are emerging for the general management of oncology patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), 1,2 there is little experience in patients with newly diagnosed acute lymphoblastic leukemia (ALL). Providers may have concern about initiating multiagent chemotherapy in patients with SARS-CoV-2, particularly corticosteroids, which are an essential part of induction regimens, but raise the theoretical possibility of delayed viral clearance. We describe our experience of successfully initiating therapy for an adolescent diagnosed with ALL, while managing severe SARS-CoV-2 infection marked by respiratory failure, systemic inflammation, and autoimmune hemolytic anemia (AIHA).
Case description
An 18-year-old adolescent male presented to our emergency room with pallor, having experienced cough and fever for 1 week. He tested positive for SARS-CoV-2 by polymerase chain reaction. He had multiple known exposures: his father had recently died of SARS-CoV-2 and his mother and siblings were symptomatic at home. Initial laboratory evaluation was significant for the following counts: white blood cells (WBCs), 105 × 10³/μL with 95% blasts; hemoglobin (Hgb), 3.7 g/dL; and platelets, 30 × 10³/μL. Flow cytometry of peripheral blood confirmed the diagnosis of B-cell ALL.
He had low-grade fever, normal oxygen saturation, and no respiratory distress at time of presentation, with the remainder of his laboratory results notable for hyperuricemia and mildly elevated lactate dehydrogenase. He received supportive care with hyperhydration and allopurinol, given his high risk for tumor lysis syndrome, and broad-spectrum antibiotics (piperacillin/tazobactam). Shortly after admission, he developed persistent high fevers and intermittent hypoxia with sudden respiratory decompensation requiring mechanical ventilation and hypotension requiring vasopressor support on hospital day 3 (HD 3). A chest radiograph showed bibasilar opacities consistent with moderate acute respiratory distress syndrome (ARDS). 3 In the setting of worsening SARS-CoV-2 disease, his WBC count downtrended, with hydration but no specific antileukemic therapy, to 8 × 10³/μL; in addition, he developed transfusion-resistant anemia and thrombocytopenia and had a mild increase in bilirubin. He was found to have AIHA with positive immunoglobulin G and C3 polyspecific direct antiglobulins, but a negative eluate and no alloantibodies. He also had rising ferritin and elevated interleukin 6 (IL-6) and soluble IL-2 receptor (sIL2R) levels (Figure 1), which raised concern for hemophagocytic lymphohistiocytosis (HLH) vs a SARS-CoV-2-related cytokine storm.
Due to the severity of his condition, antileukemic therapy was initiated with methylprednisolone alone on HD 5. His clinical condition steadily improved thereafter. He was extubated on HD 7, showed resolution of fevers by HD 8, and his inflammatory markers decreased (Figure 1). His response to transfusions improved, and Hgb and platelet count stabilized. Induction chemotherapy for ALL was then continued with vincristine on HD 11 and daunorubicin on HD 18. Polyethylene glycol (PEG)-asparaginase was held due to an elevated lipase level of 179 U/L (1.9× the upper limit of normal), thought to be due to SARS-CoV-2 infection. He tolerated the chemotherapy well and was discharged home on HD 18. He received further chemotherapy (vincristine, daunorubicin) per the planned regimen. 4 PEG-asparaginase was given 10 days after hospital discharge once his lipase level returned to normal. His end-of-induction bone marrow showed complete remission with minimal residual disease of 2.7% by flow cytometry. JAK1 mutations (p.Arg724Cys, p.Arg724His), JAK2 mutations (p.Arg938Gln, p.Arg867Gln), loss of IKZF1, PAX5, and CDKN2A/2B, and cytokine receptor-like factor 2 rearrangement were found, consistent with Philadelphia chromosome-like ALL. He has subsequently been enrolled in a phase 2 study of ruxolitinib with chemotherapy (clinicaltrials.gov NCT02723994).
Methods
We collected the patient's medical records including clinical course, laboratory parameters, treatment record, and outcome.
Results and discussion
Patients receiving myelosuppressive cancer therapy have a theoretically increased risk for SARS-CoV-2 and more severe disease, although currently reported data paint a mixed picture. 5 Reviews of pediatric oncology centers found a relatively low overall number of infected patients, with mild infection most common. 6,7 Whether treatments should be altered in pediatric cancer patients remains unclear, although emerging guidelines suggest postponing high-intensity treatments. 1,2 In this case, intensive remission induction chemotherapy was initially delayed due to concern for potential worsening of SARS-CoV-2 disease by exacerbating the patient's already immunocompromised state in the setting of ALL. However, the patient developed progressively worsening symptoms, laboratory markers suggestive of cytokine storm, and rapid clinical deterioration to respiratory failure and shock. Restoration of this patient's immune function was not expected without initiation of cytotoxic chemotherapy. He additionally showed evidence of immune dysregulation with persistent fevers, pancytopenia, and elevated serum ferritin, IL-6, and sIL2R. Notably, as the patient's clinical condition deteriorated, his high initial WBC count decreased drastically with hyperhydration alone. This finding was concurrent with the development of transfusion-unresponsive AIHA and thrombocytopenia. This constellation of findings could be attributed not only to SARS-CoV-2 infection but also to possible secondary HLH from untreated acute leukemia. 8,9 Additionally, secondary HLH may have contributed to SARS-CoV-2-induced lung injury. 10 Reports from China at the onset of the SARS-CoV-2 pandemic indicated that patients with more severe disease had higher ferritin, sIL2R, IL-6, IL-8, IL-10, and tumor necrosis factor α (TNF-α). 11 Although lymphopenia is well described with SARS-CoV-2 infection, 12,13 this patient's case suggests that the hyperinflammatory response seen in severe SARS-CoV-2 infection may involve immune dysregulation leading to consumptive thrombocytopenia and AIHA, which should be further explored among patients infected with SARS-CoV-2. AIHA in this patient gave concern for breakdown of immunologic tolerance. 14 As there was evidence of both impaired and dysregulated immune function with evolving ARDS and shock, the decision was made to initiate corticosteroid therapy.
Significant systemic inflammation with elevated sIL2R and IL-6 levels, as seen in our patient, has led to the investigation of targeted cytokine blockade for patients with severe SARS-CoV-2 disease, with particular interest in targeting IL-6 with tocilizumab. [15][16][17] This was considered in our patient; however, in light of concurrent ALL and concern for possible secondary HLH, the decision was made to initiate therapy with corticosteroids and reserve tocilizumab as second-line therapy. The mortality benefit of steroid use in patients with severe SARS-CoV-2 disease was unclear, with mixed outcomes in observational studies at the time of our patient's treatment. [18][19][20][21] Current recommendations for the management of severe SARS-CoV-2 infection do not support routine systemic corticosteroid use outside of a clinical trial, but do allow for clinical discretion with regard to moderate ARDS. 3,[22][23][24][25] In this patient's case, however, corticosteroids were an integral part of his antileukemic therapy and management of AIHA and HLH, and had anti-inflammatory potential; thus, benefits were determined to outweigh the risks. Our patient's clinical condition and inflammatory marker elevation rapidly improved with steroid treatment, allowing him to continue standard ALL treatment. This suggests that in the setting of active SARS-CoV-2 infection in leukemia, systemic corticosteroids can be safely given without delay as a bridge to more myelosuppressive therapy.
Apart from supportive care and steroids, the patient did not receive any other SARS-CoV-2-directed therapy. He showed remarkable improvement upon initiation of steroids with resolution of fevers, decrease in inflammatory markers and D-dimer, stabilization of transfusion requirements, and rapid respiratory improvement with extubation possible within 3 days. Given his rapid recovery, additional antileukemic therapy with a modified 4-drug combination 4 was given, first with addition of vincristine and subsequently daunorubicin and PEG-asparaginase once the patient showed further clinical recovery from his SARS-CoV-2 infection. He tolerated the remainder of induction and achieved complete remission. We would thus recommend a similar stepwise approach in initiating systemic chemotherapy in patients with newly diagnosed ALL complicated by SARS-CoV-2 infection, utilizing steroids along with nonmyelosuppressive chemotherapy upfront, with addition of more toxic chemotherapy once the critical window of clinical deterioration from SARS-CoV-2 passes.
Qualitative and Quantitative Content Determination of Macro-Minor Elements in Bryonia Alba L . Roots using Flame Atomic Absorption Spectroscopy Technique
Introduction
Bryonia alba L. belongs to Bryonia, one of the smallest genera in the family Cucurbitaceae, which consists of 12 species distributed throughout Europe and West Asia. 1,2 It has been traditionally used in the treatment of different diseases such as cough, frontal pain, inflammation of serous tissues, peritonitis, pneumonia, jaundice, typhoid, rheumatism, brain disorders with serous exudation, and as a heart tonic. 2 Nowadays, the interest in chemical composition analysis of medicinal herb products is growing owing to the continuing developments in nutrition and in biochemical surveying and mineral prospecting. 3,4 Additionally, studies related to therapeutic plants aim not only to characterize the active components found in plants, but also to provide scientific support for their therapeutic properties. 5 Macro, micro and trace elements are known to play a vital role in biological functions in plants and in human metabolic reactions. Moreover, trace elements play an important role in the formation of bioactive chemical constituents in medicinal herb plants and thus are responsible for their medicinal and toxic properties accordingly. 6,7 Several techniques such as voltammetry, atomic absorption spectrometry (AAS), 8 inductively coupled plasma optical emission spectrometry (ICP-OES), 9 X-ray fluorescence (XRF), 10 differential pulse cathodic stripping voltammetry (DPCSV) 11 and instrumental neutron activation analysis (INAA) 12 are normally used for the determination of trace elements in medicinal herbal plants. Because of its specificity, sensitivity, high precision, simplicity, rapid analysis, low cost, low detection limit, and wide linear range, AAS is the most widely recommended instrument used in analytical procedures for the trace metal analysis of complex biological samples. 3 AAS methods are considered direct aspiration determinations, as they are accomplished as single-element analyses and are relatively free of interelement spectral interferences. In other words, the use of special light sources produced by the cathode lamp, emitted from excited atoms of the same element of interest, together with specific spectrum selection, allows the quantitative determination of individual components of a multielement mixture. 13 AAS, as an accurate and rapid technique, was chosen for the present work. The most commonly used methods for the sample preparation of plant species are dry ashing, wet ashing and microwave-assisted treatment. 14 In the present work, nineteen minor and trace nutrient elements (Fe, Si, P, Al, Mn, Mg, Pb, Ni, Mo, Ca, Cu, Zn, Na, K, Sr, Co, Cd, As, and Hg) in Bryonia alba L. roots growing in Ukraine were determined by AAS coupled with a dry ashing method.
Reagents and solutions
Standard stock solutions with a concentration of 1000 mg L⁻¹ of each individual metal element were used to prepare the required concentrations by dilution with a 1% (v/v) nitric acid solution. The diluted standard solutions were used to build the calibration curves. Metal element standards were purchased from Sigma-Aldrich (St Louis, MO, USA). Analytical reagent grade concentrated nitric acid (70%) and hydrogen peroxide (30%) were purchased from Sigma-Aldrich (St Louis, MO, USA). In all experiments carried out, the glass/plastic containers were cleaned by soaking in 10% (v/v) HNO3 for at least 24 h and rinsing with distilled water prior to use. All other chemicals used were of analytical grade. Ultrapure deionized water obtained from a Milli-Q water purification system (Millipore, Bedford, MA, USA) was used for preparing the solutions and for all dilutions.
Instrumentation
An AAS instrument (Perkin Elmer AAnalyst 700 model AAS) with a deuterium background corrector was used for the determination of Fe, Si, P, Al, Mn, Mg, Pb, Ni, Mo, Ca, Cu, Zn, Na, K, Sr, Co, Cd, As, and Hg. Pb, Cd, and Ni were determined by HGA graphite furnace using high-purity argon, while the other measurements were carried out in an air/acetylene flame. The operating parameters for the working elements were set according to the recommendations of the manufacturer.
Plant materials
Fresh roots of Bryonia alba L. were collected from the Crimean Peninsula region, which is located south of the Ukrainian region of Kherson and west of the Russian region of Kuban. The collection was done in autumn during 2014-2015. The temperature recorded at harvesting time ranged between 15-20 °C. Botanical identification/authentication was performed at Bogomolets National Medical University, Kiev, Ukraine.
Macroscopic techniques for identification/authentication of Bryonia alba L.
It is well known that authentication/identification of raw material is the essential starting point in developing a botanical product. However, inherent chemical variability will certainly be observed with any botanical. Additionally, each step of harvest, storage, processing, and formulation may significantly change the quality and uniformity of the final product, whether by preserving the desired marker components or by eliminating unwanted contaminants. 15 Therefore, methods to assure quality control in manufacturing and storage are required tools not only to ensure optimal efficacy but also safety of these products. Additionally, such controls are important for the evaluation of toxicological, pharmacological, or clinical studies involving botanical dietary supplements. 16
Macroscopic study
The macroscopic study of the morphological description of the plant parts was conducted with the naked eye and with the aid of a magnifying lens.
Sample preparation
The collected roots were washed thoroughly with tap water followed by deionized water and then allowed to dry in an oven at 50-60 °C. The sample was dried to constant weight (to within 0.0005 g). A one gram sample was accurately weighed and ground with the aid of a Power Plus 1300 food processor (Braun, Germany) for 3 min (instead of using a mortar and pestle, for better homogeneity, as this allows faster decomposition and thus more precise results). Then it was sieved through a 0.5 mm diameter sieve. The obtained powder of the plant material was stored in polyethylene bags in a dry and dark place at room temperature until used.
Dry ashing procedure
One gram of sample was transferred into a porcelain crucible. The muffle furnace temperature was gradually increased from room temperature to 450 °C over 1 h. The sample was redried for 1 h in the oven, cooled, and reweighed. These steps were repeated at 1 h drying intervals until the variation in the released water was less than 0.05%. The obtained sample was then ashed for about 8 h until a gray or white ash residue was obtained. The residue was dissolved in 5 mL of HNO3 (25%, v/v) and, if necessary, the mixture was heated slowly to dissolve the residue. The digestion solution was then heated on an electric hot plate at 150 °C until evaporated to near dryness. The residue was filtered through Whatman filter paper, transferred into a volumetric flask, and made up to 25 mL with 3% HNO3. A blank digestion was prepared in the same way. This procedure was adopted from the work of Soylak et al. 17 The Bryonia alba L. root sample was then analyzed for the target elements, including Cd, As, and Hg, using AAS equipped with flame and graphite furnace atomizers. The graphite furnace was used for the determination of trace and ultra-trace concentrations (Pb, Ni, Mo, Co, Cd, As, Hg). La(III) solutions (when determining Mg or Ca) and CsCl solutions (as an ionization buffer in measurements of K, Mg and Na) were added to both sample and standard solutions in order to overcome chemical interferences in the flame upon determination. 18 The operating conditions of the AAS instrument were as recommended by the manufacturer. Data were rounded off appropriately based on the value of the standard deviation from measurements conducted in triplicate.
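For illustration, the conversion from an instrument reading to the reported units can be sketched as follows. The dilution factors match the 25 mL final volume and 1 g sample described above, but the blank-correction step and the example reading are assumptions introduced for the sketch, not values from the study.

```python
def element_content_mg_per_100g(reading_mg_per_L, blank_mg_per_L=0.0,
                                final_volume_mL=25.0, sample_mass_g=1.0,
                                dilution_factor=1.0):
    """Convert an AAS reading (mg/L) into mg per 100 g of dry sample."""
    net = (reading_mg_per_L - blank_mg_per_L) * dilution_factor
    mg_in_digest = net * final_volume_mL / 1000.0      # mg of element in the digest volume
    return mg_in_digest / sample_mass_g * 100.0        # normalize to 100 g of dry weight

# Hypothetical example: a net reading of 5.2 mg/L would correspond to
# 5.2 * 0.025 / 1 * 100 = 13 mg per 100 g dry weight.
print(element_content_mg_per_100g(5.2))
```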
Results and Discussion
Some of the trace elements known to be essential to our body are As, Co, Cu, Fe, Mn, Ni, Si, and Zn, and the essential major elements are Ca, K, Na, and Mg. Thus, the different trace elements in different medicinal plants each have a definite role in the functioning of our body. The roles of the detected elements are given below. In our preliminary trials, we examined the accuracy of the current work by analyzing spiked samples at three concentration levels (low, medium and high), which were taken from the stock solution of each metal and spiked into a 100 mL Erlenmeyer flask containing a 1 g sample. The recoveries of all metals in the spiked samples ranged from 90 to 101%. The average results of the elemental analysis obtained by the AAS technique for Bryonia alba L. roots are shown in Table 1 in mg/100 g dry weight of the sample. It is to be noted that each result is an average of three independent measurements. As can be seen in this table, K, Ca, Mg, P, and Si are the most abundant elements in the roots, with concentrations ranging from 78 to 1170 mg/100 g dry weight of the sample. The concentrations of the trace elements Fe, Al, Zn, and Na were found in the medium range of 5.2 to 13 mg/100 g dry weight of the sample. Ni, Cd and Pb are toxic elements which occur naturally in plants as a result of uptake, generally in places with high concentrations caused by atmospheric and industrial fallout. 3 These were found at levels of <0.01 to 0.052 mg/100 g dry weight of the sample. Other elements such as Mo, Co, As, Hg, Mn, Cu, and Sr were found in the range 0.01 to 0.13 mg/100 g dry weight of the sample.
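The recovery check mentioned above follows the standard spike-recovery formula; a one-line sketch with hypothetical numbers is shown below.

```python
def recovery_percent(spiked_result_mg, unspiked_result_mg, amount_added_mg):
    """Spike recovery (%) = (found in spiked sample - found in unspiked sample) / added x 100."""
    return (spiked_result_mg - unspiked_result_mg) / amount_added_mg * 100.0

# Hypothetical example: 0.95 mg recovered after adding 1.0 mg to a sample
# in which 0.0 mg was measured gives a recovery of 95%.
print(recovery_percent(0.95, 0.0, 1.0))
```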
Iron (Fe)
Iron (Fe) is considered an essential element for the human body. It is the main component of myoglobin, hemoglobin, and a number of enzymes that play an important role in the oxygenation of red blood cells. It is needed for a healthy immune system as well as for energy production. It has been reported that severe iron deficiency results in anemia, which ranges from a fall in plasma ferritin to strict iron deficiency characterized by small red blood cells and a low hemoglobin concentration. 3,19 The daily requirement of Fe for a child is 10 mg/day, whereas for an adult it is 20 mg/day. The Fe concentration in the Bryonia alba L. roots sample analyzed was found to be 13 mg/100 g dry weight of the sample.
Manganese (Mn)
As is known, pyruvate carboxylase and superoxide dismutase are enzymes which contain manganese. 19 The manganese content found in the Bryonia alba L. roots sample was 0.13 mg/100 g dry weight of the sample.
Magnesium (Mg)
It has been reported that magnesium (Mg) is considered the most important mineral for stress relief, 3 although it has also been described as a non-essential element for living organisms. 20 The Bryonia alba L. roots sample was found to contain 105 mg/100 g dry weight of the sample. The concentrations of this element upon analysis of the roots of four medicinal plant samples by Dafalla (2014) ranged from 1.42 to 42 ppm. 20
Lead (Pb), Cadmium (Cd), and Nickel (Ni)
The Bryonia alba L. roots sample contains Pb <0.03, Cd <0.01, and Ni 0.052 mg/100 g dry weight of the sample. These elements are considered toxic in nature, and thus their presence at trace levels in the various medicinal plant samples analyzed may be due to pollution from industrial activities and automobiles. 19 Cadmium is considered very hazardous to human health. Additionally, it causes high blood pressure and damages the kidneys and liver. 21 The permissible limits set by the WHO 22 are 0.2 to 0.81 ppm for cadmium and 0.1 to 10 ppm for lead. Thus, the concentrations of cadmium and lead found in Bryonia alba L. roots were within these limits. In the work of Dafalla (2014), 20 who studied the roots of four medicinal plants, Cd concentrations varied from 1.8 to 14.4 ppm compared to the obtained results.
Calcium (Ca)
Calcium (Ca) plays a vital role in the absorption of dietary vitamin B, in the activation of enzymes such as pancreatic lipase, and in the synthesis of the neurotransmitter acetylcholine. The recommended daily allowance of Ca is 800 mg for adults and 500-1000 mg for children. Therefore, attaining a Ca level of practically one percent of the total diet would be difficult. 3 The concentration of Ca found in the Bryonia alba L. roots sample was 210 mg/100 g dry weight of the sample.
Sodium (Na) and Potassium (K)
The concentrations of sodium (Na) and potassium (K) in the Bryonia alba L. roots sample were found to be 13 and 1170 mg/100 g dry weight of the sample, respectively. Potassium is the most abundant of all the elements found. Usually, plants absorb potassium and sodium from the soil in the form of K and Na ions. Moreover, potassium is very important for enzyme activation, photosynthesis, water use efficiency, starch formation and protein synthesis. 21 The data obtained in our study indicate that the Bryonia alba L. roots sample is not deficient in potassium. Therefore, it is useful as a food source rich in K for humans, as it might help in cases of potassium deficiency. 3 A high concentration of K was also found in the roots of the four medicinal plants studied by Dafalla (2014), where the concentrations ranged from 400-842 ppm. 21
Zinc (Zn)
Zinc is considered one of the main components of over 200 enzymes, having both catalytic and structural roles, including alcohol dehydrogenase, ribonucleic polymerases, alkaline phosphatase, and carbonic anhydrase. 23 Scientific studies conducted on animals have shown that zinc deficiency occurring during pregnancy may cause developmental disorders in the offspring. 24 Low intake of zinc may cause coronary artery disease. The concentration of zinc in the Bryonia alba L. roots sample was 5.2 mg/100 g dry weight of the sample.
Copper (Cu)
Following zinc and iron, copper is considered the third most abundant trace element in the body. It has also been reported to be an important catalyst for iron absorption. Its deficiency may be a risk factor for cardiovascular disease. When overt copper deficiency occurs, symptoms such as neutropenia, cardiac disorders, osteoporosis and anemia may appear. 25 In the Bryonia alba L. roots sample, copper was found at 0.13 mg/100 g dry weight of the sample. Excess copper is toxic. The normal daily intake of copper is 2-5 mg per day. The permissible limit in edible plants set by FAO/WHO 26 in 1984 was 3.00 ppm. Thus, the copper content in Bryonia alba L. roots is less than the permissible limit. 21
Cobalt (Co)
Cobalt is one of the most important essential components of vitamin B12 and thyroid metabolism. 3 It is necessary in very small amounts in all mammals. It is used in the treatment of different types of cancer in humans and in the treatment of anemia. The intake of high amounts can cause heart disease. 13 Cobalt is present in the Bryonia alba L. roots sample at a very low concentration of about <0.03 mg/100 g dry weight of the sample.
Silicon (Si)
Silicon (Si) is one of the most abundant elements found in the earth's crust and is commonly distributed in nature in different forms. It is well known that Si is associated with bone structure and strength (osteoporosis), with reducing the risk of developing Alzheimer's disease, and with preventing collagen metabolism abnormalities. 27 The concentration of silicon in the Bryonia alba L. roots sample was 78 mg/100 g dry weight of the sample.
Phosphorus (P)
Phosphorus (P) is a constituent of more than 240 enzymes. Its deficiency in the organism is accompanied by multisystem dysfunction. Moreover, P is responsible for fetal development, sperm production, and the suitable function of the immune response. 20,28 The concentration of P in the Bryonia alba L. roots sample was 88 mg/100 g dry weight of the sample. According to Dafalla's work, 20 P concentrations were from 2.61 to 14.24 ppm.
Aluminum (Al)
Aluminum (Al) ions are considered toxic to most plants. They have negative effects, suppressing root growth and causing a series of abnormal metabolic effects. 29 Being an important herbal plant worldwide, Bryonia alba L. roots also contain Al at a concentration of about 7.8 mg/100 g dry weight of the sample, and can be considered an important source of dietary Al.
Molybdenum (Mo)
The molybdenum (Mo) concentration found in the Bryonia alba L. roots sample was <0.03 mg/100 g dry weight of the sample. Mo is required in trace amounts by the body; it is mostly present in the pancreas and plays a significant role in the production of insulin. Deficiency of Mo results in disorders of the liver, and the daily intake should not exceed 1.0 mg; beyond this level it is toxic. 30 When analyzed in the roots of four medicinal plants, namely Ocimum basilicum, Populus nigra, Taraxacum officinale, and Convallaria majalis, its concentration varied from 0.82 to 9.2 ppm, whereas most samples usually have concentrations between 0.51 and 0.72 ppm, which are considered the highest Mo concentrations. 20
Strontium (Sr)
Strontium (Sr) is an element which can be found in the environment over a large range of concentrations. The transfer of such a contaminant from the environment to plants will lead to contamination of the food pathway and, finally, of the human body. It then deposits in bones and teeth and can cause bone and renal diseases accordingly. 31 It has been reported by Brambilla et al. 32 that plants can be contaminated by three main pathways, namely leaf uptake, root uptake, and deposition of contaminants on plant aerial parts. 33 Additionally, contamination is also affected by the properties of the soil, such as organic matter content, ionic composition, and pH. 34 The concentration of strontium in the Bryonia alba L. roots sample was 0.78 mg/100 g dry weight of the sample.
Arsenic (As)
Arsenic (As) is a toxic pollutant in the environment, resulting from anthropogenic and natural sources. It has been reported that plants growing in As-contaminated soils accumulate As in their biomass. 35 Additionally, As may accumulate in soils and sediments owing to the use of arsenical pesticides, fertilizers, irrigation, the oxidation of volatile arsine in air, and the disposal of industrial, municipal and animal waste. 36 Once a plant is contaminated with arsenic, toxicity such as leaf chlorosis and necrosis occurs, and growth is also reduced. Different As species in plants show different toxicities, so it is important to quantify As in plants to better understand their metabolism. The concentration of arsenic in the Bryonia alba L. roots sample was <0.01 mg/100 g dry weight of the sample.
Mercury (Hg)
Mercury (Hg) is an extremely toxic trace metal pollutant. Bio-accumulation of Hg in plants and its subsequent entry into the food chain results in long-term health hazards. 37 The toxicity of Hg depends on its chemical state. Some forms of Hg are relatively non-toxic and have been used as medicines, e.g., for the treatment of syphilis. Speciation of mercury at trace and ultra-trace levels is a matter of current interest. The concentration of Hg in the Bryonia alba L. roots sample was <0.01 mg/100 g dry weight of the sample. In this study, the concentrations of K, Ca, Mg, P, and Si were found to be the highest, followed by the remaining elements at trace levels. Referring to the elemental concentrations in Table 1, the particular medicinal plant can be used as required by knowing the functional value of each element. Extra attention must be paid not to take medicinal plants containing large concentrations of the above elements daily for a long time. Regarding toxicity, the presence of arsenic in the Bryonia alba L. roots sample will not lead to any undesirable effect because its concentration is very low, as indicated by the World Health Organization Maximum Tolerable Daily Intake (WHO-MTDI, 2008) 38 value (2 µg/day/kg body weight). The low levels observed for Pb and Cd lead to the conclusion that the plant is grown in pollution-free soil, as elemental uptake by a plant mainly depends on the regional soil as well as climate conditions, and thus it could be used as a medicinal plant.
Conclusion
The different concentrations of elements present in Bryonia alba L. roots lead to the conclusion that these roots will have different specific roles in the treatment of different diseases. The results obtained from the present study provide vital data on the availability of diverse essential elements, which can be useful dietary information for designing value-added foods and for food biofortification. This study provides a comprehensive investigation of the contents of 19 trace elements in a Bryonia alba L. roots sample growing in Ukraine. The dry ashing method coupled with atomic absorption spectrometry was used for the determination of trace elements in Bryonia alba L. roots. The results indicated that Bryonia alba L. roots had a high content of K, Ca, Mg, P, and Si. The contents of the toxic heavy trace elements Cd and Pb were very low (<0.03 mg/100 g dry weight of the sample) and could not pose any threat to the consuming population. This technique is considered reliable for routine elemental analysis in a wide range of botanicals and dietary supplements.
Analytical procedure
AAS is a widely used technique for determining a large number of metals. In AAS, an aqueous sample containing the metal analyte of interest is aspirated into an air-acetylene flame, causing evaporation of the solvent as well as vaporization of the free metal atoms (atomization). Fe, Si, P, Al, Mn, Mg, Pb, Ni, Mo, Ca, Cu, Zn, Na, K, Sr, Co,
Table 1. The content of macro- and minor elements of Bryonia alba L. roots using flame atomic absorption spectrophotometry (n = 3).
|
2018-04-03T01:07:33.860Z
|
2016-06-30T00:00:00.000
|
{
"year": 2016,
"sha1": "fd3a8908a908c557757f960f94635a86010799e9",
"oa_license": "CCBY",
"oa_url": "https://apb.tbzmed.ac.ir/PDF/APB_1387_20160227132357",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fd3a8908a908c557757f960f94635a86010799e9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
244023138
|
pes2o/s2orc
|
v3-fos-license
|
Comparative effectiveness of virtual reality (VR) vs 3D printed models of congenital heart disease in resident and nurse practitioner educational experience
Background Medical trainees frequently note that cardiac anatomy is difficult to conceive within a two dimensional framework. The specific anatomic defects and the subsequent pathophysiology in flow dynamics may become more apparent when framed in three dimensional models. Given the evidence of improved comprehension using such modeling, this study aimed to contribute further to that understanding by comparing Virtual Reality (VR) and 3D printed models (3DP) in medical education. Objectives We sought to systematically compare the perceived subjective effectiveness of Virtual Reality (VR) and 3D printed models (3DP) in the educational experience of residents and nurse practitioners. Methods Trainees and practitioners underwent individual 15-minute teaching sessions in which features of a developmentally typical heart as well as a congenitally diseased heart were demonstrated using both Virtual Reality (VR) and 3D printed models (3DP). Participants then briefly explored each modality before filling out a short survey in which they identified which model (3DP or VR) they felt was more effective in enhancing their understanding of cardiac anatomy and associated pathophysiology. The survey included a binary summative assessment and a series of Likert scale questions addressing usefulness of each model type and degree of comfort with each modality. Results Twenty-seven pediatric residents and 3 nurse practitioners explored models of a developmentally typical heart and tetralogy of Fallot pathology. Most participants had minimal prior exposure to VR (1.1 ± 0.4) or 3D printed models (2.1 ± 1.5). Participants endorsed a greater degree of understanding with VR models (8.5 ± 1) compared with 3D Printed models (6.3 ± 1.8) or traditional models of instruction (5.5 ± 1.5) p < 0.001. Most participants felt comfortable with modern technology (7.6 ± 2.1). 87% of participants preferred VR over 3DP. Conclusions Our study shows that, overall, VR was preferred over 3DP models by pediatric residents and nurse practitioners for understanding cardiac anatomy and pathophysiology. Supplementary Information The online version contains supplementary material available at 10.1186/s41205-022-00164-6.
Introduction
Medical trainees frequently note that cardiac anatomy and pathophysiology are difficult to fully conceive in a two dimensional (2D) framework [1,2]. While it is certainly possible to extrapolate three dimensional (3D) relationships from 2D representations, this conceptual leap is best achieved with an acuity developed over time, not readily available to the novice learner. Such an understanding is especially important in pediatric cardiology in which there is an intricate relationship between the spatial orientation of cardiac anatomy and associated physiology. In an attempt to strengthen this understanding, various 3D modalities have been developed including 3D digital models (3DD), 3D printed models (3DP), Virtual Reality (VR), Augmented Reality (AR), and Mixed reality (MR) [3][4][5]. VR is an immersive digital experience in a simulated environment separate from the real world typically using a head-mounted display or headset. AR enhances or augments a real world environment with superimposed digital information such as data and images while MR extends AR to allow for interaction between the virtual and real world components of the combined environment [6]. While the use of each of these modalities is growing substantially, there is sparse data on the comparative value of each modality.
3DP has found multiple applications in pediatric cardiology education. Several studies have incorporated 3DP into curricula to compare learner response compared to traditional teaching modalities [1,[7][8][9]. These studies have consistently found improved learner satisfaction in the domains of knowledge acquisition and structural conceptualization. Of increasing interest has been whether such subjective assessments translate into improved objective performance. One study compared pre-test and post-test performance for a control group (traditional teaching) versus intervention group (3DP) in relation to knowledge acquisition about vascular rings [9]. This study found improved performance for the intervention group. Another study, similarly structured, found no difference in post-scores for medical student groups exposed to traditional 2D vs 3D printed models of tetralogy of Fallot [1]. The authors speculated, however, that the lack of improvement may have been due to questions that focused on pre-formed medical knowledge versus the spatial orientation assessment that 3D printed models would be more likely to enhance. While 3DP has found several landing points in medical education, surgical preparation, and clinical reinforcement, 3DP has important limitations including cost and limited cutting planes. Significant interest has therefore developed in alternative 3D dimensional representations including Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) [5,10]. Sacks and Axelrod (2020) connect adult learning theory to the potential pedagogical value of VR by noting that adults learn best when they are in control of their learning environment, a framework that is congruent with the interactive space of VR [11]. In a study that compared understanding of congenital heart disease among residents and medical students exposed to VR versus conventional 2D display, improved diagnostic scores were noted among the intervention group [2]. Building on the capacity of VR to shorten preparation times, another study demonstrated that applying VR directly to raw MRI data without intermediate segmentation steps could shorten preparation time to 5 min compared to the 8 hours for 3DP [12].
A comparative study in the field of neurosurgery found VR was more effective than 3DP and traditional 2D representations in enhancing understanding of craniovertebral junction deformities [13]. Similarly, in a comparative study including multi-level trainees, VR angiograms have been found to outperform 3DP in regard to resolution, ability to zoom and ease of manipulation, while 3DP had the advantage in depth perception [14]. In the context of congenital heart disease, a comparative study between VR and 3DP found similar subjective assessments of visual clarity between 3DP and VR but greater perceived instructive potential for VR over 3DP among participants composed exclusively of radiographers, sonographers and radiologists [15]. To our knowledge, there has not been a study directly comparing the utility of VR versus 3DP in CHD education among medical trainees. Our study sought to make this comparison by giving trainees an opportunity to interact with both representations back to back followed by an assessment of their relative effectiveness in enhancing their understanding of normal heart architecture as well as a common CHD lesion, tetralogy of Fallot.
Materials and methods
As an initial step to create the 3D models, cardiac CT/MRI data for a developmentally typical heart (15 year old patient) and tetralogy of Fallot (15 year old patient) were identified from our institutional cross sectional imaging database. Raw DICOM data from either MRI or CT was loaded into MIMICS (version 19, Materialise, Leuven Belgium) and segmented to label the blood pool and myocardium. Objects were generated and exported to 3-MATIC (version 11, Materialise, Leuven Belgium) for the following steps: wrapping, island removal, smoothing, exterior hollowing, Boolean union (blood pool's derived shell with the myocardium), vessel trimming to provide a visually clear and unambiguous heart, and slicing into parts to ensure that the goal features of anatomy would be easily visualized (Fig. 1). More than one color was used, but colors were divided along opening planes, rather than by anatomical components, to limit potential visual distraction away from the defect.
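For readers without access to the commercial MIMICS/3-MATIC toolchain used here, the sketch below illustrates the same general idea (thresholding a cardiac volume to a blood-pool mask, extracting a surface, and exporting an STL) using open-source Python libraries. The threshold value, file paths, and smoothing settings are placeholders, not the values used in this study, and a real segmentation would involve substantial manual editing.

```python
# Hedged sketch of a DICOM-volume -> STL surface pipeline using open-source
# tools (not the MIMICS/3-MATIC workflow described in the text).
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

# Load a DICOM series (the directory path is a placeholder).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("path/to/dicom_series"))
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)          # shape: (slices, rows, cols)
spacing = image.GetSpacing()[::-1]              # reorder to match array axes

# Simple intensity threshold to approximate the contrast-enhanced blood pool
# (placeholder value; the study used interactive segmentation instead).
blood_pool_mask = volume > 200

# Extract a triangulated surface with marching cubes.
verts, faces, normals, _ = measure.marching_cubes(
    blood_pool_mask.astype(np.uint8), level=0.5, spacing=spacing)

# Wrap into a mesh, lightly smooth it, and export for printing or VR viewing.
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
trimesh.smoothing.filter_laplacian(mesh, iterations=5)
mesh.export("blood_pool.stl")
print(f"Exported mesh with {len(mesh.faces)} faces")
```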
IRB approval was obtained in order to use patient data. Patients with the above congenital heart disease were identified from our CT/MR database, generating the .STL file. The appropriate model was then created.
For these hearts, cut planes were determined to ensure that the anatomical features were abundantly clear and unambiguous to the viewer with minimal visual exploration. The 3D model data was then overlaid on the native data to assess for accuracy. Following cut-plane selection, models were scaled to be of similar size and columnar punchouts were created on the cut faces. This facilitated post-printing embedding of magnets to allow the models to "snap" together.
STL models were first printed on a ZPrinter® 250 printer (Z-Corp, Cambridge, MA) with cyanoacrylate infiltration. As thin sections such as valve components or vessel branches remained fragile, models were reprinted with multi-jet fusion (MJF 580, Hewlett-Packard, Palo Alto, CA). Then, magnets were placed, the parts were selectively dyed with conventional fabric dye, and the models were employed for the described work. Segmentation and post-processing were performed by a trained and experienced pediatric cardiologist (SB) and imaging scientist (SDF) (Figs. 2 and 3).
The VR models were derived from the same models created from the CT/MRI data used to form the 3D printed models. These models were viewed using the VR/AR interface of zSpace software (zCORE, version 5.0, San Jose, CA). The zSpace® system consists of a central processing unit (CPU), a 23.6-inch, 1080p high-definition liquid crystal 3D stereoscopic display screen, a 3-button stylus with integrated haptic technology, and a set of polarized eyeglasses with reflective sensors for tracking cameras (Fig. 4a-b). The stylus could manipulate a virtual slicing tool which projected through the VR model to obtain multiple planes. As the zSpace system does not use a head-mounted immersive display but a projected image viewed through glasses, there is some overlap with classical AR platforms in which a digital image is superimposed on the real world. We retained the categorization of VR given the primary focus on the virtual image over the real background.
To achieve randomization, participants signed up for time slots unrestricted and ungrouped by any participant characteristics. Each consecutive participant was then assigned to be taught using either the VR models or the 3D models in an alternating pattern. Following consent, trainees and practitioners underwent individual 15-minute teaching sessions with us where we demonstrated features of a developmentally typical heart as well as a congenitally diseased heart (tetralogy of Fallot) using both VR and 3DP. The participants were trained in the same way regardless of whether they were being shown a developmentally typical or pathological heart. The 3DP demonstration included identification of key structures along the pre-set slicing planes. The VR demonstration included a brief demonstration of how to manipulate the VR model as well as how to use the slicing tool to obtain multiple cuts through each model. Participants then briefly explored each modality individually before filling out a short survey (Additional file 1) in which they identified which model (3DP or VR) they felt was more effective in enhancing their understanding of cardiac anatomy and associated defects. The survey included a binary summative assessment and a series of Likert scale questions addressing usefulness of each model type and degree of comfort with each modality. We deliberately chose to keep the questions concise and straightforward as the scope of this study was limited to initial impressions of understanding based on each modality. Responses were compared using 2-way paired t-tests/ANOVA or non-parametric tests based on distribution. Univariate regressions were performed to determine associations. In addition, Pearson correlation coefficients were calculated using covariance and standard deviation data to determine strength of relationships. All statistical analyses were performed using SPSS 19.0 (SPSS Inc., Chicago, IL). Statistical significance was defined as p < 0.05.
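A minimal sketch of the kind of analysis described above is given below, assuming the Likert ratings are stored as per-participant arrays. The variable names and example data are hypothetical placeholders, and the actual study used SPSS rather than Python; the binomial test at the end is an added illustration, not an analysis reported by the authors.

```python
# Hedged sketch: paired comparison of Likert ratings (VR vs 3DP vs traditional)
# and a Pearson correlation, analogous to the analysis described in the text.
# The data below are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
vr = np.clip(rng.normal(8.5, 1.0, n), 1, 10)           # perceived understanding with VR
printed = np.clip(rng.normal(6.3, 1.8, n), 1, 10)      # perceived understanding with 3DP
traditional = np.clip(rng.normal(5.5, 1.5, n), 1, 10)  # traditional 2D instruction
tech_comfort = np.clip(rng.normal(7.6, 2.1, n), 1, 10)

# Paired t-test between the two 3D modalities (each participant rated both).
t_stat, p_val = stats.ttest_rel(vr, printed)
print(f"VR vs 3DP: t = {t_stat:.2f}, p = {p_val:.4f}")

# Comparison across all three conditions (approximated here with a one-way ANOVA).
f_stat, p_anova = stats.f_oneway(vr, printed, traditional)
print(f"ANOVA across modalities: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pearson correlation between baseline technology comfort and the VR rating.
r, p_r = stats.pearsonr(tech_comfort, vr)
print(f"Comfort vs VR rating: r = {r:.2f}, p = {p_r:.3f}")

# Binomial test of the summative preference (26 of 30 preferred VR).
p_binom = stats.binomtest(26, 30, 0.5).pvalue
print(f"Preference for VR (26/30): p = {p_binom:.4f}")
```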
Results
Twenty-seven pediatric residents and 3 nurse practitioners explored models of a developmentally typical heart and tetralogy of Fallot pathology (n = 30). The pediatric residents consisted of 7 interns, 12 second year residents and 8 third year residents. Overall, participants endorsed a greater degree of understanding with VR models (8.5 ± 1) compared with 3DP (6.3 ± 1.8) or traditional models of instruction (5.5 ± 1.5), p < 0.001 (Table 1).
"Traditional models of instruction" refers to how participants are typically instructed on such subject matter in prior teaching sessions using 2D schematics as a subjective comparison point to the 3D modalities comprising this study. Most participants identified minimal prior exposure to VR (1.1 ± 0.4) or 3-D printed models (2.1 ± 1.5). Based on broad minimal exposure and sample size, no adjustment was made for previous exposure to VR or 3DP. Most participants expressed comfort with the use of modern technology at baseline (7.6 ± 2.1). "Modern technology" was not explicitly defined but implicitly alluded to the digital tools that form part of modern daily lived experience. By level of training, 4/7 (57%) interns preferred VR while 11/12 (92%) of second year residents and 8/8 (100%) of third year residents held a similar preference for VR (Fig. 5).
In the summative assessment, 87% (n = 26) expressed a preference for Virtual Reality models compared to 13% (n = 4) for 3DP. In addition, participants offered narrative elaboration on their preferences ( Table 2).
Discussion
Medical education within Pediatric Cardiology is increasingly recognizing that extrapolation of 3 dimensional structures from 2 D models, while possible and trainable, may not be the optimal way to teach trainees about the heart. Given the close relationship between spatial orientation and physiology that characterizes Pediatric Cardiology, effective instruction must provide dynamic visual representation. 3DP and VR have arisen as potential tools in this effort and the preponderance of evidence to date suggests benefits in regard to learner engagement [1,7,[16][17][18]. Less robust evidence exists to compare the relative value of these representations. This question is important to address as the two approaches vary significantly in regard to questions of cost, preparation time, and portability which are key factors in the wider adoption of these approaches in curricula. It was this gap in educational efficacy between 3DP and VR that our study was designed to begin to address. By systematically comparing trainee experience with each modality side by side, a meaningful assessment was obtained to help guide further training efforts and studies.
The results were somewhat surprising in how definitively they skewed toward VR versus 3DP (87% vs 13% summative preference). We hypothesized that there would be a significant number of participants for whom the tactile and haptic qualities of 3DP models would make them preferable to VR models. For the few who in fact had this preference, these factors were mentioned. Also noted, however, was the limitation of the pre-determined cutting planes. In contrast, VR had a slicing tool which offered multiple planes in virtually any orientation and was repeatedly cited as an appealing factor in narrative comments (Table 2). This ability to direct the learning experience more precisely is what a previous study on virtual skills learning identified as "presence" and "agency" [19]. However, the grey scale of the Virtual Reality model was mentioned as a limitation compared to the two-colored 3DP models, suggesting there is a clarifying role for color differentiation regardless of modality. Other software interfaces have color options, which will likely improve satisfaction even more.
An additional consideration in comparing these modalities is the effect of prior exposure, whether direct, as with VR/3DP in CHD education, or indirect, as in other settings such as video game usage. Related to this exposure question is Roger and Cohen's (2020) discussion of generational learning, in which they advocate for a framework that is best suited to the current generation of learners and includes technology [20]. In our study, participants generally had limited prior exposure to either VR (1.1 ± 0.4) or 3DP (2.1 ± 1.5) but felt generally comfortable with modern technology (7.6 ± 2.1). We hypothesize that the minimal prior VR/3DP exposure limits the effect of this exposure on the preference of VR over 3DP; similarly, comfort with daily modern technology may translate into increased ease with VR technologies or, alternatively, may be less applicable to the novel spatial challenges of these modalities. Novel features like pen wand navigation may have contributed to the appeal of VR.
Digging deeper into what it means to "like" or "prefer" a modality raises additional questions: did intrinsic parts of the educational process, such as learning to use the VR wand, offer an internal reward that made preference more likely? Less ambiguous is that the more experienced trainees were essentially unanimous in their VR preference. This could reflect less of a need or desire for dimensional data on the part of early learners than of more expert learners.
The educational potential of Virtual Reality is certainly being explored in a number of other fields as well with potentially transferable principles. In nursing, for example, an intervention group who were taught a procedure using VR were able to perform more of these procedures in an hour compared to the control group [21]. However, these gains were not sustained 2 weeks after the initial study suggesting that some of the benefits that VR imparts may require tech "boosters" to be sustained. VR is also being employed in pharmacy education where dynamic applications are being explored such as tracking a drug as it proceeds through the body observing visually how it is changed at each stage [22]. Such dynamic 4D tracking can be applied to real time analysis of cardiac structures as described in a recent technological innovation review [23]. In the orthopedic domain, a study examining the impact of VR and 3D models on preoperative planning for humeral fracture repair found the use of these modalities led to shorter operative time and less blood loss than conventional methods [24].
Of critical importance in the ongoing evaluation of these modalities is to consider both objective effectiveness and feasibility. In a recent study examining the impact of VR on participant understanding of atrioventricular canals, no difference was found in post-test scores between the control group (desk-top computer) and intervention group (VR). However, the VR group did report a better learning experience and engagement level. Almost counterintuitively, the VR group also had a stronger correlation between their perceived strength of knowledge and their actual performance, suggesting that this modality may have a role in bridging the gap between perceived knowledge and actual knowledge [25]. In a counterexample, a study looking at the relationship between participant confidence of correctness and actual correctness in the virtual environment of a pre-surgical planning session found the correlation was low [26]. This finding may be related to the challenges of measuring depth and features in VR. There continues to be a need for rigorous studies that evaluate objective improvement in knowledge acquisition and spatial conceptualization, which can be difficult to capture. The controlled study developed by Su et al. (2018), examining the impact of 3D models in a medical student curriculum, is a promising example [18]. By asking both subjective questions as well as fact-based and spatial conceptualization questions in the post-test, this study was able to demonstrate improvement in knowledge acquisition more rigorously. In regard to 3DP, a recent review highlights the need to systematically examine if there are certain groups who may benefit more from such modalities [7].
Having demonstrated effectiveness, the final hurdle for the wider use of such modalities is feasibility. 3DP models are expensive and time consuming to prepare [17]. VR, depending on the interface, can also involve significant cost but lower technology iterations exist. If such factors as cost can be addressed, VR holds further promise given shareability. Such technological nimbleness and ability to share remotely is critical in an age where we witnessed a physical interaction standstill with the novel coronavirus (COVID-19). Further nuanced work can reveal where modalities like Augmented Reality (AR), which retains the capacity to still see the physical world, may be more optimal [27]. VR promises to not only make CHD education more effective, but may also have important global pediatric cardiology applications including the capacity to remotely train others in low and middle income countries (LMIC) where such work could be an important part of capacity building [28]. Such work would also form a robust response to the charge issued by the Lancet Independent Global Commission for the Education of Health Professionals for the twenty-first Century calling for "transformative learning" through the harnessing of technological innovations [29] .
Limitations
This was a single center study with a modest number of participants. Results were self-reported, and such perceptions of learning impact are by nature subjective. A comprehensive, objective post-test would more rigorously support improvements in learning and would be an important consideration for subsequent comparative studies. As the participants were primarily residents, these results are not necessarily generalizable to higher level trainees such as fellows. In addition, our group was primarily composed of medical residents; surgical residents may have derived a differential benefit from each representation. Again, a comparative study, perhaps with more complex lesions, would be a promising future line of inquiry in this direction. With the global phrasing of the questions in our questionnaire, our conclusions were limited regarding perceived understanding of specific features of, for example, the tetralogy of Fallot heart. Along with including more complex lesions in the future, it will be important to ask more specific anatomic questions to assess understanding of pathology more precisely. Finally, it was challenging to fully account for the effect of prior "VR-like" experience. While most participants endorsed minimal prior exposure to VR, VR-like experiences in non-educational settings such as video games and mobile applications may have influenced their preference for this representation. Future studies can more specifically ask about these experiences as well as track whether VR preference tracks along any demographic lines.
Conclusion
Our study shows that, overall, VR was preferred over 3DP by pediatric residents and nurse practitioners for understanding cardiac anatomy and pathophysiology. Future comparative studies with objective assessments as well as explorations into questions of feasibility such as cost and portability will help to illuminate the full pedagogical value of these modalities.
|
2021-11-12T16:16:43.333Z
|
2021-11-10T00:00:00.000
|
{
"year": 2023,
"sha1": "dc6c99c0def8d12b657f89937fb88154de7fbf5a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "b090c9aa8ea40cb649216cc381ced10ddcc30d63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
8096216
|
pes2o/s2orc
|
v3-fos-license
|
Neuronavigation: principles, clinical applications and potential pitfalls.
Localization of brain lesions and prevention of damage to vital structures are important in operations for brain pathologies. Despite the development of many techniques, including angiography, MRI, sonography, and frame-based stereotaxy, a more accurate localizing technique is still needed (1, 2). A step forward toward achieving this goal is the development of navigation systems. In this manuscript, we explain some clinical applications, advantages, and disadvantages of navigation systems and take a brief look at their future.
Localization and delineation of the extent of lesions are critical for safe maximal resection of brain and spinal cord tumors. Neuronavigation systems have been developed for image-guided neurosurgery to aid in the accurate resection of brain tumors (3,4). The basic principle of navigated surgery is to see the tip of a pointer in an image space. A relationship between the device space and the image space has to be established [5,6]. This operation is called registration or calibration of the navigation device. Basically, a transformation matrix (T) has to be calculated to map the coordinates of any point between the image and the device spaces. The aim of the transformation matrix is to create a linkage between digital image data and anatomical structures and, therefore, to provide improved 3-D orientation (7). Today's navigation systems provide approximately 2 mm accuracy (8). Stereoscopic navigation-controlled display of preoperative MRI and intraoperative 3D ultrasound is a new technology for minimally invasive image-guided surgery approaches in planning and guiding neurosurgery. Interactive stereoscopic visualization improves perception and enhances the ability to understand complex 3D anatomy [9,10]. In 1947, Spiegel and Wycis performed the first stereotactic thalamotomy in humans, using the commissura posterior or pineal body as an internal individual reference system (11,12). Functional operations with similar frames and techniques were introduced by Talairach in Paris in 1949 (13), by Riechert [14] in Freiburg, Germany, in 1952, and by Leksell [3] in Stockholm in 1949 for the treatment of extrapyramidal movement disorders, intractable pain, epilepsy, and psychiatric disorders. After the development of CT technology by Hounsfield in 1973 (15) and Cormack (16,17), based on mathematical solutions published by the Viennese mathematician Radon in 1917 [18], stereotactic coordinate-based calculation became applicable in the whole intracranial space, enlarging the field of indications to biopsies, interstitial brachytherapy, endoscopy, and localization of tumors for open surgery (4,19). Until the end of the 1980s, frame-based stereotaxy was the standard method for accurately localizing small intracranial lesions by introducing catheters into the tumors or for determining the tumor volume in space (20,21,22). Coordinate transformation of the selected target point between the image and the frame space was established using a localization frame. The idea behind using frameless, interactive, computer-aided surgery in navigation systems is to show in real time the position of the tip of an instrument in the corresponding images, without requiring a stereotactic frame for calculation. In Switzerland in 1988, Reinhardt was working on an armless navigation system which used a pointer emitting ultrasound as the source (23,24). Magnetic sources were also described later by Kato [25], and infrared light-emitting diodes (LEDs) as emitting sources by Zamorano [26]. Additional robotic capabilities were integrated into navigated microscopes by Giorgi (27) and Luber (28).
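The registration step described above, in which a transformation maps points between the image space and the device (patient) space, is commonly solved with a paired-point least-squares fit. The sketch below shows one standard way to do this (the SVD-based rigid fit often attributed to Arun/Horn). It is an illustrative implementation, not the algorithm of any particular commercial navigation system, and the fiducial coordinates are made up.

```python
# Hedged sketch: paired-point rigid registration between image space and
# device (patient) space via an SVD-based least-squares fit.
# Fiducial coordinates below are illustrative, not real data.
import numpy as np

def rigid_registration(image_pts: np.ndarray, device_pts: np.ndarray):
    """Return rotation R and translation t such that device ~= R @ image + t."""
    ci = image_pts.mean(axis=0)                   # centroid in image space
    cd = device_pts.mean(axis=0)                  # centroid in device space
    H = (image_pts - ci).T @ (device_pts - cd)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ ci
    return R, t

# Example: fiducials localized in the preoperative image (mm) ...
image_fiducials = np.array([[10.0, 20.0, 30.0],
                            [40.0, 25.0, 31.0],
                            [15.0, 60.0, 28.0],
                            [35.0, 55.0, 70.0]])
# ... and the same fiducials touched with the tracked pointer in the OR
# (generated here by applying a known rotation and translation).
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
device_fiducials = image_fiducials @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_registration(image_fiducials, device_fiducials)
# Fiducial registration error (FRE): how well the fit maps the fiducials.
fre = np.linalg.norm(image_fiducials @ R.T + t - device_fiducials, axis=1).mean()
print(f"Mean fiducial registration error: {fre:.3e} mm")
```

In practice the reported navigation accuracy (approximately 2 mm, as noted above) reflects not only this fiducial fit but also imaging resolution, tracking noise, and tissue movement.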
Clinical applications
The aim of image-guided neurosurgery is to accurately project computed tomography (CT) or magnetic resonance imaging (MRI) data into the operative field for defining anatomical landmarks, pathological structures and tumor margins. To achieve this goal, different image-guided and computer-assisted, so-called "neuronavigation" systems have been developed to provide precise spatial information to neurosurgeons (29). The main clinical utilities in modern neurosurgery are: localization of small intracranial lesions, skull-base surgery, intracerebral biopsies, intracranial endoscopy, functional neurosurgery and spinal navigation (30). Localization of small intracranial tumors is currently the most frequent application of navigated technology in neurosurgery for adults and children (31,32). Safe navigation of the critical anatomy is of prime importance in skull base surgeries. This is particularly the case for endoscopic skull base surgery (ESBS), where the surgeons work within millimeters of neurovascular structures (7,33). Navigation can help with accurate localization of important anatomic structures such as the carotid artery or cranial nerves, particularly if they are deep in the tumor, as in medial sphenoid wing meningioma or transsphenoidal pituitary surgery [34,35]. Navigation can also help make approaches through the petrosal bone safer, sparing structures of the inner ear. In orofacial approaches to the C2 vertebra [36], and in tumor recurrences at the skull base with a changed anatomic situation, navigation can help the neurosurgeon to operate accurately. Ear, nose, and throat surgeons also use navigation as an aid for nasal, paranasal, and otologic interventions (37,38,39). Endoscopic procedures are mostly performed by the freehand technique. However, navigation can also be very useful in endoscopy and help the surgeon to make more precisely planned trajectories; for instance, in cases of narrow access through the foramen of Monro into the third ventricle, to prevent damage to the fornix, either the navigation is performed with the pointer to mark the burr hole and decide the trajectory, or the endoscope itself is used as the pointer of the navigation system (40,41). Inside the third ventricle, Muacevic used navigation in ventriculostomies passing through the narrow passage of the foramen of Monro (42). Schröder and Gaab (43) also used navigation in aqueductoplasty to choose the best approach to the aqueduct. Furthermore, navigation can provide information about the structures behind the surface of the ventricle wall that is visible through the endoscope. Therefore, the best place can be selected for removal of specimens or for defining the perforation point (44). Above the third ventricle, fenestration of a cavum vergae can be supported by stereotactic methods to plan an approach sparing eloquent cortical areas. In addition, in cases of very large cysts around the lateral ventricles, the perforation into the lateral ventricle can be dangerous if the membrane is thick and not translucent. Orientation inside the cyst is difficult if anatomic landmarks are missing. In these cases, the point of perforation into the lateral ventricle can be defined by navigation. The endoscopic image itself can be further digitized and mapped into the image space of the navigation system. The spatial resolution allows performance of distance measurements and, theoretically, also coagulation in the presence of bleeding (45).
Zamorano used endoscopic navigation in over 150 tumor cases including biopsies, colloid cyst removals, and tumor extirpations (46). There is also a report on using navigation in pseudotumor cerebri. In such cases, ventriculoperitoneal shunts were used instead of lumboperitoneal shunts, and favorable outcomes were reported [47,48]. Moreover, endoscopic navigation has been used for decompression of superior orbital fissure fracture (49). In functional neurosurgery, intracranial interventions in the deep brain structures for pain, extrapyramidal movement disorders, and particularly epilepsy are the classic indications for applying the frame-based technique. Navigation can also be successfully used in epilepsy surgery for localization and introduction of subdural strip and grid electrodes or for implanting deep-brain electrodes in the hippocampus. Equally interesting is navigated orientation during ablative surgery in cases of epilepsy, such as hippocampectomy, to localize the resection extent more accurately (50,51,52,53,54). Laborde reported the drainage of abscesses guided by navigation (55). It is also possible to use these catheters for local antibiotic therapy. Rohde introduced a catheter by navigation into intracranial bleedings for evacuation and lysis therapy (56). Other authors used the navigated placement of catheters in connection with interstitial radiation therapy (57). The magnetic-force-based Computer Assisted Neurosurgery System has been used for epilepsy surgery to localize targets accurately in the operative field (58). It can also be used in the treatment of Parkinson's disease (59). Furthermore, the use of neuronavigation and electrophysiology in surgery of subcortically located lesions in the sensorimotor strip has been reported [60]. There are some peculiarities of navigation related to the anatomy in spinal surgery. The spinal column is much more flexible, and its position is therefore dependent on the position of the patient. Skin markers are thus not applicable. Registration must be performed after preparation of the vertebrae, using their characteristic anatomic landmarks, either with a paired-point technique or in combination with surface matching. The dynamic reference frame is fixed on a spinous process inside the operating field to register any displacement close to the working space. The main clinical indication for computer-aided navigation in spinal surgery is the transpedicular insertion of screws in the thoracolumbar region (61,62,63,64).
Advantages and disadvantages
There are some concerns about navigation systems, including time-consuming calculation and registration, restriction of space and view inside the operating field, and so on. Nevertheless, there are many advantages that can be helpful in the process of the operation (65,66). An error by the navigation device in the white matter, even in the range of 3 mm or 4 mm, is still lower than when relying only on neurosurgical knowledge. The neurosurgeon is able to calculate the localization and approach a small lesion accurately, and therefore feels more confident. The corticotomy is associated with less stress, particularly in eloquent areas such as the central region, in some pathologies such as low grade gliomas. After opening the dura, we may not be able to see the pathology if it is not visible superficially. In such situations, we can find the right sulcus by using the navigation device. Thus, another important function of surgical navigation is providing guidance to subcortical tumors (67,68,69). With improved CT/MR imaging, a more precise anatomic localization is possible, and navigation techniques help to make atraumatic openings and approaches (70). Intra-operative brain deformation (brain shift) limits the accuracy of image-guided neurosurgery systems (71). Ultrasound imaging, being simple, fast and real-time, has become an alternative to MR imaging, which is an expensive system for brain shift calculation (9,72). The main challenge, owing to speckle noise and artifacts in US images, is to perform an accurate and fast registration of US images with pre-operative MR images (73). There are some ideas that suggest an efficient point-based registration method called Coherent Point Drift (CPD), which has been implemented and compared to the conventional ICP method. A fusion technique can also be used in this system. It may help lower the cost by allowing previously acquired non-stereotactic images such as MRI to be fused to a low-cost stereotactic scan such as CT without contrast. Incorporation of diffusion tensor imaging (DTI) and fiber tracking into the image data set also improves the precision of the system and prevents damage to the eloquent areas. Visualization of certain low-grade tumors may be enhanced by fusing color-encoded fluid-attenuated inversion recovery (FLAIR) images with high-resolution volume MRI. PET, cerebral blood volume or MRS maps may be fused with a stereotactic study to identify the optimal point for brain biopsy (1). The navigation system also reduces the length of surgery, lowers the incidence of wound infections, and shortens the length of hospital stay (74). In addition, it reduces the risk of neurological morbidity by allowing the surgeon to determine the relationships of the lesion and surgical approach to nearby critical brain structures. Moreover, visualization of critical surface or draining veins may be facilitated using these systems. Accurate, safe intracranial access for the purpose of biopsy by a variety of techniques can be provided by surgical navigation systems (75,76). One of the shortcomings of this system is brain shift and local tissue deformation. This can be minimized, as described in the next section.
Methods
As ultrasound imaging is simple, fast, and real-time, it is an alternative to MR imaging, which is considered an expensive system for brain shift calculation. The main challenge, owing to speckle noise and artifacts in US images, is to perform an accurate and fast registration of US images with pre-operative MR images (9). It is important to ensure that the DRF (dynamic reference frame) is securely affixed, so its relationship to the head cannot be disturbed after the registration procedure. Brain shift is generally directed straight down, toward the center of the earth. Therefore, by orienting the patient's head in a position where a vertical surgical trajectory is possible, the surgeon will only need to compensate for brain shift in one direction (i.e., the brain and tumor are lower than expected) rather than for a complex three-dimensional slide that may occur when operating from a different direction. Diuretic usage should be minimized, and compensating for volume loss by limiting or reversing hyperventilation may be a useful strategy (77). When only part of the resection involves critical brain, the surgeon should work on that area first while the shift is minimal. En bloc removal of tumors should be performed as much as possible [78]. Surgeons should avoid puncturing any cystic components or entering the ventricles until all critical areas of the tumor boundary have been surgically defined. Placing large cotton balls in the resection cavity can usually expand the cavity to the preoperative dimensions (79).
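As a companion to the fiducial-based fit sketched earlier, the point-set registration mentioned above (ICP, or the cited CPD variant) can be illustrated as follows. This is a bare-bones ICP loop on synthetic point clouds, not an implementation of the CPD algorithm or of any clinical US-MR registration pipeline; the point clouds, perturbation, and iteration count are all made up for illustration.

```python
# Hedged sketch: a minimal iterative closest point (ICP) loop for rigid
# point-set registration, illustrative of the US/MR point-based registration
# discussed in the text (not the CPD algorithm itself). Synthetic data only.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_fit(src, dst):
    """SVD-based rigid fit mapping src onto dst (same idea as the earlier sketch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    """Repeatedly match each source point to its nearest target point and re-fit."""
    tree = cKDTree(target)
    moved = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)             # closest-point correspondences
        R, t = best_rigid_fit(moved, target[idx])
        moved = moved @ R.T + t                # apply the incremental rigid transform
    return moved

rng = np.random.default_rng(1)
target = rng.uniform(0, 100, size=(400, 3))                 # e.g. points from the MR surface
angle = np.deg2rad(3.0)
R0 = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ R0.T + np.array([2.0, -1.5, 1.0])         # e.g. points from the US surface

aligned = icp(source, target)
resid, _ = cKDTree(target).query(aligned)                   # distance to nearest target point
print(f"Mean nearest-neighbour residual after ICP: {resid.mean():.4f}")
```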
Future
It is difficult to provide any prognosis for the development and role of navigated surgery in the future, as computer technology is changing so rapidly. Manwaring reported a magnetic emitting source fixed directly to the patient's skull, producing a nonlinear magnetic field through the brain. Flexible catheters with a magnetic tip could be introduced along these nonlinear magnetic trajectories to the target point. Navigational instruments are presently undergoing a process of evolution, with many types developing from different technical realizations (80). A presumable future of navigation seems to depend on microsurgical robots. There are some ideas about combining these two innovations to solve the most important shortcoming of neuronavigation: brain shift. This could be achieved by injecting microsurgical robots through the vessels and synchronizing registration while observing the brain from various aspects and different points. By this route, signals can be transferred to a central computer outside the body. By integrating this information, a 3D map of different points of the brain and its pathologies can be formed. Simultaneously, these robots can play some therapeutic roles (81,82,83).

Conclusion
Navigation systems have some limitations in clinical applications and are expensive. Nevertheless, they can be useful for surgeons and patients. Most reports indicate that these devices are cost-effective, may reduce surgical morbidity, and enhance outcome.
|
2016-05-12T22:15:10.714Z
|
2012-01-01T00:00:00.000
|
{
"year": 2012,
"sha1": "44002cf84632edde5d0ae92fe28e768ca24c593b",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "44002cf84632edde5d0ae92fe28e768ca24c593b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
260453714
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of wall-pressure fluctuation sources from DNS of turbulent channel flow
The sources of wall-pressure fluctuation in turbulent channel flow are studied using a novel framework. The wall-pressure power spectral density (PSD) $(\phi_{pp}(\omega))$ is expressed as an integrated contribution from all wall-parallel plane pairs, $\phi_{pp}(\omega)=\int_{-1}^{+1}\int_{-1}^{+1}\Gamma(r,s,\omega)\,\mathrm{dr}\,\mathrm{ds}$, using the Green's function. $\Gamma(r,s,\omega)$ is termed the net source cross spectral density (CSD) between two wall-parallel planes, $y=r$ and $y=s$. Direct Numerical Simulation (DNS) data at friction Reynolds number of $180$ and $400$ are used to compute $\Gamma(r,s,\omega)$. Analysis of the net source CSD, $\Gamma(r,s,\omega)$ reveals that i) the location of dominant sources responsible for the premultiplied peak in the power spectra at $\omega^+\approx 0.35$ and the wavenumber spectra at $\lambda^+\approx 200$ is in the buffer layer at $y^+\approx 16.5$ and $18.4$ for $Re_{\tau}=180$ and $400$, respectively. ii) The contribution from a wall-parallel plane (located at distance $y^+$ from the wall) to wall-pressure PSD is log-normal in $y^+$ for $\omega^+>0.35$. iii) A dominant inner-overlap region interaction of the sources is observed at low frequencies. Further, the decorrelated features of the wall-pressure fluctuation sources are analyzed using spectral Proper Orthogonal Decomposition (POD). Spectral POD reveals that i) the phase of a spectral POD mode plays a large role in determining its contribution to wall-pressure PSD through constructive/destructive interference. ii) Near-wall wavepackets that contain sources travelling away from the wall are the dominant contributors to wall-pressure PSD at high frequencies. iii) At low frequencies, such wall-normal wavepacket structure of the dominant wall-pressure source is not observed. The wall-parallel waves that constructively interfere contribute the most to wall-pressure PSD.
Introduction
In a turbulent flow, wall-pressure fluctuations excite flexible structures. The fluctuations' spatio-temporal features determine their relation to the far-field sound radiation resulting from the structural excitation. Pressure fluctuations in an incompressible flow are governed by the Poisson equation,
$$\frac{1}{\rho}\nabla^{2}p = -2\,\frac{\partial U_i}{\partial x_j}\frac{\partial u_j}{\partial x_i} - \frac{\partial^{2}}{\partial x_i\,\partial x_j}\left(u_i u_j - \overline{u_i u_j}\right), \qquad (1.1)$$
with appropriate boundary conditions. Here, $p$ is the fluctuating pressure, $U_i$ and $u_i$ are the mean and fluctuating components of the flow velocity, respectively, and $\rho$ is the fluid density. The linear and quadratic (in fluctuation) source terms in the above equation are called the rapid and slow terms, respectively (Pope 2001). The Poisson equation implies that the pressure fluctuation is a global quantity, meaning that the velocity at every point in the domain affects $p$ at every point. This makes it harder to use arguments that are based on local length and velocity scales (which work very well for local mean and fluctuating velocities) to analyse pressure fluctuations. Several experiments (Willmarth & Wooldridge (1962), Corcos (1964), Blake (1970), Farabee & Casarella (1991), Gravante et al. (1998), Tsuji et al. (2007), Klewicki et al. (2008)) and numerical simulations (Kim (1989), Choi & Moin (1990), Chang III et al. (1999), Hu et al. (2006), Sillero et al. (2013), Park & Moin (2016), Panton et al. (2017)) have studied the spatio-temporal features of wall-pressure fluctuation in turbulent boundary layer and channel flows at different Reynolds numbers. Reviews by Willmarth (1975), Bull (1996) and Blake (2017) summarize the features of wall-pressure fluctuations in wall-bounded flows. Farabee & Casarella (1991) measured wall-pressure fluctuations in a boundary layer at friction Reynolds numbers $Re_\tau = u_\tau\delta/\nu$ ranging from $1000$ to $2000$, where $u_\tau=\sqrt{\tau_w/\rho}$ is the friction velocity, $\delta$ is the boundary layer thickness, $\nu$ is the kinematic viscosity of the fluid, $\tau_w$ is the wall-shear stress and $\rho$ is the density of the fluid. Non-dimensionalization of the power spectral density (PSD) based on $\rho$, $U_o$ and $\delta^*$, where $\delta^*$ is the displacement thickness of the boundary layer, yielded collapse of the low frequency region ($\omega\delta/u_\tau<5$). The mid frequency region ($5<\omega\delta/u_\tau<100$) showed collapse with outer flow variables ($u_\tau$, $\delta$, $\tau_w$), but the high frequency region ($\omega\delta/u_\tau>0.3Re_\tau$) collapsed with inner flow variables ($u_\tau$, $\nu$, $\tau_w$). An overlap region ($100<\omega\delta/u_\tau<0.3Re_\tau$) showed collapse with both outer and inner flow variables. Based on the wall-normal location associated with the corresponding non-dimensional variable group, Farabee & Casarella (1991) hypothesized the dominant contributions to the low, mid and high frequency regions of the wall-pressure PSD to be from the unsteady potential region (above the boundary layer), the outer region and the inner region of the boundary layer, respectively. Chang III et al. (1999) analyzed the contribution of individual source terms to the wall-pressure fluctuation PSD using a Green's function formulation for $Re_\tau=180$ channel flow. The contributions from the viscous sublayer, buffer, logarithmic and outer regions to the wall-pressure fluctuation wavenumber spectra were investigated by computing partial pressures from sources located in the corresponding regions. The buffer region contribution was seen to be the most dominant for both slow and rapid terms over most of the wavenumber range. The logarithmic region was seen to contribute to the low wavenumbers through the rapid term. The viscous region was observed to contribute only to the high wavenumbers through both rapid and slow terms.
Panton et al. (2017) investigated wall-pressure fluctuations using DNS datasets of turbulent channel flow at $Re_\tau$ ranging from 180 to 5200. The premultiplied wall-pressure streamwise wavenumber spectra showed a peak around $\lambda_1^+\approx 200-300$. Here, $\lambda_1^+$ is the non-dimensional streamwise wavelength based on inner units. Because the peak wavenumber scaled with inner units, Panton et al. (2017) believed the location of the corresponding velocity sources to be in the inner region of the channel. Further, with increasing Reynolds number, the low wavenumber contribution was observed to increase in magnitude and separate from the high wavenumber contribution. Since the dominant low wavenumbers did not scale with inner units, the corresponding velocity sources were believed to be in the outer region of the channel. Hence, the outer region contribution to wall-pressure becomes important at very high Reynolds numbers.
We investigate the decorrelated features of wall-pressure fluctuation sources in the turbulent channel using Spectral Proper Orthogonal Decomposition (spectral POD). Spectral POD was originally introduced by Lumley (2007) and recently analyzed by Towne et al. (2018) for its relation to Dynamic Mode Decomposition and Resolvent analysis. It involves the eigendecomposition of the cross spectral density of the quantity of interest. The technique has been used previously as a post-processing tool to infer wavepackets in axisymmetric jets. We use this technique to obtain the decorrelated contribution from each wall-parallel plane to the wall-pressure fluctuation PSD. To our knowledge, this is the first work that uses spectral POD to analyze wall-pressure fluctuation sources.
Unlike the methodology of Chang III et al. (1999), the proposed method takes into account the wall-normal cross-correlation of the source terms and accounts for the phase relationships between different wall-parallel planes. The contribution of the cross-correlation between sources in any two wall-parallel planes to the wall-pressure PSD is quantified as a function of frequency. Also, the collapse of the frequency and wavenumber spectra based on inner and outer flow variables, as carried out in Farabee & Casarella (1991) and Panton et al. (2017), does not yield such information on the wall-normal distribution, insight into which can be obtained from the proposed analysis. A 'net source distribution function' (also termed 'net source' for brevity) is defined, which yields the integrated effect of all sources in a particular wall-parallel plane. The cross spectral density (CSD) of the net source function is computed from the generated DNS database. The net source CSD, when doubly integrated in the wall-normal direction, yields the wall-pressure PSD and, when singly integrated, yields the CSD between the wall-pressure fluctuation and the net source. In addition to the spectral features, spectral POD is used to identify the decorrelated contribution from each wall-parallel plane. We present a parallel implementation of the analysis framework that is streaming, thus enabling processing of large data sets.
The paper is organized as follows. We discuss the DNS simulation details in section 2. The theory and implementation of the proposed analysis framework to investigate wall-pressure sources are discussed in sections 3.1 and 3.2, respectively. Finally, in section 4, we discuss the spectral features of the net source function, the spectral POD results and their relevance to the wall-pressure fluctuation PSD using DNS data at $Re_\tau=180$ and $400$.
DNS simulation details
The incompressible Navier-Stokes equations are solved using the collocated finite volume method of Mahesh et al. (2004) in a frame of reference moving with the bulk velocity of the fluid, as done by Bernardini et al. (2013). Better prediction of the convection velocities and of the high wavenumber component of the streamwise velocity fluctuations was observed by Bernardini et al. (2013) in the moving frame of reference. We observed a slightly better prediction of the high frequency component of the wall-pressure frequency spectra with the moving frame of reference formulation. The method is second-order accurate in space. We use the Crank-Nicolson time integration scheme to ensure second-order accuracy in time and to allow for larger timesteps. The method uses a least-squares cell-centered pressure gradient reconstruction to ensure discrete kinetic energy conservation in space. This ensures stability at large Reynolds number without adding numerical dissipation.
We define the subscripts $x$, $y$ and $z$ to denote the streamwise, wall-normal and spanwise directions. The computational domain is a Cartesian box with side lengths $L_x=6\pi\delta$, $L_y=2\delta$ and $L_z=2\pi\delta$. A long streamwise domain was chosen to include the large scale contribution within the domain. Also, the long domain eliminates periodicity effects otherwise seen in the low-frequency streamwise wavenumber frequency spectra (not shown). The spurious high levels of the low wavenumber region observed in the results of Choi & Moin (1990) at low frequencies are not present in the current simulation results (not shown). Table 1 shows the grid sizes $(N_x, N_y, N_z)$ for $Re_\tau=180$ and $400$. The mesh is uniform in the streamwise and spanwise directions, and a hyperbolic tangent spacing is used in the wall-normal direction with a stretching factor of 2.07 for both $Re_\tau$. The mesh spacing in viscous units $(\Delta x^+, \Delta z^+, \Delta y^+_w, \Delta y^+_c)$ is given in table 1, where $\Delta y^+_w$ and $\Delta y^+_c$ are the wall-normal mesh spacings at the wall and at the centerline, respectively. A superscript of $+$ indicates non-dimensionalization with respect to the inner layer variables $u_\tau$ and $\nu$. The resolution is sufficient to resolve the near-wall fine scale features. The velocity of the moving frame of reference $(U^+_{bref})$ is chosen to be 15.8 and 17.8 for $Re_\tau=180$ and $400$, respectively. These values are close to the actual bulk velocity in the stationary frame of reference. A non-dimensional body force of 1 is applied in the streamwise direction throughout the domain. A slip velocity equal to the negative of the frame velocity is applied at the walls. Periodic boundary conditions are used in the streamwise and spanwise directions. A timestep of $5\times10^{-4}\,\delta/u_\tau$ is used for both simulations. The flow is initially transient and subsequently reaches a statistically stationary state when the discharge starts to oscillate around a mean value. The total simulation time for both the $Re_\tau=180$ and $400$ cases is $8\delta/u_\tau$ after the initial transient period.
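The hyperbolic tangent clustering mentioned above can be generated, for instance, with the mapping sketched below. The exact functional form and the precise meaning of the stated stretching factor are not given in the text, so the formula used here (a common tanh mapping with parameter 2.07) and the example point count are assumptions for illustration only.

```python
# Hedged sketch: a typical hyperbolic-tangent wall-normal grid for a channel of
# half-height delta = 1 (walls at y = -1 and y = +1). The mapping below is a
# common choice; the paper does not specify its exact form.
import numpy as np

def tanh_grid(n_points: int, gamma: float = 2.07) -> np.ndarray:
    """Wall-normal coordinates clustered near both walls, y in [-1, 1]."""
    xi = np.linspace(-1.0, 1.0, n_points)          # uniform computational coordinate
    return np.tanh(gamma * xi) / np.tanh(gamma)    # clustered physical coordinate

y = tanh_grid(129)        # example point count (the actual N_y is given in table 1)
dy = np.diff(y)
print(f"Near-wall spacing : {dy[0]:.5f} (x delta)")
print(f"Centerline spacing: {dy[len(dy)//2]:.5f} (x delta)")
print(f"Stretching ratio  : {dy[len(dy)//2] / dy[0]:.2f}")
```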
Theory
We first write the solution to equation 1.1 using the Green's function formulation. The streamwise and spanwise extents are taken to be infinite and the frame of reference is assumed to be stationary. We use zero normal derivative of the pressure fluctuation as the boundary condition at the top and bottom walls. The Stokes component of pressure arising from the non-zero wall-normal derivative of the wall-pressure fluctuation at the top and bottom wall has been shown to be negligible when compared to the rapid and slow terms for high Reynolds number flows (Panton et al. 2017). The wall-normal coordinates of the top and bottom wall are $y=-1$ and $y=+1$, respectively. The temporal Fourier transform of a quantity $g(t)$ is denoted by $\hat{g}(\omega)$. The pressure fluctuation satisfies
$$\hat{p}(k_1,y,k_3,t)=\int_{-1}^{+1}G(y,y',k_1,k_3)\,\hat{f}(k_1,y',k_3,t)\,\mathrm{d}y',$$
where $f(x,y,z,t)$ is the right hand side source term in the Poisson equation (equation 1.1), and $\hat{p}(k_1,y,k_3,t)$ and $\hat{f}(k_1,y,k_3,t)$ denote the Fourier transforms in the spanwise and streamwise directions of $p(x,y,z,t)$ and $f(x,y,z,t)$, respectively; $\hat{p}(k_1,y,k_3,t)$ is defined similarly to $\hat{f}(k_1,y,k_3,t)$. The Green's function $G(y,y',k_1,k_3)$ can be shown to be
$$G(y,y',k_1,k_3)=\begin{cases}\dfrac{\cosh\left(k(y'-1)\right)\cosh\left(k(y+1)\right)}{2k\sinh(k)\cosh(k)}, & y\leqslant y',\\[3mm]\dfrac{\cosh\left(k(y'+1)\right)\cosh\left(k(y-1)\right)}{2k\sinh(k)\cosh(k)}, & y>y',\end{cases}$$
where $k=\sqrt{k_1^2+k_3^2}$, for all combinations of $k_1$ and $k_3$ except $k_1=k_3=0$, for which a separate expression is obtained (equation 3.4). In order to ensure uniqueness of the Green's function when $k=0$, we have made use of the condition that the instantaneous average of the top and bottom wall-pressure fluctuation is zero. The above Green's function has been previously used by Kim (1989) to obtain wall-pressure fluctuations from the Kim et al. (1987) simulation. The wall-pressure fluctuation at a point $(x,z)$ on the bottom wall is
$$p(x,z,t)\big|_{\mathrm{wall}}=\int_{-1}^{+1}f_G(x,y,z,t)\,\mathrm{d}y,$$
where $f_G(x,y,z,t)$ is termed the 'net source' because it includes the contribution from all sources in a wall-parallel plane together with the Green's function. It includes the contribution from all streamwise and spanwise wavenumbers. The Green's function essentially assigns a weight to each wavenumber $(k_1,k_3)$ component of the source in the wall-parallel plane.
Note that the function $f_G(x,y,z,t)$ is homogeneous in the streamwise and spanwise directions.
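A direct numerical evaluation of this Green's function is straightforward. The sketch below implements the piecewise expression above on a wall-normal grid for a given wavenumber pair; the grid and wavenumbers are arbitrary illustrative values, and the special $k_1=k_3=0$ case (equation 3.4) is not handled.

```python
# Hedged sketch: evaluate the channel Poisson Green's function G(y, y', k1, k3)
# from the piecewise expression above (the k = 0 special case is not handled).
import numpy as np

def greens_function(y, yp, k1, k3):
    """G(y, y', k1, k3) for a channel with walls at y = -1 and y = +1."""
    k = np.hypot(k1, k3)
    if k == 0.0:
        raise ValueError("k1 = k3 = 0 requires the separate expression (3.4)")
    denom = 2.0 * k * np.sinh(k) * np.cosh(k)
    if y <= yp:
        return np.cosh(k * (yp - 1.0)) * np.cosh(k * (y + 1.0)) / denom
    return np.cosh(k * (yp + 1.0)) * np.cosh(k * (y - 1.0)) / denom

# Weight that each wall-parallel plane y' receives when the pressure is
# evaluated on the bottom wall (y = +1), for an arbitrary wavenumber pair.
y_wall = +1.0
yp_grid = np.linspace(-1.0, 1.0, 11)
weights = [greens_function(y_wall, yp, k1=5.0, k3=2.0) for yp in yp_grid]
for yp, w in zip(yp_grid, weights):
    print(f"y' = {yp:+.1f}  ->  G = {w:.4e}")
```

The decay of these weights away from the wall illustrates why, at a fixed wavenumber, planes closer to the wall are weighted more heavily in the wall-pressure signal.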
In order to characterize the features of the net source function $f_G(x,y,z,t)$, the net source CSD $\Gamma(r,s,\omega)$ is defined as the cross spectral density of $f_G$ between the wall-parallel planes $y=r$ and $y=s$. It can be related to the five-dimensional CSD $\varphi_{ff}(r,s,k_1,k_3,\omega)$ of the pressure Poisson source terms, defined in equation 3.8, through the Green's function. The PSD of the spatially homogeneous wall-pressure fluctuation $\phi_{pp}(\omega)$ is related to the net source CSD by
$$\phi_{pp}(\omega)=\int_{-1}^{+1}\int_{-1}^{+1}\Gamma(r,s,\omega)\,\mathrm{d}r\,\mathrm{d}s. \qquad (3.9)$$
In order to analyze the contribution from a particular wall-parallel plane at $y=r$, we include its cross-correlation with every other wall-normal location $y=s$ by integrating $\Gamma(r,s,\omega)$ along $s$,
$$\Psi(r,\omega)=\int_{-1}^{+1}\Gamma(r,s,\omega)\,\mathrm{d}s.$$
The resulting function $\Psi(r,\omega)$ can be shown to be the CSD of the wall-pressure fluctuation and the net source at $r$. We will call $\Psi(r,\omega)$ the wall-pressure fluctuation net source cross spectral density (WPF-NS CSD). The wall-pressure PSD can be expressed in terms of $\Psi(r,\omega)$ as
$$\phi_{pp}(\omega)=\int_{-1}^{+1}\Psi(r,\omega)\,\mathrm{d}r.$$
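Numerically, once $\Gamma(r,s,\omega)$ is available on a wall-normal grid, both $\Psi(r,\omega)$ and $\phi_{pp}(\omega)$ follow from quadrature over the non-uniform grid. The sketch below shows this bookkeeping with trapezoidal weights on a synthetic $\Gamma$; the real $\Gamma$ comes from the DNS database, and the sizes and data here are placeholders.

```python
# Hedged sketch: integrate a net-source CSD Gamma(r, s, omega) in the
# wall-normal direction to obtain Psi(r, omega) and phi_pp(omega).
# Gamma here is synthetic; in the paper it is computed from DNS data.
import numpy as np

ny, nfreq = 65, 4
y = np.tanh(2.07 * np.linspace(-1, 1, ny)) / np.tanh(2.07)   # stretched wall-normal grid

# Synthetic Hermitian CSD: Gamma(r, s, omega) = conj(Gamma(s, r, omega)).
rng = np.random.default_rng(3)
A = rng.normal(size=(ny, 8, nfreq)) + 1j * rng.normal(size=(ny, 8, nfreq))
gamma = np.einsum('imw,jmw->ijw', A, A.conj())

# Trapezoidal quadrature weights for the non-uniform wall-normal grid.
w = np.zeros(ny)
w[1:-1] = 0.5 * (y[2:] - y[:-2])
w[0], w[-1] = 0.5 * (y[1] - y[0]), 0.5 * (y[-1] - y[-2])

# Psi(r, omega): integral over s of Gamma(r, s, omega).
psi = np.einsum('ijw,j->iw', gamma, w)
# phi_pp(omega): integral over r of Psi(r, omega), real by Hermitian symmetry.
phi_pp = np.einsum('iw,i->w', psi, w).real

print("phi_pp at each frequency:", np.round(phi_pp, 3))
print("max imaginary residual  :", abs(np.einsum('iw,i->w', psi, w).imag).max())
```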
The decorrelated patterns of net source function at each frequency is obtained by the spectral POD of Γ (r, s, ω).
are the spectral POD eigenvalue and eigenfunction (mode) pairs that satisfy (3.14) The spectral POD eigenvalues are arranged in decreasing order. We can express Ψ (r, ω) and φ pp (ω) using the spectral POD eigenvalues and eigenfunctions. and In equation 3.16, the wall-pressure PSD is expressed as sum of positive contributions will be later used to identify the spectral POD modes that are dominant contributors to wall-pressure PSD.
Further, we can show that the contribution of each mode can be expressed in terms of Φ_i^n(ω) = ∫_{−1}^{+1} Φ_i(y, ω) dy (equation 3.17), where ∠ denotes the phase of the complex number that follows it. Using equation 3.17 in equation 3.16, we obtain

φ_pp(ω) = Σ_i λ_i(ω) ( ∫_{−1}^{+1} |Φ_i(y, ω)| cos(∠Φ_i(y, ω) − ∠Φ_i^n(ω)) dy )².

From the above equation, we can observe that the eigenvalue, magnitude and phase of the spectral POD mode all play a role in determining its contribution to wall-pressure PSD. Sources contained in wall-normal regions where the phase is in the range |∠Φ_i(y, ω) − ∠Φ_i^n(ω)| < π/2 undergo destructive interference with the sources contained in the region where π/2 < |∠Φ_i(y, ω) − ∠Φ_i^n(ω)| < π. Therefore, the interference of the sources from different wall-normal regions represented by a spectral POD mode plays a role in determining the net contribution to wall-pressure PSD from the mode.
We recall some of the properties of the spectral POD modes and eigenvalues. The spectral POD modes are orthonormal functions,

∫_{−1}^{+1} Φ_i(y, ω) Φ_j*(y, ω) dy = δ_ij.   (3.19)
δ_ij is the Kronecker delta. Substituting r = s = y in equation 3.13, the PSD of the net source Γ(y, y, ω) can be expressed as a linear combination of the magnitudes of the spectral POD modes as

Γ(y, y, ω) = Σ_i λ_i(ω) |Φ_i(y, ω)|².   (3.20)

Further, the integral of the net source PSD is the sum of the spectral POD eigenvalues. Using equations 3.19 and 3.20, we have

∫_{−1}^{+1} Γ(y, y, ω) dy = Σ_i λ_i(ω).   (3.21)
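A compact sketch of how the spectral POD eigenpairs of Γ(r, s, ω) can be computed at a single frequency, by discretizing the integral eigenvalue problem with quadrature weights (the weighting scheme and function name are illustrative assumptions, not the paper's code):

```python
import numpy as np

def spectral_pod(gamma_w, y):
    """Spectral POD of the net source CSD at one frequency. gamma_w is the
    Hermitian matrix Gamma(r_i, s_j, w) on the wall-normal grid y. Discretizing
    the integral eigenvalue problem with quadrature weights W and symmetrizing
    via W^(1/2) gives a standard Hermitian eigenproblem."""
    w = np.gradient(y)                       # simple quadrature weights (assumption)
    sqw = np.sqrt(w)
    a = sqw[:, None] * gamma_w * sqw[None, :]
    lam, psi = np.linalg.eigh(a)             # real eigenvalues of a Hermitian matrix
    order = np.argsort(lam)[::-1]            # arrange in decreasing order
    lam, psi = lam[order], psi[:, order]
    phi = psi / sqw[:, None]                 # modes orthonormal w.r.t. the weights
    return lam, phi
```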
Implementation
The five-dimensional CSD ϕ_ff(r, s, k_1, k_3, ω) defined in equation 3.8 contains all pertinent information on velocity field sources from the cross-correlation of two wall-normal locations. However, computing the function is extremely memory intensive. For the Re_τ = 400 case, assuming 2000 frequencies, we would need ≈ 1220 TB to store ϕ_ff. We use a streaming parallel implementation procedure to compute the net source CSD Γ(r, s, ω) that makes the computation feasible.
The source term in equation 1.1 is computed from the DNS and stored. The stored data is divided into multiple chunks to compute the ensemble average in equation 3.8. For a given chunk, the source terms are first converted to the stationary frame of reference and then Fourier transformed in x, z and t. The Fourier transforms are then used to update the net source CSD. Details of the parallel implementation are provided in the Appendix.
A total of 16000 timesteps are used to obtain the net source CSD Γ (r, s, ω) for both Re τ . The number of timesteps in each chunk is 2000 and 50% overlap is used in time to increase statistical convergence.
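The chunked, Hann-windowed, 50%-overlap averaging described here is essentially a Welch-type cross-spectral estimate. A simplified serial sketch for a single pair of time series is given below; normalization constants and array handling are illustrative, and the production code is parallel and streams wall-parallel slices instead.

```python
import numpy as np

def chunked_csd(fa, fb, n_chunk=2000, overlap=0.5):
    """Welch-type cross-spectral density estimate of two time series using
    Hann-windowed chunks with 50% overlap; the 8/3 factor compensates the
    window power loss (sampling-rate normalization omitted for brevity)."""
    step = int(n_chunk * (1.0 - overlap))
    win = np.hanning(n_chunk)
    acc, n = 0.0, 0
    for s in range(0, len(fa) - n_chunk + 1, step):
        a = np.fft.rfft(win * (fa[s:s + n_chunk] - np.mean(fa[s:s + n_chunk])))
        b = np.fft.rfft(win * (fb[s:s + n_chunk] - np.mean(fb[s:s + n_chunk])))
        acc = acc + np.conj(a) * b
        n += 1
    return (8.0 / 3.0) * acc / n
```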
Results and discussion
First, we discuss the spectral features of the wall-pressure fluctuations obtained from the finite volume solver. Then, the wall pressure net source cross spectral density (WPF-NS CSD) and the dominant decorrelated net source patterns obtained using spectral POD are discussed.
DNS wall-pressure fluctuations
The one-sided PSD of the obtained Re_τ = 180 and 400 wall-pressure fluctuations scaled with inner variables is shown in figure 1a. The streamwise wavenumber spectra of the fluctuations at the two Re_τ are shown in figure 2a. Both the PSD and the wavenumber spectra at Re_τ = 180 agree well with the results of Choi & Moin (1990). The high frequency region, ω+ = ων/u_τ² > 1, shows a −5 power-law decay which can be clearly seen for the higher Reynolds number (Re_τ = 400). The high wavenumber region of the wavenumber spectra plotted in figure 2a also shows a −5 decay in the region k_1+ = k_1ν/u_τ > 0.1 for the Re_τ = 400 case. The premultiplied power spectra plotted in figure 1b for both Re_τ show a peak at ω_p+ = 0.35. A peak at the same frequency has been previously observed by Hu et al. (2006) for Re_τ up to 1440. Similar to the power spectra, the premultiplied streamwise wavenumber spectra in figure 2b also show a peak, at k_p+ = k_1ν/u_τ = 0.027. This peak has also been previously observed by Panton et al. (2017) for Re_τ over the range 180 to 5000. The wall-pressure fluctuation PSD computed from the net source CSD using equation 3.9 agrees with that obtained directly from the solver (figure 1a) for both Re_τ (not shown). We will further investigate the distribution of the net sources that give rise to this premultiplied PSD peak in the next section.
Wall-pressure source distribution analysis
The wall-parallel plane that contributes the most to the wall-pressure PSD can be determined from the real part of the WPF-NS CSD Ψ(y+, ω+) (defined in section 3.1). Figure 3a shows the WPF-NS CSD in premultiplied form normalized by the root mean square (RMS) of the wall-pressure fluctuations, y+ω+Re(Ψ+(y+, ω+))/p̄²+, where y+ is the distance from the wall in viscous units. The coordinates (ω_p+, y_p+) of the peak value in the contours occur at (0.35, 16.5) and (0.35, 18.4) for Re_τ = 180 and 400, respectively. The frequency coordinate of the peak in the contour levels (ω+ = 0.35) is the same as the premultiplied power spectra peak location shown in figure 1b. Therefore, the corresponding wall-normal coordinate yields the location of the wall-parallel plane that contributes the most to the premultiplied power spectra peak. Specifically, it is the cross-correlations with this dominant plane that contribute the most. This coincidence is not surprising since integrating figure 3a in the wall-normal direction yields figure 1b (normalized by p̄²). The wall-normal coordinate of the peak indicates that it is the correlations with the buffer region that contribute the most to the wall-pressure PSD at the Reynolds numbers considered. The location of the peaks inside the buffer region is very close in viscous units for both Re_τ, thereby suggesting a possibility of their scaling with inner units.

[Figure 3: a) Real part of the premultiplied WPF-NS CSD y+ω+Re(Ψ(y+, ω+)) for Re_τ = 400 (filled contours) and Re_τ = 180 (line contours). b) Premultiplied net source PSD y+ω+Γ(y+, y+, ω+) for both Re_τ.]

[Figure 4: a) Real part of the high-frequency premultiplied WPF-NS CSD for Re_τ = 400 (filled contours) and Re_τ = 180 (line contours). b) High-frequency premultiplied fractional net source PSD for both Re_τ.]
The phase difference between the wall-pressure and the dominant net source, obtained from the argument of Ψ(y_p+, ω_p+), is 0.013π and 0.016π for Re_τ = 180 and 400 respectively, and is therefore very small. Hence, the dominant net sources and the wall-pressure fluctuation are in phase with each other. The contour levels of the normalized WPF-NS CSD plotted in figure 3a almost overlap in the range ω+ > 0.3 ≈ 10^{−0.5}. This indicates that the high frequency contribution to the RMS scales in inner units. However, in the near wall region (y+ < 10), the overlap in the contours is observed for a much larger frequency range, ω+ > 0.16 ≈ 10^{−0.8}. This implies that for most of the frequency range, the contribution to wall-pressure PSD from the near-wall region scales in inner units.
Next, we investigate whether the net source PSD can be used to infer the location of the dominant source of wall-pressure fluctuation instead of the WPF-NS CSD. Figure 3b shows the contours of the premultiplied net source PSD Γ(y+, y+, ω+) in fractional form for both Re_τ. The main contribution to the net source PSD is seen to be from the region around y+ ≈ 30 and at frequencies much lower than ω+ ≈ 0.35. There is no signature of the distinct premultiplied peak observed in figure 3a. From visual inspection at low frequencies (ω+ < 1), the contours in figure 3b do not have a shape similar to those in figure 3a. However, at high frequencies ω+ > 1, we observe from figures 4a and 4b that the contour shapes near the wall (y+ < 30) are almost identical. Hence, the wall-normal phase relationships and the cross-correlation matter at low frequencies and do not matter much at high frequencies. Therefore, the net source PSD Γ(y+, y+, ω+) is a good proxy for the WPF-NS CSD Ψ(y+, ω+) at high frequencies to obtain the pattern of the net sources.

[Figure 5: a) Premultiplied WPF-NS CSD y+Ψ+(y+, ω+)/φ_pp+(ω+) at selected frequencies (ω+ ≈ 0.35, 0.5, 0.7, 1, 2) for Re_τ = 180 (empty markers) and 400 (filled markers). b) Comparison of fitted log-normal profiles (filled markers) to the WPF-NS CSD for Re_τ = 400 (empty markers). c) Variation of the mean µ(ω+) and d) the standard deviation σ(ω+) of the fitted log-normal profile for Re_τ = 400.]
The WPF-NS CSD (normalized with the wall-pressure PSD) is plotted in premultiplied form for selected frequencies between ω+ = 0.35 and ω+ = 2 in figure 5a. Due to the normalization, each profile has unit area under it. From the figure, we can observe that the curves for Re_τ = 180 and 400 are very close to each other for the different frequencies plotted. Further, visual inspection shows that we can model the profiles using a log-normal function in y+. Therefore, normalized log-normal profiles of the form

(1 / (y+ σ(ω+) (2π)^{1/2})) exp( −(ln y+ − µ(ω+))² / (2σ(ω+)²) )

are fitted to the Re_τ = 400 data for different ω+ using a nonlinear least squares fit and plotted in figure 5b. The mean and standard deviation of the fitted log-normal curves characterize the location and the width of the dominant net source respectively as a function of frequency. The correlation between the planes contained in this width has a sizeable contribution to wall-pressure PSD. Figures 5c and 5d show the mean (µ(ω+)) and standard deviation (σ(ω+)) as a function of frequency respectively. We define the location of the dominant net source y_p+(ω+) as y_p+(ω+) = exp(µ(ω+)). From figure 5c, we observe that the location of the dominant net source varies with frequency. We take the dominant net source region at each frequency to extend from exp(µ(ω+) − ασ(ω+)) to exp(µ(ω+) + ασ(ω+)) in y+ (equations 4.2 and 4.3).

[Figure 6: a) Variation of C_α(ω+) for Re_τ = 400. b) Partial wall-pressure fluctuation spectra from sources that extend from the wall to a particular y+ for Re_τ = 180 and 400 in the near wall region. The vertical solid, dashed and dash-dotted lines denote y+ = 30, y/δ = 0.5 for Re_τ = 180 and y/δ = 0.5 for Re_τ = 400, respectively. The horizontal dash-dotted line denotes a partial contribution equal to 1.]
The parameter α is the proportion of the standard deviation used to define the width of the net source. Using the above expressions, the width ∆y+(ω+; α, σ) can be shown to be

∆y+(ω+; α, σ) = C(ω+; α, σ) y_p+(ω+),   C(ω+; α, σ) = e^{ασ(ω+)} − e^{−ασ(ω+)}.   (4.4)

The variation of C_α(ω+) for α = 1, 2 using the Re_τ = 400 data is shown in figure 6a. The proportionality constant is observed to vary slowly over most of the frequency range considered. Hence, over this range, the width of the dominant net source is approximately proportional to its location. The contribution of the interaction between the net sources in the inner and overlap/outer regions to wall-pressure PSD can be investigated using the WPF-NS CSD. Figure 6b shows the partial contribution (normalized by the wall-pressure PSD)

∫_0^{y+} ∫_0^{y+} Γ(r+, s+, ω+) dr+ ds+ / φ_pp(ω+)

from the net sources contained between the wall and a given y+ for two selected frequencies. For the low frequency ω+ = 0.2, we observe that the partial contribution first increases and then decreases. However, a monotonically increasing behavior is observed for the high frequency. In order to investigate the implication of the non-monotonic low frequency behavior, we split the domain 0 < y/δ < 0.5 into an inner region 0 < y+ < 30 and an overlap/outer region 30 < y+ < 0.5Re_τ. The contribution to wall-pressure PSD from sources within y/δ = 0.5 can then be split accordingly into an inner-inner part, an overlap/outer-overlap/outer part, and a cross part from the correlation between the two regions. From figure 6b, we observe that at the lower frequency ω+ = 0.2, this contribution from sources within y/δ = 0.5 is smaller than the inner region contribution ∫_0^{30} ∫_0^{30} Γ(r+, s+, ω+) dr+ ds+ / φ_pp(ω+), which implies

−2 ∫_0^{30} ∫_{30}^{0.5Re_τ} Re[Γ(r+, s+, ω+)] dr+ ds+ > ∫_{30}^{0.5Re_τ} ∫_{30}^{0.5Re_τ} Γ(r+, s+, ω+) dr+ ds+.   (4.6)

Note that the left hand side of the above inequality is a positive real number. This indicates that i) the contribution from the cross-correlations between the inner and the overlap/outer region dominates the contribution from the outer/overlap region alone, and ii) the phase difference between the net sources in these two regions is predominantly in the range π/2 to π or −π to −π/2. In other words, a positive (or negative) low frequency event in the near-wall region is predominantly correlated with a negative (or positive) low frequency event in the overlap/outer region. Therefore, the observed non-monotonic behavior at low frequencies implies a dominant interaction between the net sources in the inner and outer regions of the channel at such frequencies. Such inner-outer interaction at long streamwise wavelengths has been previously observed for the streamwise velocity fluctuations by Del Álamo & Jiménez (2003) and Morrison (2007), and is the reason for the mixed scaling (De Graaff & Eaton 2000) of the streamwise velocity RMS peak in wall-bounded flows.
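A sketch of the log-normal fit of figure 5b, using a standard nonlinear least-squares routine; the initial guess and the exact functional form of the normalized profile are assumptions consistent with the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_profile(yplus, mu, sigma):
    """Normalized log-normal profile in y+ (unit area under the curve)."""
    return np.exp(-(np.log(yplus) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        yplus * sigma * np.sqrt(2.0 * np.pi))

def fit_lognormal(yplus, profile):
    """Fit the real-valued, PSD-normalized WPF-NS CSD profile at one frequency.
    Returns (mu, sigma); the dominant source location is then y+_p = exp(mu)."""
    (mu, sigma), _ = curve_fit(lognormal_profile, yplus, profile, p0=(np.log(15.0), 0.5))
    return mu, sigma
```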
Spectral POD of net source CSD
Before we investigate the spectral POD modes of the net source CSD, we first examine the relevance of the modes to a realization of the net source function and the resulting wall-pressure fluctuation. We can decompose the wall-pressure fluctuation p(x, 0, z, t) at a typical point (x, z) on the wall by expressing its Fourier transform p̂(x, 0, z, ω) (equation 3.5) in terms of the spectral POD modes. We choose x = 0, z = 0 to simplify the discussion and define p(t) to be the wall-pressure fluctuation at x = 0, z = 0. We then obtain the expansion of equation 4.7. Rearranging the integral and writing Φ_i(y, ω) as |Φ_i(y, ω)| e^{−i∠Φ_i(y,ω)} leads to equations 4.9 and 4.10. The resulting equation expresses the wall-pressure fluctuation as a sum of contributions from each spectral POD mode. The individual contributions are decorrelated, with δ denoting the Dirac delta function in the corresponding cross-correlation. Note that since we integrate over all wavenumbers, the contribution of coherent structures of all length scales is included.
The wall-normal phase velocity of the net sources represented by the i-th spectral POD mode can be quantified as a function of the wall-normal distance using the phase ∠Φ_i(y, ω). We define a local wall-normal phase velocity c_i+(y+, ω+) in viscous units as

c_i+(y+, ω+) = −ω+ / k_i+(y+, ω+),   (4.11)

where the local wavenumber k_i+(y+, ω+) is defined as k_i+(y+, ω+) = ∂∠Φ_i(y+, ω+)/∂y+. Note the negative sign in the definition of c_i+(y+, ω+): we want a positive wave speed to indicate a wave travelling away from the wall. This is similar to estimating the instantaneous frequency of a temporal signal using the Hilbert transform (Huang & Shen 2014). Figure 7 shows the spectral POD eigenvalues (normalized by their sum) for different frequencies. The frequencies range from ω+ = 0.1 to ω+ = 2. The contribution of the dominant spectral POD mode to the integral of the net source PSD (equation 3.21) increases with increasing frequency. Also, fewer spectral POD modes remain important with increasing frequency. Figure 8 shows the fractional contribution γ_i(ω+) of each spectral POD mode to the wall-pressure PSD for frequencies ω+ = 0.35, 0.5, 0.7 and 1. At the low frequency ω+ = 0.35, the dominant spectral POD mode contributes less than 5% to the wall-pressure PSD. However, the fractional contribution of the dominant mode increases with frequency. At ω+ = 0.7 and 1, nearly 10% and 50% of the wall-pressure PSD is due to the dominant spectral POD mode, respectively, for both Re_τ. Further, if we consider the first two dominant spectral POD modes, they contribute 80% and 90% of the wall-pressure PSD at ω+ = 0.7 and 1 respectively. This shows that the contribution of the dominant spectral POD modes to the wall-pressure spectra becomes increasingly important with increasing frequency.

[Figure 8: Contribution of the first 20 spectral POD modes (normalized by the wall-pressure PSD) to wall-pressure PSD for a) Re_τ = 180 and b) Re_τ = 400 at different frequencies.]
Further, at ω+ = 0.7, the second dominant spectral POD mode is the dominant wall-pressure mode (the spectral POD mode with the largest contribution to wall-pressure PSD), whereas at ω+ = 1 the dominant spectral POD and wall-pressure modes coincide. The magnitude and phase of the first two dominant spectral POD modes at these two frequencies are shown in figure 9. Clearly, we observe that the dominant modes have a structure corresponding to wavepackets. The envelope and phase of the wavepackets have similar shapes for both Re_τ, which indicates similarity of the dominant modes at high frequencies. The envelope shows that the dominant modes correspond to sources in the near wall region (y+ < 30). The first and second dominant mode envelopes have one and two lobes respectively. Since the slope of the phase variation of both dominant spectral POD modes is negative, equation 4.11 implies that these modes correspond to sources moving away from the wall. The fractional contribution plotted in figure 8 shows that at low frequencies, the dominant spectral POD mode is not the dominant wall-pressure mode. In order to investigate this further, we plot the dominant wall-pressure mode index as a function of frequency in figure 11. In the frequency range 0.55 < ω+ < 1 and for ω+ > 1, the dominant wall-pressure mode is the second and the first dominant spectral POD mode respectively. At low frequencies ω+ < 0.55, the dominant wall-pressure mode index is larger than or equal to two. The dominant spectral POD mode at these low frequencies is a wavepacket located in the overlap region whose phase variation shows an enclosed wave travelling towards the wall. The contribution from the enclosed wave to wall-pressure PSD also undergoes destructive interference. Hence, the low frequency dominant spectral POD modes are not the dominant wall-pressure modes. Note that this overlap region contribution is decorrelated with the dominant inner region contribution as they stem from different modes. At frequencies lower than ω+ = 0.25, we observe that the dominant spectral POD modes do not resemble wall-normal wavepackets (not shown). We further investigate the frequency dependence of the destructive interference of the dominant spectral POD mode in figure 12. The envelope and the phase of the wavepacket are used to identify destructively interfering regions. In the figure, the vertically hatched and cross-hatched regions of the mode interfere destructively with each other. In the cross-hatched and vertically hatched regions, the phase satisfies |∠Φ_i(y, ω) − ∠Φ_i^n(ω)| < π/2 and π/2 < |∠Φ_i(y, ω) − ∠Φ_i^n(ω)| < π, respectively. With increasing frequency, the ratio of the cross-hatched to the vertically hatched region increases. Therefore, the destructive interference in the contribution from the dominant spectral POD mode to wall-pressure PSD decreases. Hence, the dominant spectral POD mode becomes the dominant wall-pressure mode for ω+ ≳ 1 (figure 11).

[Figure 12: Comparison of destructively interfering regions of the dominant spectral POD mode as a function of frequency. In the cross-hatched and vertically hatched regions, |∠Φ_i(y+, ω+) − ∠Φ_i^n(ω+)| < π/2 and π/2 < |∠Φ_i(y+, ω+) − ∠Φ_i^n(ω+)| < π, respectively.]
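The local wall-normal phase velocity of equation 4.11 can be estimated from a discretized mode as sketched below; phase unwrapping and the finite-difference derivative are implementation choices, and the result can be noisy where the mode magnitude is small.

```python
import numpy as np

def wall_normal_phase_velocity(phi_mode, yplus, omega_plus):
    """Local wall-normal phase velocity c+_i(y+, w+) = -w+ / (d angle(Phi_i)/dy+)
    for one discretized spectral POD mode (equation 4.11)."""
    phase = np.unwrap(np.angle(phi_mode))   # unwrap the mode phase along y+
    k_local = np.gradient(phase, yplus)     # local wavenumber d(phase)/dy+
    return -omega_plus / k_local
```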
Next, we investigate the first two dominant wall-pressure modes at ω+ = 0.35, which together contribute approximately 50% to the wall-pressure PSD, in figure 13. Note that the premultiplied spectra peak occurs at this frequency (figure 1b). The magnitude and phase variation show that the dominant mode does not resemble a near-wall wavepacket. The slope of the phase variation of the dominant mode is zero around the peaks of the envelopes. Hence, there is no signature of an enclosed wave traveling away from or towards the wall, typical of a wavepacket. However, the contributions from the dominant mode predominantly undergo constructive interference. This is because the phase of the mode varies between the two dashed lines. Further, the WPF-NS CSD includes contributions from all streamwise and spanwise wavenumbers, which are wall-parallel waves. Therefore, we believe that it is the wall-parallel waves that constructively interfere and dominate the contribution to wall-pressure PSD at low frequencies.
Overall, the phase of the spectral POD mode plays an important role in determining its contribution to wall-pressure PSD. Wall-normal wavepackets are seen to be the dominant contributor to wall-pressure PSD at high frequencies. At low frequencies, such wavepacket structure of the dominant source is not observed and we believe that the wall-parallel waves contribute the most to wall-pressure PSD.
Summary
We present a novel framework to analyze the sources of wall-pressure fluctuation in turbulent channel flow. A net source function f_G(y, t) is defined whose integral in the wall-normal direction gives the wall-pressure fluctuation, i.e., p(t) = ∫_{−1}^{+1} f_G(y, t) dy. The spectral properties of the defined net source function are studied by computing its CSD using the generated DNS dataset at Re_τ = 180 and 400. The WPF-NS CSD shows a premultiplied peak at ω+ = 0.35 for both Re_τ. The wall-normal location corresponding to the peak is y+ = 16.5 and 18.4 for Re_τ = 180 and 400, respectively. Therefore, the peak in the premultiplied wall-pressure PSD at ω+ = 0.35 is due to the correlation with the sources in the buffer layer. The WPF-NS CSD has a log-normal behavior in y+ for ω+ > 0.35. The location of the dominant wall-parallel plane obtained from the mean of the log-normal profile varies exponentially with frequency. The wall-normal width of the dominant region obtained from the standard deviation of the log-normal profile is approximately proportional to the location of the dominant plane. At low frequencies, a dominant inner and overlap/outer region interaction is observed at both Re_τ.
Decorrelated net source patterns are obtained using spectral POD of the net source CSD. The dominant spectral POD mode is a coherent wavepacket that encloses sources travelling away from the wall for ω+ > 0.25. The partial contribution of the dominant mode to wall-pressure PSD increases with frequency. The dominant spectral POD mode is not the dominant wall-pressure mode for frequencies ω+ < 1. This is due to the destructive interference of the contributions from different wall-normal locations.
The two dominant wall-pressure modes at frequencies ω+ = 0.7 and 1 are coherent wavepackets. The first two modes together contribute approximately 80% and 90% of the wall-pressure PSD at ω+ = 0.7 and 1, respectively. Such wavepacket structure of the dominant wall-pressure modes is not observed at the lower frequency ω+ = 0.35. The two dominant wall-pressure modes at ω+ = 0.35, which contribute 50% of the wall-pressure PSD, are representative of constructively interfering wall-parallel waves.
The analysis framework presented in this paper can be used to quantitatively understand the contribution of large scale coherent motions in the outer region at very high Reynolds numbers. Such contributions are believed to be the reason for the increasing low wavenumber contribution to wall-pressure RMS (Panton et al. 2017). The wall-pressure fluctuation net source CSD shows that sources correlated with the buffer layer are essential contributors to the premultiplied power spectra peak at ω+ = 0.35. However, in wall-modelled large eddy simulations where the first point is in the logarithmic layer, one would not resolve the net source terms that are in the buffer region. Hence, additional modeling specific to pressure fluctuations might be necessary in wall-modelled large eddy simulations to account for this unresolved contribution from the buffer region. The dominant wall-pressure modes obtained using spectral POD could be a starting point for this extension.
Appendix

The source term data from the DNS are obtained by interpolation and stored in the source term array f as f = {f_{i,j,k,l} | i = 1, ..., N_x, j = 1, ..., N_y, k = 1, ..., N_z, l = 1, ..., N_t}. Multiple processors are used to transfer the data from the Cartesian decomposition of the solver to a wall-parallel decomposition of the computational domain. The wall-parallel decomposition facilitates the computation of the wavenumber frequency cross spectra of the source terms. In order to obtain the fluctuation, the temporal mean of the array f at each spatial point is subtracted to ensure that it has zero mean. The time series at each spatial point is multiplied by the Hanning window function w_p = sin²(πp/N_t) in order to avoid spectral leakage. The wall-parallel slice data is overwritten by its three-dimensional Fourier transform. The processors are split in the wall-normal and time directions to carry out the task in parallel, and we use the parallel-FFTW (Frigo & Johnson 2005) library to carry out the Fourier transform.
As discussed in the previous section, the memory requirement to store the five-dimensional function ϕ_ff(r, s, k_1, k_3, ω) is too large. We store and append the net source cross spectral density sum array Γ^s (defined below) instead. The possible pairs {r_i, s_j}, i, j = 1, ..., N_y, are split among multiple processors. For each (r_i, s_j) pair, we read the arrays f̂_{:,i,:,:} and f̂_{:,j,:,:} from the scratch space and update the running sum Γ^s_{i,j,:}, where Γ^s = {Γ^s_{i,j,k} | i = 1, ..., N_y, j = 1, ..., N_y, k = −N_t/2, ..., N_t/2 − 1}. The factor 8/3 in the update equation (A 6) accounts for the reduction in the spectral magnitude due to windowing (Bendat & Piersol 2011). The update to Γ^s_{i,j,:} given in equation (A 6) is carried out in chunks along the frequency dimension due to the limited memory available in a cluster node. The net source cross spectral density array Γ is then defined by dividing the Γ^s array by the number of chunks n_c, i.e. Γ = {Γ_{i,j,k} | Γ_{i,j,k} = Γ^s_{i,j,k}/n_c, i = 1, ..., N_y, j = 1, ..., N_y, k = −N_t/2, ..., N_t/2 − 1}. (A 7) We store and append only half of the entire Γ^s array since Γ_{j,i,k} = Γ*_{i,j,k}. We use 50% overlap between the chunks to increase statistical convergence. As new chunk data become available, the net source cross spectral density sum Γ^s is updated.
Note that the Green's function had to be evaluated in quadruple precision for Re τ = 400 because for some wavenumbers, both the numerator and denominator were so large that it could not be stored in double precision. However, when divided, the resulting number could be stored in double precision. The above post-processing methodology is parallel, aware of the limited memory available in a supercomputer cluster node and can be used to analyze even larger channel flow datasets obtained for higher friction Reynolds numbers.
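A sketch of this extended-precision workaround, using the mpmath library to evaluate the Green's function before casting the ratio back to double precision (the precision level is an illustrative choice):

```python
import mpmath as mp

def greens_function_highprec(y, yp, k, dps=50):
    """Evaluate the channel Green's function with 'dps' decimal digits before
    casting back to double, so that the overflow of cosh/sinh at large k
    cancels when the ratio is formed."""
    with mp.workdps(dps):
        denom = 2 * k * mp.sinh(k) * mp.cosh(k)
        if y <= yp:
            val = mp.cosh(k * (yp - 1)) * mp.cosh(k * (y + 1)) / denom
        else:
            val = mp.cosh(k * (yp + 1)) * mp.cosh(k * (y - 1)) / denom
        return float(val)
```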
The spectral POD modes are then obtained by computing the eigenvalues {λ_{i,l}} and the corresponding eigenvectors of the stored net source CSD at each frequency.
|
2019-11-19T19:55:10.000Z
|
2019-11-19T00:00:00.000
|
{
"year": 2019,
"sha1": "15cd1e56b0a349a3563f2a22a7055a058d9b5b6d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "15cd1e56b0a349a3563f2a22a7055a058d9b5b6d",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
96439395
|
pes2o/s2orc
|
v3-fos-license
|
Gaia and the Galactic Center Origin of Hypervelocity Stars
We use new Gaia measurements to explore the origin of the highest velocity stars in the Hypervelocity Star Survey. The measurements reveal a clear pattern in the B-type stars. Halo stars dominate the sample at speeds about 100 km/s below Galactic escape velocity. Disk runaway stars have speeds up to 100 km/s above Galactic escape velocity, but most disk runaways are bound. Stars with speeds about 100 km/s above Galactic escape velocity originate from the Galactic center. Two bound stars may also originate from the Galactic center. Future Gaia measurements will enable a large, clean sample of Galactic center ejections for measuring the massive black hole ejection rate of hypervelocity stars, and for constraining the mass distribution of the Milky Way dark matter halo.
1. INTRODUCTION

Hills (1988) first proposed that a 3-body exchange between a pair of stars and a massive black hole (MBH) can eject "hypervelocity stars" (HVSs) at 1000 km s−1 velocities from the Galactic center. We discovered the first HVS (Brown et al. 2005). This 3 M⊙ main sequence B star moves with a Galactic rest frame velocity >670 km s−1, about twice Galactic escape velocity at its current distance of 100 kpc. Only a gravitational interaction with a massive compact object can plausibly explain its motion.
The discovery of HVS1 inspired the HVS Survey, a targeted radial velocity survey of B-type stars that should not exist at faint magnitudes in the halo (Brown et al. 2006, 2007b). Twenty-one stars are significantly unbound in radial velocity alone. The extreme velocities, the short-lived nature of the stars, their distribution in Galactic latitude, and their overall numbers match theoretical expectations for the Galactic center origin proposed by Hills (1988). However, the measurements provide only an indirect link to the MBH.
Alternative origins for unbound stars include ejection from the Galactic disk through binary disruption (Blaauw 1961; Poveda et al. 1967) and ejection from the Large Magellanic Cloud (Boubert & Evans 2016; Boubert et al. 2017). Galactic disk ejections are called "runaways" (Blaauw 1961; Greenstein & Sargent 1974). The disruption of a binary by a supernova, where the surviving star is released at the orbital velocity of the progenitor binary, can yield unbound runaways in extreme circumstances (e.g. Tauris 2015). The first observational example of an unbound runaway is the B star HD 271791 (Heber et al. 2008; Przybilla et al. 2008a). The first example of an unbound Large Magellanic Cloud ejection is the B star HE 0437−5439 (Edelmann et al. 2005; Przybilla et al. 2008b; Erkal et al. 2018). Compact objects like white dwarfs can have higher binary disruption ejection velocities than main sequence stars. The first observational examples are the unbound subdwarf O star US 708 (Hirsch et al. 2005; Justham et al. 2009; Wang & Han 2009; Geier et al. 2015), the white dwarf LP 40−365 (Vennes et al. 2017; Raddi et al. 2018), and three new white dwarfs found with Gaia (Shen et al. 2018).
The European Space Agency mission Gaia has begun a new era of precision astrometry. The trajectories of unbound stars hold the key to their origin. Measuring radial velocity with km s−1 precision is easy with modern spectroscopy; measuring tangential velocity, the product of distance and proper motion, is difficult. Known hypervelocity stars are at distances of 50 to 100 kpc; their expected proper motions are <1 mas yr−1. The newly released Gaia Data Release 2 provides improved proper motions for many HVSs (Gaia Collaboration et al. 2018a). Here, we use Gaia measurements to determine the origin of stars from the HVS Survey (Brown et al. 2007b) on the basis of their trajectory and velocity. In Section 2 we define the sample and compare Gaia proper motions with previous Hubble Space Telescope (HST) measurements. In Section 3 we evaluate the origin of these stars on the basis of computed trajectories and ejection velocities. The results are in Section 4, and we conclude in Section 5. We identify Galactic center HVSs, Galactic disk runaways, and Galactic halo stars with different but overlapping velocities; the highest velocity stars are probably Galactic center ejections.
Sample
We study 42 radial velocity outliers from the Hypervelocity Star Survey (Brown et al. 2007b). We include all stars with heliocentric radial velocity transformed to the Galactic frame v_rf > +275 km s−1,

v_rf = v_helio + 11.1 cos l cos b + 247.24 sin l cos b + 7.25 sin b,   (1)

where l and b are Galactic longitude and latitude, respectively, and we assume the Sun is moving with respect to the local standard of rest as measured by Schönrich et al. (2010) and the Galactic disk circular velocity is 235 km s−1 (Reid et al. 2014). We choose v_rf > +275 km s−1 because of the significant absence of negative velocity outliers in the HVS Survey. The four most negative velocity stars have −300 < v_rf < −275 km s−1, implying that the 42 stars with v_rf > +275 km s−1 are a relatively clean sample of ejected stars with minimal halo star contamination.
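Equation (1) translates directly into code; the following sketch (function name and example values are ours) evaluates the Galactic rest-frame radial velocity for a given heliocentric radial velocity and Galactic coordinates:

```python
import numpy as np

def v_rest_frame(v_helio, l_deg, b_deg):
    """Galactic rest-frame radial velocity from equation (1), with l, b in
    degrees and all velocities in km/s."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (v_helio + 11.1 * np.cos(l) * np.cos(b)
            + 247.24 * np.sin(l) * np.cos(b) + 7.25 * np.sin(b))

# Example (illustrative values): a star at l = 180 deg, b = 45 deg
print(v_rest_frame(400.0, 180.0, 45.0))
```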
The HVS Survey targeted stars selected by color with no kinematic selection. The spectroscopy is 99% complete. Stellar atmosphere fits establish that the majority of unbound stars are main sequence B stars (Brown et al. 2014, 2015, and references therein). The bound stars are also probably main sequence B stars on the basis of their velocity distribution. The absence of −300 km s−1 stars in the HVS Survey implies that the +300 km s−1 stars must have lifetimes less than their ∼1 Gyr orbital turnaround time (Brown et al. 2007a; Kollmeier & Gould 2007; Yu & Madau 2007). Thus they are B stars, and we calculate stellar distances using Padova main-sequence tracks (Girardi et al. 2004; Marigo et al. 2008; Bressan et al. 2012) with an estimated precision of 15%.
We transform heliocentric distances to the Galactic frame assuming that the Sun is located 8 kpc from the Galactic center (Camarillo et al. 2018).
The HVS Survey exclusively samples the stellar halo. The 17 < g < 20.25 apparent magnitude limit corresponds to approximately 25 < R < 120 kpc in Galactocentric radial distance. For completeness, we include five stars from the bright 15 < g < 19.5 portion of the initial HVS Survey (Brown et al. 2007b). The bright stars are nearby (10 < R < 30 kpc) and bound, but four have HST proper motion measurements for comparison with Gaia. Table 1 lists the sample of 42 stars with their coordinates and apparent g-band SDSS magnitude. We sort the table by decreasing v_rf. We refer to bound objects with the letter B followed by their target number in the HVS Survey, and unbound objects by their published HVS number. We also list the east-west and north-south components of proper motion, (µ_α, µ_δ), obtained from three sources.
Proper Motions
Gaia Data Release 2 contains proper motions for 39 of the 42 stars listed in Table 1. The three missing stars (HVS14, HVS23, and B149) have too few Gaia measurements for a robust solution. We drop them from further consideration. The remaining 39 velocity outliers have median Gaia proper motion error ±0.73 mas yr−1. The brightest stars have errors of only ±0.11 mas yr−1.

HST Comparison

Brown et al. (2015) publish independent proper motion measurements made with HST for 14 stars. The HST proper motions were measured relative to background galaxies. They thus provide a test of Gaia's absolute reference frame. Figure 1 (upper panel) plots the difference between Gaia and HST proper motion divided by the errors summed in quadrature. Clipping two >3σ outliers (see below), the average differences ∆µ_α(Gaia − HST) = −0.09 ± 0.29 mas yr−1 and ∆µ_δ(Gaia − HST) = −0.15 ± 0.25 mas yr−1 are consistent with zero.
The comparison also reveals that the most problematic HST measurements are the brightest stars. For bright stars, Brown et al. (2015) paired short and long exposures to tie the stars to faint background galaxies. This approach likely introduced additional error. We add ±0.5 mas yr−1 in quadrature to the published HST error for those objects (B434, B485, B711, B711, HVS7, HVS8). The median HST proper motion error for the 14 stars is then ±0.91 mas yr−1; the best-measured star has an error of ±0.34 mas yr−1. The HST measurements highlight the value of obtaining pointed observations with long exposure times. Although Gaia errors are 3 times better than HST errors for bright stars, HST errors are 4 times better than Gaia errors for faint stars like HVS1. Errors are comparable in size at g ≃ 18.5 mag. A weighted mean would maximize the information from HST and Gaia; however, we do not want to add measurements that include possible systematic error.
We adopt Gaia proper motions for the seven bright g < 18 stars with σ HST > 2σ Gaia . This subset includes all stars observed with paired short+long exposures in the HST program. We adopt a weighted mean for the three 18 < g < 19 stars where σ HST and σ Gaia are within a factor of two (HVS4, HVS6, and HVS9). We adopt the HST proper motions for the four g > 19 stars with σ Gaia > 2σ HST (HVS1, HVS10, HVS12, HVS13).
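For the three intermediate-brightness stars, the adopted value is an inverse-variance weighted mean of the Gaia and HST measurements; a minimal sketch of that combination (per proper-motion component, with made-up example values) is:

```python
import numpy as np

def weighted_mean_pm(mu_values, sigma_values):
    """Inverse-variance weighted mean of independent proper-motion measurements
    (per component), returning the combined value and its 1-sigma error."""
    w = 1.0 / np.asarray(sigma_values, float) ** 2
    mean = np.sum(w * np.asarray(mu_values, float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Example: combine a Gaia and an HST measurement of mu_alpha (illustrative values)
print(weighted_mean_pm([-0.45, -0.30], [0.60, 0.50]))
```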
GPS1 Comparison
We compare Gaia proper motions with the Gaia-PanStarrs1-SDSS catalog (GPS1, Tian et al. 2017). GPS1 proper motions are based on astrometric positions from ground-based PanStarrs1, SDSS, and 2MASS. Despite the longer time baseline of ground-based observations, σ_GPS1 ≃ 5σ_Gaia. Thus we do not use GPS1 values here. The median GPS1 proper motion error for the 33 stars is ±2.69 mas yr−1; the best-measured star has an error of ±1.68 mas yr−1.
ANALYSIS
We evaluate the probability of origin on the basis of computed trajectories and ejection velocities. We select a gravitational potential model, trace the trajectory of each star backwards in time, and calculate the required ejection velocity from that position in the Milky Way.
We estimate likelihood from the density distribution of simulated trajectories as they cross the Galactic plane.
Gravitational Potential Model
To address the origin of HVS ejections from the Galactic center, we require a gravitational potential model that fits observed mass measurements from the Galactic center to the outer halo. We adopt the three component bulge-disk-halo model of Kenyon et al. (2014). The model has a flat 235 km s−1 rotation curve and a 10^12 M⊙ halo mass consistent with recent Gaia measurements from the orbits of globular clusters and dwarf galaxies (Gaia Collaboration et al. 2018b; Fritz et al. 2018; Posti & Helmi 2018; Watkins et al. 2018).
The results are insensitive to the choice of potential model because the stars are on nearly radial trajectories. However, the choice of potential model determines whether the stars are bound or unbound.
We determine effective Galactic escape velocity, v esc , by dropping a test particle from rest at the virial radius. At the Solar circle R = 8 kpc, v esc = 580 km s −1 consistent with Solar neighborhood escape velocity measurements (Smith et al. 2007;Piffl et al. 2014). At the radius of influence of the supermassive black hole, v esc ≥ 900 km s −1 (Kenyon et al. 2008). Only a gravitational interaction with the supermassive black hole can eject a main sequence star at >900 km s −1 (Hills 1988). At the median R = 55 kpc depth of the HVS Survey sample, v esc = 350 km s −1 .
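The effective escape velocity defined this way is the speed needed to coast from radius r to rest at the virial radius. A generic sketch for any spherically averaged potential follows; the toy point-mass example is ours and is not the Kenyon et al. (2014) bulge-disk-halo model used in the paper.

```python
import numpy as np

def escape_velocity(phi, r, r_vir):
    """Effective escape velocity v_esc(r) = sqrt(2 (phi(r_vir) - phi(r))):
    the speed needed to coast from radius r (kpc) to rest at the virial radius,
    for any spherically averaged potential phi(r) in (km/s)^2."""
    return np.sqrt(2.0 * (phi(r_vir) - phi(r)))

# Toy point-mass potential of 1e12 Msun (G in kpc (km/s)^2 / Msun); illustrative only.
G = 4.30091e-6
phi_toy = lambda r: -G * 1.0e12 / r
print(escape_velocity(phi_toy, 8.0, 250.0))   # ~1000 km/s for this toy model
```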
Trajectory Calculation
Calculations are straightforward for the nearly radial trajectories of the stars. We start from the measured position and velocity of each star, and integrate its trajectory backward in time through the Galactic potential. We record where each trajectory crosses the Galactic plane, and its distance from the Galactic center. We also record the ejection velocity required to launch the star from the disk-crossing location, given the angle of the trajectory relative to Galactic rotation. Figure 2 plots the proper motion measurement for each star. Because we know position and radial velocity, a proper motion in Figure 2 corresponds to a unique trajectory through the Galaxy given a gravitational potential model. The blue ellipses in Figure 2 are the loci of proper motions with trajectories that cross the Galactic plane at R = 20 kpc, the edge of the Milky Way disk. The red circle is the region of minimum Galactic disk ejection velocity, the region where Galactic disk ejections can most easily escape the gravitational pull of the Milky Way. A Galactic Center trajectory is marked by the large black dot. Measurement errors broaden the range of possible trajectories. Thus we draw final conclusions from the density distribution of possible trajectories. For each star, we perform 10 6 Monte Carlo trajectory calculations assuming that measurement errors are normally distributed. Distance and proper motion are the dominant sources of error. We then measure the density of simulated trajectories as they cross the Galactic disk plane.
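A schematic sketch of this Monte Carlo procedure is given below: observables are resampled within their errors, each draw is integrated backward in time with a simple leapfrog scheme, and the Galactic-plane crossing point is recorded. The helper that converts sampled observables to Galactocentric phase space, the units, and the step sizes are all illustrative assumptions, not the production pipeline.

```python
import numpy as np

def backward_to_plane(x0, v0, accel, dt=-1.0e-2, n_max=200000):
    """Integrate one trajectory backward in time with a leapfrog scheme and
    return the point where it crosses the Galactic plane (z = 0), or None.
    x0, v0 are Galactocentric position/velocity arrays; accel(x) is the model
    acceleration; units and step size are schematic."""
    x, v = np.array(x0, float), np.array(v0, float)
    a = accel(x)
    for _ in range(n_max):
        v_half = v + 0.5 * dt * a
        x_new = x + dt * v_half
        a = accel(x_new)
        v = v_half + 0.5 * dt * a
        if x[2] * x_new[2] < 0.0:                 # sign change: crossed z = 0
            frac = x[2] / (x[2] - x_new[2])
            return x + frac * (x_new - x)
        x = x_new
    return None

def monte_carlo_crossings(draw_phase_space, accel, n=10000, seed=0):
    """Collect plane-crossing points for n Monte Carlo draws.
    draw_phase_space(rng) is a hypothetical helper that resamples distance,
    proper motion and radial velocity within their errors and returns (x0, v0)."""
    rng = np.random.default_rng(seed)
    crossings = [backward_to_plane(*draw_phase_space(rng), accel) for _ in range(n)]
    return np.array([c[:2] for c in crossings if c is not None])
```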
We evaluate likelihood of origin based on the 0.3173 (1σ), 0.0455 (2σ), and 0.0027 (3σ) thresholds of the trajectory distribution at the Galactic plane. This approach is valid for testing the origin of unbound stars that cross the Galactic plane only once, or bound stars with lifetimes less than their orbital turn-around time; this approach is invalid for long-lived stars that cross the plane more than once. The measurements provide trajectory constraints for about half of the sample, namely for the stars with R < 60 kpc.
Ejection Velocity
The ejection velocity required to explain the present position and motion of each star provides another constraint on its origin. In the absence of a massive black hole, the speed limit for ejection from a stellar binary is set by the finite sizes of the stars (Leonard 1991). The orbital velocity of an equal-mass pair of stars separated by their radii is equal to the escape velocity from the surface of the stars. Because stars on the main sequence have a quasi-linear relation between mass and radius, most stars in the Milky Way share a common escape velocity from their surface of about 600 km s −1 . To achieve higher binary orbital speeds, main sequence stars would have to orbit inside each other, which is impossible. A 600 km s −1 speed limit is optimistic; the speed can only be lower if mass transfer, tidal heating, or binary evolution are taken into account (e.g. Fregeau et al. 2004;Renzo et al. 2018).
The ejection velocities required for the fastest stars in the HVS Survey exceed 600 km s−1. Thus the ejection velocities demand a Galactic center origin. For stars near Galactic escape velocity, however, there is a finite region of the Milky Way disk where the stars can be ejected at <600 km s−1. The region is set by the Milky Way gravitational potential and the rotation of the Milky Way disk: the ejection velocity minimum is located in the outer disk, at the position where the disk rotation vector points in the direction of ejection (Bromley et al. 2009). Convolved with the power-law distribution of runaway ejection velocities (Portegies Zwart 2000; Perets & Subr 2012; Renzo et al. 2018), the most probable disk runaway origin location is this region of minimum ejection velocity. We mark the minimum disk ejection velocity region for each star in Figure 2 with a red ellipse.
CONSTRAINTS ON ORIGIN
We identify three classes of objects with distinct but overlapping velocity distributions in our sample: 1) Galactic center HVSs, 2) Galactic disk runaways, and 3) Galactic halo stars. Figure 3 and Figure 4 summarize the results. Figure 3 plots v rf versus Galactic radial distance R. The dashed line is the Galactic escape velocity curve. The symbol color indicates the likely origin of each star. Figure 4 groups the origins together and plots them relative to Galactic escape velocity, v rf − v esc . A total of 20 objects have robust constraints.
Galactic Center Hypervelocity Stars
A Galactic center origin is statistically preferred for all of the fastest stars with v rf > +500 km/s (HVS1, HVS4, HVS5, and HVS6). The trajectories currently provide 2σ constraints. The velocity itself provides an additional physical constraint for the unbound stars: the minimum Galactic disk ejection velocity is comparable to the escape velocity from the surface of the stars, a severe challenge to disk ejection. We identify seven probable Galactic center HVS ejections.
Two bound stars, B537 and B598, have trajectories that point directly from the Galactic center, and reject the Galactic disk origin hypothesis at >3σ significance. To better understand these objects, Figure 5 plots the probability contours where these two objects cross the Galactic plane in Cartesian coordinates. The red circles in Figure 5 mark the regions of minimum Galactic disk ejection velocity, excluded at 3σ confidence. The red lines in Figure 5 mark the locus of trajectories with 500 km s −1 Galactic disk ejection velocities. Thus it is possible that B537 and B598 are high speed Galactic disk ejections. A factor of 2 improvement in proper motion error would exclude this possibility for B598. It is also possible that B537 and B598 are halo stars on very radial orbits. High resolution spectroscopy can determine whether these are metal-poor halo stars or main sequence B stars.
Extrapolating these results to the unconstrained half of the sample, we expect that about half of the unbound stars in the HVS Survey are Galactic center ejections. Brown et al. (2014) thus overestimate the Galactic center ejection rate by a factor of two. The revised ejection rate of unbound 2.5-4 M⊙ HVSs from the Galactic center is ∼7 × 10−7 yr−1, in better agreement with theoretical ejection rate calculations for the MBH ejection scenario (Zhang et al. 2013).
Disk Runaways
We estimate the likelihood of Galactic disk origin by averaging the trajectory density over a 5 kpc radius region centered on the position of minimum Galactic disk ejection velocity. This metric can be pessimistic for the nearest bound objects, but it is well-matched to the average distance and proper motion errors of the sample. We identify nine disk runaways with trajectories inconsistent with the Galactic center hypothesis at ≥3σ significance, and statistically consistent with a Galactic disk ejection.
Disk runaways and Galactic center HVSs have comparable frequency at speeds around the Galactic escape velocity (Figure 4). The fastest disk runaways have v_rf − v_esc ≃ +100 km s−1. However, most disk runaways are bound.

[Figure 5. Density of simulated trajectories (magenta region = 1σ, cyan region = 2σ, yellow region = 3σ) where bound HVS candidates B537 and B598 cross the Galactic plane, in Cartesian coordinates. The Galactic Center (black dot) is the origin most consistent with the measurements. Trajectories from the region of minimum Galactic disk ejection velocity (red circle) are excluded at 3σ confidence, however 500 km s−1 Galactic disk ejections (red line) are allowed.]
Interestingly, the three unbound disk runaways are spectroscopically unusual. HVS8 is one of the fastest rotators with v sin i ≃ 300 km s−1 (López-Morales & Bonanos 2008; Brown et al. 2014; Irrgang et al. 2018). HVS7 and HVS17 are chemically peculiar B stars (Przybilla et al. 2008c; Brown et al. 2013). HVS7 is 10-100 times underabundant in He and in CNO relative to the Sun, and 100-10,000 times over-abundant in iron group and rare-earth elements (Przybilla et al. 2008c). It is difficult to say whether abundance patterns are linked to a supernova binary disruption origin, however, because the abundance patterns of HVS7, HVS17, and HD 271791 (Przybilla et al. 2008a) differ significantly.
The frequency of unbound runaways is linked to their ejection rate. Theoretical ejection models predict a power-law distribution of ejection velocities (Portegies Zwart 2000;Perets & Subr 2012;Renzo et al. 2018). In these models, >99% of runaways are launched from the disk with <200 km s −1 speeds. Simulated distributions of runaways in the Milky Way therefore predict a flattened spatial distribution of runaways with a scale height comparable to the thick disk (Bromley et al. 2009). We expect that magnitude-limited surveys of less luminous types of stars than B stars will find many more runaways near the disk.
Halo stars
Halo stars first appear at v_rf < +300 km s−1. There are four bound objects with trajectories that significantly reject both Galactic center and Galactic disk origins, and which cross the disk in the direction opposite Galactic rotation. These stars are likely halo stars, presumably hot blue horizontal branch stars with temperatures and surface gravities similar to main sequence B stars. The number of halo stars is consistent with the number of −300 < v_rf < −275 km s−1 negative velocity outliers in the HVS Survey. At even lower velocities, v_rf < 250 km s−1, halo stars completely dominate the HVS Survey.
CONCLUSIONS
Gaia proper motions enable distinction between true Galactic center HVSs and other high velocity stars. We examine the probable origin for B-type stars from the HVS Survey. Twenty objects have robust constraints. Halo stars dominate the sample at bound speeds, v rf − v esc < −100 km s −1 . We identify nine disk runaways with trajectories that significantly reject the Galactic center hypothesis. The fastest disk runaways have v rf − v esc ≃ +100 km s −1 , but the majority are bound. We identify seven probable Galactic center HVS ejections. The Galactic center ejections dominate the sample at speeds significantly above Galactic escape velocity.
A clean sample of HVSs is important for constraining the Galactic center ejection mechanism. With a sample of 50 HVSs, for example, discrimination between a single and a binary MBH ejection scenario might be possible (Sesana et al. 2007). Different MBH ejection mechanisms predict different spatial distributions of HVSs on the sky (Levin 2006;Abadi et al. 2009;Zhang et al. 2013;Boubert & Evans 2016). These distributions can be tested with larger samples of HVSs.
A well-defined sample of HVSs will also enable measurement of the ejection rate of stars encountering the MBH. Tidal disruption events may be closely related to the encounters that produce HVSs (Bromley et al. 2012).
Gaia end-of-mission proper motion errors should improve by a factor of 3. There are a dozen objects in our sample with 1-2σ trajectory constraints that will have >3σ constraints in only a few years. For the faintest stars, however, Gaia cannot compete with pointed observations. Next generation missions like the proposed Theia mission (Theia Collaboration et al. 2017) are needed to measure HVS proper motions with uncertainties significantly below 0.1 mas yr −1 .
With uncertainties below 0.1 mas yr−1, HVSs become important tools for measuring the Milky Way mass distribution (Gnedin et al. 2005; Yu & Madau 2007). HVSs integrate the gravitational potential from the very center of the Milky Way to its outermost regions. Thus HVSs can complement studies that are possible now, which use halo star streams (Bonaca & Hogg 2018), globular clusters (Posti & Helmi 2018; Watkins et al. 2018), or dwarf galaxies (Fritz et al. 2018) to constrain the Milky Way dark matter halo. Any deviation of an HVS's trajectory from the Galactic center measures the Milky Way's nonspherical mass distribution, independent of any other technique.
Facility: Gaia
This work has made use of data from the European Space Agency mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of NASA's Astrophysics Data System. This research was supported in part by the Smithsonian Institution.
|
2018-08-17T14:03:22.000Z
|
2018-05-10T00:00:00.000
|
{
"year": 2018,
"sha1": "9ec2ba7e9e33104817bfd638287b64ad7eda5f0c",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/aadb8e/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "9ec2ba7e9e33104817bfd638287b64ad7eda5f0c",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
}
|
221552191
|
pes2o/s2orc
|
v3-fos-license
|
Modularity affects the robustness of scale-free model and real-world social networks under betweenness and degree-based node attack
In this paper we investigate how the modularity of model and real-world social networks affects their robustness and the efficacy of node attack (removal) strategies based on node degree (ID) and node betweenness (IB). We build Barabasi–Albert model networks with different modularity using a new ad hoc algorithm that rewires links to form networks with community structure. We traced the network robustness using the largest connected component (LCC). We find that when model networks present absent or low modular structure, the ID strategy is more effective than IB at decreasing the LCC. Conversely, when the model networks present higher modularity, the IB strategy becomes the most effective at fragmenting the LCC. In addition, networks with higher modularity present a signature of a 1st order percolation transition and a decrease of the LCC with one or several abrupt changes when nodes are removed, for both strategies; differently, networks with non-modular structure or low modularity show a 2nd order percolation transition when nodes are removed. Last, we investigated how the modularity of the network structure evaluated by the modularity indicator (Q) affects the network robustness and the efficacy of the attack strategies in 12 real-world social networks. We found that the modularity Q is negatively correlated with the robustness of the real-world social networks for both node attack strategies, especially for the IB strategy (p-value < 0.001). This result indicates that real-world networks with higher modularity (i.e. with stronger community structure) may be more fragile to node attack. The results presented in this paper unveil the role of modularity and community structure for the robustness of networks and may be useful for selecting the best node attack strategies in networks.
Introduction
The study of real-world complex networks has attracted much attention in recent decades because a large number of complex systems in the real world can be considered as complex networks, such as social (Borgatti et al. 2009, Bellingeri et al. 2020), technological (Albert et al. 1999, Faloutsos et al. 1999), biological (Jeong et al. 2000, Barra et al. 2010), and ecological complex systems (Bellingeri and Bodini 2013; Bellingeri and Vincenzi 2013). Many real-world networks show a scale-free structure, making them resilient to random node failure (Cohen et al. 2000), but they can disintegrate quickly when a small proportion of important nodes is removed (Albert et al. 1999). Network robustness, which evaluates the capability of a network to maintain its functioning under such failures or attacks, has drawn extensive attention in recent years (Albert and Barabási 2002; Cohen et al. 2000; Callaway et al. 2000; Iyer et al. 2013; Bellingeri et al. 2015; Bellingeri et al. 2014; Dall'Asta et al. 2006; Nguyen and Nguyen 2018; Wandelt et al. 2018; Bellingeri et al. 2019, 2020). Usually, Monte-Carlo simulation is used to evaluate the network robustness: for random failure, nodes/edges are removed with the same probability (random removal), while for intentional attack, nodes/edges are removed according to different structural properties of the network, and a robustness measure is then computed during the node/edge removal simulation (Albert et al. 2000; Cohen et al. 2000, 2001; Bellingeri et al. 2020; Lekha and Balakrishnan 2020). Identifying the node/edge removal strategy that triggers the greatest amount of damage in the system is also highly important for revealing the links/nodes that act as key players in network functioning, with many practical applications (Bellingeri et al. 2020). For example, understanding how node/edge removal affects real social systems may predict how the abandoning of individuals affects information spread in the social network, thus identifying the "influential spreaders" in the network, such as the most important scholars or influencers (Ahajjam and Badir 2018; Bellingeri et al. 2020). On the other hand, in a social contact network on which a disease can spread, it is critical to understand how node removal through vaccination affects the spread of the disease in order to efficiently prevent an epidemic (Holme 2004; Wang et al. 2015; Bellingeri et al. 2020).
One of the most important measures of network robustness is the size of the largest connected component (LCC), i.e. the largest number of mutually connected nodes in the network (Albert et al. 2000). The LCC gives us a simple interpretation of the system robustness when subjected to node/edge removal, accounting for the largest functioning part of the network. For example, if the Internet is attacked, all nodes (servers) within the LCC can still transfer information mutually, indicating the largest networked structure still active. As another example, in a social contact network, the LCC represents the highest number of individuals that can be affected by a disease spreading (Bellingeri et al. 2019). For this reason, the most efficient node attack strategy is the one able to induce the fastest LCC decrease (Figure 1), and numerical simulations have shown that attack strategies based on node centrality measures can effectively identify the most important nodes to reduce the size of the LCC (Albert et al. 2000, Cohen et al. 2000, 2001, Callaway et al. 2000; Iyer et al. 2013; Bellingeri et al. 2018; Bellingeri et al. 2014; Nguyen and Nguyen 2018; Wandelt et al. 2018).
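A minimal sketch of such an attack simulation on a model network, recording the LCC size after each removal for initial-degree (ID) and initial-betweenness (IB) rankings (non-recalculated rankings; parameters are illustrative):

```python
import networkx as nx

def lcc_under_attack(g, strategy="degree"):
    """Remove nodes ranked by initial degree (ID) or initial betweenness (IB)
    and record the largest-connected-component size after each removal
    (rankings are not recalculated during the attack in this sketch)."""
    if strategy == "degree":
        nodes = [n for n, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)]
    else:
        bc = nx.betweenness_centrality(g)
        nodes = sorted(bc, key=bc.get, reverse=True)
    h = g.copy()
    sizes = [len(max(nx.connected_components(h), key=len))]
    for n in nodes:
        h.remove_node(n)
        if h.number_of_nodes() == 0:
            sizes.append(0)
            break
        sizes.append(len(max(nx.connected_components(h), key=len)))
    return sizes

# Example on a Barabasi-Albert model network (parameters are illustrative)
g = nx.barabasi_albert_graph(1000, 3, seed=1)
lcc_id, lcc_ib = lcc_under_attack(g, "degree"), lcc_under_attack(g, "betweenness")
```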
Specifically, overall findings show that node attack strategies based on betweenness centrality are highly efficient in dismantling the LCC (Iyer et al. 2013; Bellingeri et al. 2014; Sun et al. 2017; Nguyen and Nguyen 2018; Wandelt et al. 2018), especially for real-world networks.
However, the difference in effectiveness varies considerably among real-world networks (Iyer et al. 2013; Bellingeri et al. 2014; Wandelt et al. 2020). The mechanism that gives rise to an abrupt decrease of the LCC is studied using percolation theory and is ascribed to a first-order percolation phase transition (Achlioptas et al. 2009; Riordan et al. 2011; Cho et al. 2013). However, whether such an abrupt decrease occurs for a given real network under attack remains unclear. This question is of great importance from two perspectives: on one hand, if we want to break a network by node removal, we should look for strategies that remove the nodes able to cause such an abrupt and fast decrease in the LCC size.
On the other hand, if we want to protect a network, we must design it so that such an abrupt decrease does not happen. Since network robustness must depend on topology, several studies have investigated the relationship between topological metrics and the robustness of a network. Iyer et al. (2013) studied the robustness of model networks with power-law and exponential degree distributions and with various levels of node clustering coefficient (or node transitivity). They found that increasing the clustering coefficient of the network nodes decreases the robustness to node attack, with the most dramatic effect displayed for node attacks based on degree and betweenness. The authors also suggested that, to increase robustness, it is necessary to design topological structures with a clustering coefficient as low as is consistent with the functional requirements of the network. Their simulations on real-world networks also show that the difference in effectiveness between strategies varies across networks. Nguyen and Trang (2019) studied Facebook social networks and found that networks with higher modularity Q have lower robustness to node removal. The modularity indicator Q introduced by Newman and Girvan (2004) measures how well a network breaks into communities (a community or module in a network is a well-connected group of nodes with sparser connections to the nodes outside the group). Networks with high modularity Q have dense connections (more links) among nodes within modules but sparse connections (few links) among nodes from different modules. Therefore, the modularity Q is higher in networks with a marked community structure, which are called modular networks (Girvan and Newman 2002).
Using percolation theory, Dong et al. (2018) pointed out that in a modular network a small fraction of nodes that connect nodes of different modules, called 'interconnected nodes', is critical for the robustness of the network. By analyzing the LCC size during the node removal process while varying the fraction of interconnected nodes (r) in the network, they found that the LCC scales with r as a power law with universal criticality. This result suggests that a modular network with a higher fraction of interconnected nodes (and therefore lower modularity Q, because the fraction of links between nodes in the same modules is lower) will show a smaller LCC decrease during node removal and consequently higher network robustness. Shai et al. (2015) developed both analytical and simulation analyses to evaluate the robustness of random and scale-free model networks with modular structure.
They simulated the attack of interconnected nodes, i.e. nodes with neighbors in other modules, and analyzed the critical node occupation probability pc, i.e. the fraction of remaining nodes p at which a large decrease in the LCC occurs, as a function of the number of modules m and of the ratio α between the probabilities for an intra- and an inter-module link.
They found that the percolation phase transition falls into two regimes, depending on the number of modules m for a fixed α:
- For m < m*, the network presents very high modularity and collapses abruptly under node removal (i.e. a 1st-order phase transition), as a result of the modules becoming disconnected from each other while their internal structure remains almost unaffected.
- In contrast, for m > m*, the network presents low modularity and the interconnected nodes play an important role in keeping the whole network connected when nodes are removed. Therefore, node attack causes lower damage, breaking the entire system continuously without a sharp LCC decrease (i.e. a 2nd-order phase transition). Put another way, m* represents the threshold above which the modular structure of the network vanishes and the network returns to behaving as a non-modular network.
In this work, we analyze how the modularity of scale-free model networks and real-world social networks affects their robustness and the efficacy of node attack strategies. Using model networks, we vary the level of modularity by changing the ratio of intra-module links over inter-module links (κ). We find that the attack strategy based on node betweenness, reported to be the most effective strategy to break the LCC of real-world networks (Wandelt et al. 2018; Nguyen et al. 2019), is the best strategy to disrupt the LCC only when κ is higher than a given value κc, i.e. when the network has high modularity. Below κc, when the network has low modularity Q or even no modular structure, the attack strategy based on node degree is more effective. In addition, the type of percolation phase transition undergone by the network when nodes are removed changes from a continuous 2nd-order transition (in which the LCC shows no abrupt decrease) to an abrupt 1st-order transition (with an abrupt LCC decrease) as κ increases. We also examine the effect of the network density (i.e. the average number of links per node) and of the number of modules on network robustness, and find that these parameters affect the robustness but not the type of percolation phase transition (1st or 2nd order), which depends only on κ. Finally, we study these effects in a variety of real social networks and find that real social networks with higher modularity Q are less robust when subjected to the attack strategy based on node betweenness. In other words, the efficacy of the betweenness-based attack strategy is higher for real social networks showing higher modularity Q.
Methods
A network can be represented as a graph G = (V, E), where V = {1, 2, ..., N} is the set of N nodes (vertices) and E = {eij | i, j ∈ V, i ≠ j} is the set of links (edges). Networks can be undirected, when the links have no specified direction, or directed, when links present directionality.
Networks are unweighted when only the presence-absence of the links is considered, or weighted, when some interaction value, the link weight, is associated with each link.
An undirected and unweighted network can be described by an N×N adjacency matrix A, whose element ai,j = 1 when there is a link between nodes i and j and ai,j = 0 otherwise. In this paper, only undirected and unweighted networks are considered.
Generation of model scale-free network
Model scale-free networks with N = 10000 nodes are generated using the well-known Barabási-Albert (BA) model (Barabási and Albert 1999). The BA model starts from a small clique (a completely connected graph) of N0 nodes. At each successive time step, a new node is added and connected to M0 different existing nodes (M0 < N0), with the probability of connecting to an existing node proportional to its degree (i.e. the number of links attached to it). The resulting network has a power-law degree distribution P(k) ~ k^(-γ) with degree exponent γ = 3 (Barabási and Albert 1999). We choose the average node degree <k> between 2 and 32.
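As an illustration (not part of the original study), a BA network can be generated with networkx; note that the generator attaches each new node to m_attach existing nodes, so the average degree is roughly 2·m_attach (the parameter name m_attach is ours).

```python
import networkx as nx

# Hypothetical parameters: N nodes, each new node attaches to m_attach existing nodes,
# giving an average degree <k> of about 2 * m_attach (here <k> ~ 4, as in the simulations).
N = 10000
m_attach = 2

G = nx.barabasi_albert_graph(N, m_attach, seed=42)

print("nodes:", G.number_of_nodes())
print("average degree <k>:", 2 * G.number_of_edges() / G.number_of_nodes())
```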
Our model to generate modular scale-free networks
From the BA network we generate modular networks using a new ad hoc algorithm that rewires links as follows (a minimal code sketch of this procedure is given after the list):
- Each node is assigned randomly to a module ci = {1, 2, ..., m}, where m is the total number of modules. The number of nodes in each module is approximately N/m.
- Each link connecting two nodes i and j of different modules, ci ≠ cj (inter-module link), is rewired with probability w (and kept unchanged with probability 1 − w) by the following procedure:
  o We randomly select one of the two end nodes of the link, say i, and find another node l within the same module as node i (cl = ci). We then detach the inter-module link between nodes i and j and create a new intra-module link between nodes i and l. The node l is selected with probability proportional to its degree (nodes with higher degree in module ci have a higher probability of being selected).
  o If some nodes become isolated in the network after the rewiring, they are removed.
However, we find that only a negligible proportion of nodes can become isolated after the rewiring.
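The following sketch illustrates one way to implement the rewiring step described above; the function and variable names (rewire_to_modules, w, m) are ours, and the degree-proportional selection uses the degrees of the original BA network, a simplifying assumption since the text does not state whether degrees are updated during rewiring.

```python
import random
import networkx as nx

def rewire_to_modules(G, m, w, seed=0):
    """Assign nodes to m modules and turn each inter-module link, with probability w,
    into an intra-module link whose new endpoint is chosen with degree-proportional probability."""
    rng = random.Random(seed)
    module = {v: rng.randrange(m) for v in G.nodes()}   # random module assignment
    deg = dict(G.degree())                              # original degrees used as selection weights
    H = G.copy()
    for (i, j) in list(G.edges()):
        if module[i] == module[j] or rng.random() >= w:
            continue                                    # keep intra-module links and a (1-w) share of inter-module links
        keep = i if rng.random() < 0.5 else j           # endpoint that keeps the link
        drop = j if keep == i else i
        candidates = [v for v in H.nodes()              # O(N) scan; acceptable for a sketch
                      if module[v] == module[keep] and v != keep and not H.has_edge(keep, v)]
        if not candidates:
            continue
        new_end = rng.choices(candidates, weights=[deg[v] for v in candidates], k=1)[0]
        H.remove_edge(keep, drop)
        H.add_edge(keep, new_end)
    H.remove_nodes_from([v for v in H.nodes() if H.degree(v) == 0])  # drop isolated nodes
    return H, module
```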
We show in Appendix A that, as long as N is high enough, this rewiring procedure statistically preserves the node degree distribution of the BA model (scale-free, with degree exponent γ = 3). Thus, by changing the probability w we can change the ratio κ between the number of intra-module links Lintra (links that connect two nodes of the same module) and inter-module links Linter (links that connect two nodes of different modules), thus varying the community structure of the network. The relation between κ and w can be derived as follows:
- Since nodes are assigned to the m modules at random, the expected numbers of inter-module and intra-module links before the rewiring process are Linter(0) = L(m − 1)/m and Lintra(0) = L/m, where L = N<k>/2 is the total number of links.
- Since each inter-module link is turned into an intra-module link with probability w, the ratio between the number of intra-module links (Lintra) and inter-module links (Linter) becomes
κ = Lintra/Linter = [1 + w(m − 1)] / [(1 − w)(m − 1)],
which is a monotone function of w when m > 1.
- Following Shai et al. (2015), we also derive α, the ratio between the probability pintra that a given link is an intra-module link and the probability pinter that a given link is an inter-module link; α is likewise a monotone function of w when m > 1.
The monotone change of κ and α as a function of w is confirmed by simulation results, which are shown in Appendix B.
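As a quick empirical check (our own illustration, not part of the original analysis), κ can be measured directly from a network and its module assignment by counting intra- and inter-module links:

```python
def intra_inter_ratio(G, module):
    """Return kappa = L_intra / L_inter for a graph G and a dict mapping node -> module id."""
    intra = sum(1 for u, v in G.edges() if module[u] == module[v])
    inter = G.number_of_edges() - intra
    return intra / inter if inter > 0 else float("inf")

# Example usage with the hypothetical rewiring sketch above:
# H, module = rewire_to_modules(G, m=5, w=0.95)
# print("kappa:", intra_inter_ratio(H, module))
```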
Thus, by increasing w we increase the modularity of the network, i.e. we make its community structure more marked.

The node attack strategies

The networks generated above are exposed to two node attack (removal) simulation processes (node attack strategies), in which a proportion p of the nodes with the lowest centrality measure is kept and a proportion q = 1 − p of the nodes with the highest centrality measure is removed together with their links:
- The first attack strategy removes nodes according to their degree, i.e. the number of links attached to them, as centrality measure, and is called the initial degree (ID) attack strategy (Albert and Barabási 2002; Bellingeri et al. 2014; Wandelt et al. 2018).
- The second strategy uses a macro-scale network metric, the node betweenness, which is the number of times a node appears in the shortest paths among all node pairs in the network (Brandes 2001). This method is commonly used to break real-world networks and is called the initial betweenness (IB) attack strategy (Bellingeri et al. 2014; Wandelt et al. 2018).
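A minimal sketch of such an attack simulation, assuming networkx and using the initial (pre-attack) centralities as the removal order; function names are ours.

```python
import networkx as nx

def lcc_size(G):
    """Size of the largest connected component (0 for an empty graph)."""
    return max((len(c) for c in nx.connected_components(G)), default=0)

def attack(G, strategy="ID", steps=100):
    """Remove nodes in decreasing order of initial degree (ID) or initial betweenness (IB)
    and return the normalized 1st LCC size after each removal batch."""
    if strategy == "ID":
        centrality = dict(G.degree())
    else:  # "IB"
        centrality = nx.betweenness_centrality(G)
    order = sorted(G.nodes(), key=lambda v: centrality[v], reverse=True)
    N = G.number_of_nodes()
    H = G.copy()
    sizes = []
    batch = max(1, N // steps)
    for start in range(0, N, batch):
        H.remove_nodes_from(order[start:start + batch])
        sizes.append(lcc_size(H) / N)
    return sizes
```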
The network robustness measures
To measure the robustness of the network under node attack we trace the size of the first largest connected component (1st LCC) and of the second largest connected component (2nd LCC) as a function of p. Further, for each attack simulation, we compute a single value, the network robustness (R), as done in Bellingeri et al. (2019a). R is the average of the normalized size of the 1st LCC (normalized by the initial number of nodes N) along the removal process. R can range between two theoretical extremes, R ≃ 0 (absolutely fragile network) and R ≃ 1 (absolutely robust network).
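Given the LCC trace returned by the attack sketch above, R can be computed as the average of the normalized 1st LCC sizes over the removal process (a sketch under the same assumptions):

```python
def robustness_R(lcc_trace):
    """Average normalized 1st LCC size over the removal process (values already divided by N)."""
    return sum(lcc_trace) / len(lcc_trace)

# Example usage with the hypothetical attack() sketch above:
# R_ID = robustness_R(attack(G, strategy="ID"))
# R_IB = robustness_R(attack(G, strategy="IB"))
```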
In addition, we identify the critical value of the occupation probability pc as the largest value of p at which the LCC shows an abrupt decrease, as shown in Figure 1B. In case no abrupt decrease is found (Figure 1A), we compute pc using the "Molloy-Reed" criterion (Callaway et al. 2000; Cohen et al. 2000), which states that the network loses its overall connectivity when each node in the network has less than two links on average; this translates into the mathematical condition <k²>/<k> < 2, where <k> is the average node degree. Thus, the higher R and the lower pc, the more robust is the network under node attack. As a consequence, when comparing the efficacy of the node attack strategies, the higher R and the lower pc, the lower is the efficacy of the strategy in disrupting the LCC. We denote pc ID and pc IB the critical occupation probabilities against the ID and IB node attack strategies, respectively, and RID and RIB the corresponding network robustness values.

To characterize the networks we use the following structural indicators (a short sketch computing them with networkx is given after this list):
- Node degree: the number of links attached to a node (Boccaletti et al. 2006). The degree of node i is given by ki = Σj aij, where aij = 1 if there is a link connecting nodes i and j and 0 otherwise, and the sum runs over all N nodes in the network.
- Modularity: the modularity indicator Q measures how modular a given division of a network into sub-networks (modules or communities) is:
Q = (1/2L) Σij [aij − ki kj/(2L)] δ(ci, cj),
where L is the number of links, aij is the element of the adjacency matrix A in row i and column j, ki and kj are the degrees of nodes i and j, ci and cj are their modules (communities), the sum runs over all pairs of nodes i and j, and δ(x, y) is 1 if x = y and 0 otherwise (Clauset et al. 2004).
- LCC: the largest connected component (also called the 'giant cluster') is the maximum number of connected nodes in the network (Boccaletti et al. 2006; Bellingeri et al. 2020). Considering all the network clusters, i.e. the sub-networks of connected nodes, the LCC can be defined as
LCC = maxj Sj,
where Sj is the size (number of nodes) of the j-th cluster.
- Diameter: the diameter of the network (D) is the longest shortest-path length over all pairs of nodes in the network, also called the longest geodesic (Newman 2013).
- Transitivity: the transitivity (C) is based on triplets of nodes. A triplet is three nodes connected by either two (open triplet) or three (closed triplet) undirected edges.
The transitivity is the number of closed triplets (or 3 × the number of triangles) over the total number of triplets (both open and closed):
C = λclosed / λtotal,
where λclosed is the number of closed triplets and λtotal is the number of all possible triplets in the network. Transitivity represents the overall probability for the network to have adjacent nodes interconnected, thus making modules more tightly connected (Newman et al. 2002).
- Link density: the link density (Density) is the number of links divided by the total number of possible links, L / [N(N − 1)/2] (Boccaletti et al. 2006).
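A brief networkx-based sketch of how these indicators could be computed; the community detection algorithm is our illustrative choice, since the paper does not state which method was used to partition the real networks.

```python
import networkx as nx
from networkx.algorithms import community

def structural_indicators(G):
    """Compute the structural indicators of Table 1 for an undirected, unweighted graph."""
    lcc_nodes = max(nx.connected_components(G), key=len)
    lcc = G.subgraph(lcc_nodes)
    comms = community.greedy_modularity_communities(G)   # illustrative choice of partition
    degrees = [d for _, d in G.degree()]
    return {
        "N": G.number_of_nodes(),
        "L": G.number_of_edges(),
        "LCC": len(lcc_nodes),
        "LCC_%": 100 * len(lcc_nodes) / G.number_of_nodes(),
        "avg_degree": sum(degrees) / len(degrees),
        "diameter": nx.diameter(lcc),                    # computed on the LCC
        "transitivity": nx.transitivity(G),
        "density": nx.density(G),
        "modularity_Q": community.modularity(G, comms),
        # Molloy-Reed ratio <k^2>/<k>; global connectivity is lost when it drops below 2
        "molloy_reed": sum(d * d for d in degrees) / sum(degrees),
    }
```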
Results

Robustness of non-modular scale-free network
We simulate a scale-free network of size N = 10000 nodes and average degree <k> = 4 and run attack simulations using the ID and IB strategies. The 1st LCC decreases continuously under both strategies and the network is completely broken down (i.e. the LCC shrinks to quasi zero) at a critical occupation probability pc (0.62 and 0.56 for ID and IB, respectively), as seen in Figure 3A. At this transition, the 2nd LCC reaches its maximum value, as shown in Figure 3B. Such phase transitions are called 'continuous' or 'second-order' phase transitions and denote a robust network (see Mnyukh 2013). Interestingly, while overall findings have shown that node attack strategies based on betweenness centrality are highly efficient for most real-world networks (Bellingeri et al. 2014; Iyer et al. 2013; Nguyen and Nguyen 2018; Wandelt et al. 2018), our results point to a different conclusion. For scale-free networks without modular structure (lower values of the parameter w), the betweenness-based strategy IB does not perform better than the degree-based strategy ID. For scale-free networks with significant modular structure (higher values of the parameter w), the betweenness-based strategy IB clearly performs better than the degree-based strategy ID. It is therefore arguable that the presence of a modular structure in networks is an important factor enhancing the efficacy of the betweenness-based attack strategy in breaking the 1st LCC, as shown in the next sub-section.
Robustness of modular scale-free network
We first present the robustness of the network for different modularity values, obtained by varying the re-wiring ratio w; we then discuss the robustness of networks with different average node degree <k> and number of modules m.
Robustness as a function of the modularity
We simulate a scale-free network of size N = 10000 nodes with m = 5 modules and average degree <k> = 4, and apply the rewiring method with increasing w. At first, when w is small (and the network presents low modularity), we find that the network is resilient and pc remains approximately equal to the value obtained for the original non-modular network, for both the ID and IB attack strategies (Figure 4). At this degree of modularity, the network is still homogeneous enough, with a high number of inter-module links. In consequence, when the attack strategies remove nodes, the 1st LCC continuously becomes smaller but still holds the connections between modules, denoting higher network robustness against node attack.
Only when w is higher than 0.95 does the network become fragile, with the 1st LCC abruptly decreasing at some value of p, as seen in Figure 4A and B when the network is attacked by the IB and ID strategies, respectively. This value of w = 0.95 corresponds to a ratio between the number of intra-module links and inter-module links of κc = 23.8 (for m = 5). At this point the connections between modules are sparse enough that the removal of critical nodes may break down the global connectivity among modules, even though the modules themselves are still relatively well connected. We call pc the largest value of p with an abrupt decrease of the 1st LCC, as proposed by Shai et al. (2015), and show its relationship with w in Figure 5A. This transition corresponds to a first-order phase transition in which local structures are separated from the 1st LCC (denoting lower network robustness). As a result, the size of the 2nd LCC abruptly increases at pc and gradually decreases afterwards (see Figure 4C and D). As shown in Figure 5A, pc IB increases faster than pc ID and, when w is higher than 0.98, pc IB becomes higher than pc ID, showing that the network becomes more vulnerable to the IB strategy than to the ID strategy. A similar behavior is confirmed by the robustness measure R (Figure 5B).
Robustness as a function of network density
Next, we examine the effect of the link density (i.e. the average number of links per node) by simulating scale-free networks of size N = 10000 nodes and m = 5 modules, with average node degree <k> varying from 2 to 16, attacked by the betweenness-based strategy (IB). We find that pc decreases as the link density increases (Figure 6): the networks become more robust when nodes have more links. However, the transition type depends only on the rewiring ratio w and is relatively stable with respect to changes in the average degree <k> (Figures 6 and 7).
In other words, the ratio of the probability of inter-module links over the probability of intra-module links (which is a function of w) is the critical factor determining the type of the phase transition.

Robustness as a function of the number of modules

Here, we generate scale-free networks of size N = 10000 nodes and average degree <k> = 4 with the number of modules m varying from 2 to 20. We run node attack simulations with the betweenness-based strategy (IB) only, finding that pc sharply decreases below w = 0.98 regardless of the number of modules in the network (Figure 8). In other words, the network robustness (lower pc) sharply increases below w = 0.98, that is, when the network presents lower modularity, reaching the maximum robustness for the non-modular network. For this reason, the transition type is insensitive to the number of modules m, similarly to the effect of network density. Moreover, we find that the critical occupation probability pc slightly increases with m, suggesting that the model networks become slightly more fragile when they have more modules (Figure 8).

Robustness and structural properties in model and real-world networks

To verify, also for real-world networks, the relationship between modularity, node degree, and the efficacy of the attack strategies shown above for model networks, we investigate 12 real-world social networks (Table 1). We fit linear models of the robustness RIB against the modularity Q and the average node degree <k>. In Figure 9A we show the linear model of RIB with respect to the modularity Q for our model modular networks generated with different w from a BA network with N = 10000, <k> = 4 and m = 5. We find a significant trend: RIB decreases when Q increases (p-value = 0.01). Very interestingly, in our real social networks dataset we find a similar RIB decrease with modularity Q (p-value < 0.001). Remarkably, we find an abrupt transition in the model networks, i.e. when Q is very high (about 0.8) the network robustness drastically decreases (Figure 9A). We do not find this effect in real-world social networks (Figure 9C). The absence of an abrupt RIB decrease in real-world social networks may be due to the fact that real networks vary in link density and other structural properties (for example transitivity, assortativity, number of modules, etc.) that may affect the network response to IB node attack. For this reason, the variability of real-world social network structure, with many structural factors affecting the network robustness, may prevent the abrupt RIB decrease as a function of the modularity Q that we observe in model networks.
Further, we find a clear RIB increase with increasing average node degree in our model networks (p-value < 0.001, Figure 9B). This is in agreement with past analyses showing that the higher the number of links per node, the slower the network fragmentation under node removal (Albert and Barabási 2002; Iyer et al. 2013). Differently, we do not find a significant relationship between RIB and <k> in our real social networks dataset (p-value = 0.16, Figure 9D). Also in this case, the variability of real-world network structure, with many structural factors affecting robustness, may hide the emergence of a clear relationship between the linkage density, measured by the average node degree <k>, and the robustness RIB of the network against IB node attack.
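A sketch of how such a linear fit could be carried out with scipy; the arrays shown are placeholders for the measured modularity and robustness values, not data from the paper.

```python
from scipy import stats

# Placeholder arrays: modularity Q and robustness R_IB measured on each network.
Q_values = [0.31, 0.45, 0.52, 0.60, 0.68, 0.74]
R_IB_values = [0.42, 0.38, 0.35, 0.30, 0.24, 0.18]

fit = stats.linregress(Q_values, R_IB_values)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.3f}, p-value={fit.pvalue:.4f}")
```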
Discussion and Conclusion
In this work we study the robustness of model scale-free networks and real-world social networks with different modularity. The model networks are generated from the BA model with a novel method for tuning their modular structure. Using Monte-Carlo simulation we apply two node attack strategies, IB and ID, based on node betweenness and degree, respectively.
With both attack strategies, we find that two types of percolation phase transition take place. The abrupt 1st-order phase transition happens when the model network has high modularity, represented by κ > κc with κc ≈ 23.8 for both the IB and ID attack strategies. At and above this critical point, the model network is also more fragile under the betweenness-based attack, RIB < RID, as found in many real-world complex networks. When κ < κc, or when the model network has no modular structure, the network experiences a continuous 2nd-order phase transition under both node attack strategies. Interestingly, under this regime the network is more robust against the betweenness-based attack strategy IB than against the degree-based attack strategy ID, contrary to most results on real-world networks. In addition, our work shows that the ratio κ is the main factor determining the type of phase transition: small κ corresponds to a continuous 2nd-order phase transition, while high κ corresponds to an abrupt 1st-order phase transition.
Further, we investigate how modularity affects the robustness of the system against node removal in 12 real-world social networks and find an RIB decrease with modularity Q (p-value < 0.001) similar to the one we observe in model networks when varying the modularity. This result indicates that networks with higher modularity (i.e. with a stronger community structure) may be more fragile to betweenness-based node attack. At the same time, it shows that the betweenness-based node attack (IB) is highly effective when attacking networks with a marked community structure (higher modularity Q). Differently, when the network shows very low modularity (or no modularity), the degree-based node attack ID may perform better than IB.
Appendix A
- The total number of rewired links in the network is w(m − 1)L/m = w(m − 1)N<k>/(2m).
- The total number of rewired links that will be connected to nodes within the module of a given node is w(m − 1)N<k>/(2m²).
- The probability that the node is selected is proportional to the ratio of its degree to the total degree of all nodes in the module (according to our method) and is ki/(N<k>/m).
- The expected number of rewired links for which this node can be selected is therefore w(m − 1)ki/(2m), equal to the expected number of links that this node loses.
Figure 1: Schematic behavior of the size of the LCC as a function of the proportion of remaining nodes p during a node removal process. A) The LCC shows a continuous 2nd-order decrease without any abrupt drop; B) the LCC undergoes a first-order percolation phase transition (Achlioptas et al. 2009), showing an abrupt decrease at p = pc. The node attack strategy in B is able to dismantle the LCC more efficiently.
After the rewiring process, the expected number of inter-module links becomes (1 − w)(m − 1)L/m, while the expected number of intra-module links becomes L/m + w(m − 1)L/m.
Figure 2: Visualization of the model for generating scale-free modular networks. The modularity of the network increases from A to D. (A) Initial non-modular scale-free BA network of size N = 10000 with average degree <k> = 8. (B)-(D) Illustration of the increasing effect of w on the modularity of the obtained network, divided into m = 5 modules.
Figure 3: Simulation results for the non-modular network with N = 10000 nodes and average degree <k> = 4. A) Size of the first largest connected component (1st LCC) and B) of the second largest connected component (2nd LCC) as a function of the occupation probability p.
Figure 4: Simulation results for modular networks generated from a BA network with N = 10000, m = 5 and <k> = 4 for different values of the rewiring probability w. Size of the first largest connected component (1st LCC) and of the second largest connected component (2nd LCC) as a function of the occupation probability p when attacked by IB (A and C, respectively) and ID (B and D, respectively).
Figure 5: Simulation results for modular networks generated from a BA network with N = 10000, m = 5 and <k> = 4. (a) The critical occupation probability pc and (b) the single-value network robustness R as a function of the re-wiring ratio w. The trend is clear: when w is higher than 0.98, pc IB becomes higher than pc ID and RIB becomes smaller than RID, showing that the network becomes more vulnerable to the betweenness-based strategy than to the degree-based strategy. For illustration we present images of two modular networks subjected to node removal: with w = 0.6, attacked by the ID (c) and IB (d) strategies; and with w = 0.998, attacked by the ID (e) and IB (f) strategies. In all simulations the occupation probability p is 0.6.
Figure 6: Simulation results obtained with the IB attack strategy for modular networks generated from a BA network with N = 10000 and m = 5, for different values of the rewiring probability w and different average node degrees <k>. Size of the 1st LCC (A, C, and E) and of the 2nd LCC (B, D, and F) as a function of the occupation probability p.
Figure 7:
Figure 8: The critical occupation probability pc as a function of the re-wiring parameter w for different numbers of modules m in the network. The result is obtained with the IB attack strategy for modular networks generated from a BA network with N = 10000 and <k> = 4.
In consequence, the expected number of links of each node after the rewiring process is equal to its initial degree, and the network degree distribution remains unchanged.

B. Graphs of κ and α as functions of the rewiring probability w and the number of modules m
Comparison of analytical and simulation results for the modular scale-free network: A) κ as a function of w and m, and B) α as a function of w and m. Both measures show the goodness of the mathematical derivation in the Methods section.
The real-world social networks include networks of Facebook pages, where nodes represent pages of different topics - TV Shows, Politicians, Government, Public Figures, Athletes, Companies, News sites and Artists - and links are mutual likes between them; the Facebook pages data are collected from https://snap.stanford.edu, prepared by Rozemberczki et al. (2019). Besides, we use two financial networks where nodes represent US SP500 stocks and links are computed from the correlation matrix using a threshold method (see Nguyen et al. 2019 for more detail).
Table 1: Structural statistics of the real-world social networks: number of nodes (N), number of links (L), size of the LCC, size of the LCC as a percentage of the total number of network nodes, average node degree <k>, diameter, transitivity, edge density, and modularity Q.